User Experience Design – Voice Activation
User experience design has evolved greatly as an industry-standard practice in software and product development since the late 1970s. Today, it is an understatement to say it has become one of the most important and talked-about practices within corporate, startup and creative company structures. UX in product development, software, mobile and now wearables is, to a large degree, only just getting started. As a practice, it is set to explode. There has never been a better time to be a product designer than right now.
With the emphasis now being placed more and more on UX in product development, a gap is developing within the practice that is not being discussed or considered in a product's lifecycle as much as it should be: voice control (sound in UX) in mobile and wearable devices.
Recent products that integrate user voice control include Apple's Siri, Microsoft's Cortana and, of course, Google Glass. Wearable products (fitness bracelets, synced watches and face-mounted devices such as Glass) are quickly becoming mainstream consumer communication devices. These devices attempt to make our lives less cluttered and more seamless in completing tasks such as sending messages and emails, taking a photo or tracking heart rate. Wearable user interactions will focus more and more on voice-activated tasks, supplemented by intuitive 'swipe' or 'tap' gestures. Growth in the voice control (sound UX) activation paradigm is set to gain momentum fast, and designers need to bring these new paradigms into their daily thinking.
Common user interaction flows on mobile and tablet devices (sending a message, searching the web, controlling maps, shooting video) will soon be more commonly driven by voice activation. With this in mind, many variables come to the fore for designers. These everyday interactions will need to be rethought for voice control: user environments, noise levels and the length of interactions will all need consideration. Interactions will need to be snappier and quicker, leaving no time for user error, and must function correctly on the first request. On wearable devices, they will need to be stripped down to bare-minimum microinteractions, with as little room for error as possible.
Some other thoughts around these growing paradigms also set a theme of concern. In the next 8–12 years, do you really want to listen to individuals shout at their devices while walking in the street or queueing in a Starbucks line? I know I don't. We may start to look a little crazy. If voice control paradigms in wearable device interactions are to grow, the UX / interaction design process within this vertical has many exciting and major discovery paths ahead of it.
Long live the overweight brick we haul daily, and let's be careful not to create any more Glasshole types.