
Voice User Interface in Autonomous Vehicles

Since the invention of computers, human-computer interaction has gone through various stages, from keyboard-and-mouse to touch screens, to name a few. The user interface has likewise undergone tremendous changes in recent years: voice interfaces are gradually replacing the graphical user interface and quickly becoming a common part of in-vehicle experiences.

Products that use voice as the primary interface are becoming more popular by the day, and the number of users continues to grow.

Voice input as a form of human-computer interaction is intuitive for users. Users are not limited to specific grammatical rules when they interact with the system; they can phrase their input in many different ways, just as they would in a conversation with another person.
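To make this concrete, here is a minimal sketch in Python showing several different phrasings mapping to a single intent. The keyword set and matching logic are illustrative assumptions, not a real natural-language engine:

```python
# Minimal sketch: several different phrasings map to one intent.
# The keyword set and intent test are illustrative assumptions.

WEATHER_KEYWORDS = {"weather", "rain", "sunny", "umbrella", "forecast"}

def matches_weather_intent(utterance: str) -> bool:
    """Return True if any weather-related keyword appears."""
    words = set(utterance.lower().replace("?", "").split())
    return bool(words & WEATHER_KEYWORDS)

for phrase in [
    "What's the weather like today?",
    "Is it going to rain?",
    "Do I need an umbrella?",
]:
    print(phrase, "->", matches_weather_intent(phrase))
```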

This case study explores the use of voice interface in the context of autonomous driving.


Benefits of Voice User Interface:

○ Fewer screens → lower interaction cost
○ Frees up users' hands
○ More emotional and personalized
○ Accessibility: helpful for people with impaired vision


At the core of VUI lies people’s language ability.


The key takeaway is that we should not separate speech from UI design. Just as at a live music show, all five of our senses work together.
 

When and where to apply GUI or VUI depends on the context. Some information is easier to process when we see it; in other cases, voice is more suitable.

 

Here are a few examples:
GUI: long lists of options; charts with large amounts of data; product information and product comparisons.
VUI: simple commands and user instructions; warnings and notifications.

 



At the end of the day, designers are tasked with bridging technology and users. We design products to meet users' needs, and this user-centric perspective does not change in the design of VUI.

From a technical standpoint, speech recognition technology converts the user's speech into text. The computer then processes and understands the text through segmentation and parsing, triggers the corresponding actions, and sends feedback back to the user.
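To make this flow concrete, here is a minimal sketch of the pipeline in Python. The function names and the keyword-based parsing are illustrative assumptions rather than a real speech stack:

```python
# Minimal sketch of the pipeline described above: speech -> text ->
# parsing -> action -> feedback. Function names and keyword matching
# are illustrative assumptions, not a real speech-recognition API.

def transcribe(audio: bytes) -> str:
    """Speech-to-text step (stubbed for illustration)."""
    return "what is the weather like today"

def parse_intent(text: str) -> str:
    """Segmentation/parsing step: map the text to an intent label."""
    tokens = text.lower().split()
    if "weather" in tokens:
        return "ask_weather"
    if "navigate" in tokens:
        return "set_destination"
    return "unknown"

def execute(intent: str) -> str:
    """Trigger an action and produce the feedback to speak back."""
    responses = {
        "ask_weather": "It is sunny and 22 degrees outside.",
        "set_destination": "Okay, starting navigation.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return responses[intent]

print(execute(parse_intent(transcribe(b""))))
# -> "It is sunny and 22 degrees outside."
```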

Within this technical framework, designers should analyze user intent and design the conversational experience through scripts. The challenge is that the association between users' language and their intent is not always straightforward.

 


How do we analyze users' intentions in VUI?


Conversational Structure of Natural Language




Users will only feel that the computer understands them when the feedback they receive meets their expectations.

A complete conversational structure in natural language has a 'start module' and an 'end module', with topic nodes in between.
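One way to make this structure tangible in a prototype is to model the conversation as a small graph: fixed start and end modules with topic nodes wired between them. This is a minimal sketch; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch: a conversation modeled as a start module and
# an end module with topic nodes in between. Names are assumptions.

@dataclass
class Node:
    name: str                      # e.g. "start", "weather", "end"
    prompt: str                    # what the system says at this node
    next_nodes: List["Node"] = field(default_factory=list)

start   = Node("start",   "Hi, how can I help?")
weather = Node("weather", "It looks sunny today.")
travel  = Node("travel",  "Traffic is light on your route.")
end     = Node("end",     "Anything else?")

# Wire the modules: start -> weather -> travel -> end
start.next_nodes   = [weather]
weather.next_nodes = [travel, end]
travel.next_nodes  = [end]
```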


 


Analyze User Intent Using Replacement




We can derive a variety of user needs and responses by dissecting a fairly general user intent into components, then recombining those components into a series of more specific, complex needs.

Take this autonomous-driving case as an example. Suppose a user strikes up a conversation in the car by asking about the weather. The user asks not only to obtain weather information; the system should also be able to expand the topic and add dimensions to the conversation through related topics such as safety, travel, health, food, and mood.
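A minimal sketch of this replacement idea in Python: combine the base weather topic with a set of related dimensions to enumerate derived conversational responses. The dimension list and response templates are assumptions for illustration.

```python
# Illustrative sketch of intent replacement: recombine the base
# "weather" topic with related dimensions to derive richer responses.
# The dimensions and templates are assumptions for this example.

base_topic = "weather"
templates = {
    "safety": "It's raining, so the car will keep a longer following distance.",
    "travel": "Rain may add ten minutes to the trip. Leave earlier?",
    "health": "It's cold out. Want me to warm up the cabin?",
    "food":   "Rainy day. How about a hot soup place nearby?",
    "mood":   "Gloomy weather. Shall I play something upbeat?",
}

for dimension, response in templates.items():
    print(f"{base_topic} + {dimension}: {response}")
```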

Adobe XD is a powerful prototyping tool for voice interfaces. When the user articulates a particular word or phrase, the utterance triggers the speech-to-text engine, and the prototype responds with the words or sentences defined by the designer.





 
