This is the first in a series of blog posts on Intel research into human-machine interfaces (HMIs) for automated driving.
As Intel’s principal engineer and chief systems engineer for automated driving solutions, I’ve been working on some exciting research in the realm of automated vehicle technology. I’d like to share my perspective on some of our initial findings. In doing so, my hope is that it will expand your thinking about the road to autonomous driving that lies ahead.
If there’s one thing I’ve learned about our relationship to self-driving technology so far, it’s this: getting consumers to trust the technology is just as important as the technology itself. We build that trust through the devices and screens in the vehicle, which are known as human-machine interfaces (HMIs). We also build it through what are known as trust interactions: actions that instill confidence, control and a sense of safety when operating or riding in an automated vehicle.
We are still in the early stages of our prototyping and testing efforts, but a few key findings have surfaced that, from my perspective, are likely to be critical to consumer adoption of automated vehicles. The following are what I see as four capabilities at the heart of effective trust interactions.
Sensing Inside and Outside the Vehicle
One key to establishing trust in the physical operation of automated vehicles is that passengers must be able to understand what the system is sensing. For example, many participants have noted that confidence in the system is established when a visual display shows a pedestrian crossing the street and that display corresponds to the pedestrian they can see from the car window.
We are also learning that sensing is just as important inside the vehicle. Knowing the number and location of passengers and their personal items makes it easier to tailor the display of en route trip information, or to alert a passenger if an item is left behind when he or she exits the vehicle.
Clear and Varied Communication
Our research shows that the amount of information passengers want depends entirely on the situation. Overcommunication is desired when encountering road construction, so that a different route can be quickly determined; but when at a stoplight, “don’t show or tell me every little thing.” Communication is a balancing act: the system must communicate flexibly, providing more or less information based on different preferences and contexts.
Communicating in a variety of ways is also important. Voice interactions, larger screens, smaller touchscreens and passengers’ own mobile devices offer a variety of ways for passengers to notice and understand information. This is particularly important because passenger attention is likely to be focused on other activities when driving is no longer necessary. A variety of communication methods is also crucial to accommodate disabled passengers who may struggle with, or be unable to use, more standard interfaces such as touchscreens or voice-activated controls.
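To make the balancing act above concrete, here is a minimal sketch of a context-dependent verbosity policy. The event names, verbosity levels, and preference-override rule are illustrative assumptions for this post, not part of any real Intel HMI system.

```python
# Illustrative verbosity levels an HMI might choose between (assumed names).
QUIET, NORMAL, DETAILED = 0, 1, 2

# Situational defaults drawn from the examples above: overcommunicate at
# road construction so a reroute can be chosen; stay quiet at a stoplight.
DEFAULT_VERBOSITY = {
    "stoplight": QUIET,
    "road_construction": DETAILED,
    "lane_change": NORMAL,
}

def choose_verbosity(event, passenger_pref=None):
    """Pick how much to communicate for a given driving event.

    An explicit passenger preference overrides the situational default,
    reflecting that communication should adapt to both preference and context.
    """
    if passenger_pref is not None:
        return passenger_pref
    return DEFAULT_VERBOSITY.get(event, NORMAL)

print(choose_verbosity("stoplight"))            # QUIET by default
print(choose_verbosity("road_construction"))    # DETAILED by default
print(choose_verbosity("stoplight", DETAILED))  # the passenger's preference wins
```

A real system would of course learn these defaults and preferences rather than hard-code them; the point of the sketch is only the override order, context first, passenger preference above it.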
Fast and Predictable Automated Responses
There’s no doubt that automated vehicle systems must respond quickly and make changes effectively based on a variety of passenger inputs. If an automated vehicle system responds slowly, imprecisely, or in a complicated way, it will be assumed to have problems or errors, much as a slow-loading web page is assumed to be broken. Passengers must feel confident that the system is completing an interaction, and trust that it understands and is capable of carrying out what has been asked of it.
In emergency situations, the automated vehicle may have to respond in a sudden or unexpected manner to avoid a collision. In these cases, it is crucial that the system provides the appropriate context for what just happened, and even additional context for what should or will happen next. For example, if a problem with the vehicle’s operation requires it to pull to the side of the road, the system would communicate why it has pulled over, explain what other actions are being taken (for example, that emergency road services are being contacted) and tell the passenger what to do next (exit the vehicle, wait at a safe distance, and board the replacement automated vehicle when it arrives).
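The pull-over example can be sketched as an ordered message sequence: why it happened, what the system is doing about it, and what the passenger should do next. The function name and message wording here are illustrative assumptions, not a real API.

```python
def pullover_messages(reason):
    """Return the context messages an HMI might present after an
    unexpected pull-over, in the order a passenger needs them."""
    return [
        # 1. Why it happened: context for the sudden maneuver.
        f"The vehicle has pulled over because {reason}.",
        # 2. What is being done: actions already underway.
        "Emergency road services are being contacted.",
        # 3. What to do next: clear instructions for the passenger.
        "Please exit the vehicle, wait at a safe distance, "
        "and board the replacement vehicle when it arrives.",
    ]

for message in pullover_messages("a sensor fault was detected"):
    print(message)
```

The ordering is the design point: a sudden maneuver without the "why" first is exactly the kind of event that erodes trust.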
Multiple Modes of Interaction
Finally, in our research and testing, we often observe participants starting a trip in one way (speaking the desired destination) and then shifting to other modes during the trip (using the touchscreen to find and select an additional stop along the route).
Multiple modes of interaction are also necessary because during a trip one or more modes may already be in use when interacting with the system is required. For example, a passenger may request and initiate a trip using his or her mobile device. But once the trip begins, the mobile device may be used to make a phone call, so the passenger may use a touchscreen for further inputs while still on the phone. In addition, multiple interaction modes will also be important when several passengers are in the vehicle; a voice interface may be a primary way to interact for a single passenger, but less practical when four passengers are sharing a vehicle.
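The fallback behavior described above can be sketched as choosing the most-preferred interaction mode that is not already in use. The mode names and the fixed preference order are assumptions made for illustration.

```python
# Assumed preference order for a single passenger; a real HMI would
# adapt this to the passenger and the situation.
PREFERRED_MODES = ["mobile_device", "voice", "touchscreen"]

def pick_input_mode(busy_modes):
    """Return the most-preferred interaction mode that is currently free."""
    for mode in PREFERRED_MODES:
        if mode not in busy_modes:
            return mode
    return None  # no free mode; the HMI would need to queue the request

# The passenger's phone is busy with a call, so the HMI falls back to voice;
# if four chatting passengers also make voice impractical, touch remains.
print(pick_input_mode({"mobile_device"}))           # voice
print(pick_input_mode({"mobile_device", "voice"}))  # touchscreen
```

This is the simplest possible arbitration; the observation from our testing is only that some such fallback must exist, because any single mode can be unavailable mid-trip.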
As our research is showing, we have a tremendously exciting opportunity to enhance passenger experiences as we head down the road toward fully automated vehicles. Designing and implementing trust interactions — those interactions that engender confidence, control and a sense of safety — is crucial if we intend to one day step into an automated vehicle and find our ultimate road trip buddy there waiting for us, ready to ride.
To learn more about the road ahead for automated vehicles, visit intel.com/automotive. For more on Intel IoT developments, subscribe to our RSS feed for email notifications of blog updates, or visit intel.com/IoT, LinkedIn, Facebook and Twitter.