Shaping Mercedes-Benz’s future AI driver assistant through user insights

Business Challenge

Mercedes-Benz is exploring the future of driving with a proactive AI assistant that anticipates driver needs and takes action without driver input. Understanding how to build user trust and deliver user value at scale is key to the product’s success. A critical question in the early stages of development, and one not covered by existing research, is whether drivers want explanations for the assistant’s actions. For my Master’s dissertation, I led a study to answer Mercedes-Benz’s questions, generating insights to guide product design and accelerate user adoption.

Research Questions

  1. How well can drivers grasp what triggered a proactive assistant’s action?

  2. How does being informed of the trigger behind a proactive action impact drivers’ experience with the assistant?

Approach

Both quantitative and qualitative methods could have worked for this study. I chose a quantitative approach because it allowed me to compare results between groups and generate insights that generalize to a larger audience. This requires a large number of responses, so I ran an online survey experiment built around videos, with a few open-ended questions added to capture the context behind the numbers.

141 driver responses

2 use cases tested

22-page dissertation

The videos in the survey depicted two use cases of the AI assistant: lowering the music volume when a conversation starts inside the car, and navigating to a restaurant when the clock reaches noon. Respondents were split into two groups — one saw videos with explanations of what triggered the AI assistant to act, while the other saw videos without explanations. Only the group without explanations answered open-ended questions asking what action they believed the AI assistant took and what triggered it.

I’d recommend watching the videos below in full-screen, as the respondents did.

Music use case, with explanation (built with Unity)

Navigation use case, with explanation (built with Unity)

Respondent survey flow

After cleaning the data, I analyzed the quantitative data using statistical packages in Python and R, and manually coded the open-ended responses.
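To make this step concrete, below is a minimal Python sketch of the kind of between-group tests this involved. The file name and column names (condition, trust_change, use_case, identified_trigger) are hypothetical, and the actual dissertation analysis may have used different tests.

```python
# Minimal sketch of the between-group comparisons, assuming a hypothetical
# cleaned export with columns: condition, trust_change, use_case,
# identified_trigger. The actual dissertation analysis may have differed.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses_clean.csv")  # hypothetical file

explained = df[df["condition"] == "explanation"]
unexplained = df[df["condition"] == "no_explanation"]

# Trust change from baseline, compared between the two conditions.
# Mann-Whitney U avoids assuming normality for Likert-derived scores.
u_stat, p_trust = stats.mannwhitneyu(
    explained["trust_change"],
    unexplained["trust_change"],
    alternative="two-sided",
)
print(f"Trust change between conditions: U={u_stat:.1f}, p={p_trust:.3f}")

# Trigger-identification rates (from the manually coded open-ended
# answers), compared between the music and navigation use cases.
table = pd.crosstab(df["use_case"], df["identified_trigger"])
chi2, p_id, dof, _ = stats.chi2_contingency(table)
print(f"Trigger identification by use case: chi2={chi2:.2f}, p={p_id:.3f}")
```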

Key Learnings

1 | Finding

Users can identify what triggers a proactive action, but success depends on the salience of both the trigger and the action. In the music use case, 83% of respondents identified the trigger (vs. 42% for navigation), likely because the volume change was central and audible, while the navigation cues were small and placed in the corners of the screen.

Design Implication

Visual design should clearly highlight both the trigger and the action, with additional modalities (e.g., audio) further supporting comprehension and the overall user experience.

2 | Finding

Drivers reported better experiences without explanations (+0.27 vs. -0.13 trust change from baseline), yet 98% still wanted them, noting that explanations could improve trust. Prior work suggests that explanations that conflict with user expectations can instead harm the experience.

Design Implication

Clear onboarding and tutorials can help drivers establish accurate expectations of how the assistant works before use, ultimately building trust, improving adoption, and creating a foundation for scaling proactive AI features.

3 | Finding

Drivers had a better experience with the music use case (+0.15 vs. -0.17 satisfaction change from baseline) and found it easy to understand, while the navigation use case was harder to interpret and raised privacy concerns.

Design Implication

The AI assistant in these videos acted autonomously, but the differing reactions to music vs. navigation suggest that the level of proactivity should adapt to context (e.g., first asking the driver rather than executing autonomously).

Impact

The findings aligned with Mercedes-Benz’s hypotheses and have led to plans with UCL to scale my study approach across more use cases.

This study also highlighted to the Mercedes-Benz team the complex interplay between context and trigger, proactivity level, and explanation design. It’s driving a broader workstream to determine how to predict whether users can grasp the trigger, and therefore when explanations are necessary, ultimately improving user trust and adoption.

Reflection

Due to the time constraints of my Master’s program, I was not able to conduct a qualitative follow-up with driving simulator sessions and interviews to observe driver behavior and explore what makes an explanation effective.

This project contributed to my growth in two ways:

1 | UXR for AI

Given the rise of AI, I wanted to return to academia to learn how to conduct user research for AI-powered tools. From this study and other projects I’ve led in the program, I’ve learned that:

  1. Longitudinal methods (e.g., diary studies) are essential to capture how users learn and integrate an AI tool, as usage changes over time.

  2. Testing across multiple contexts is crucial, as user experience is highly context-dependent; this also affects where and how to provide explanations.

2 | Learning Mindset

Another reason I wanted to return to school was to challenge myself. Choosing a project with a heavy technical component, where I had to learn Unity from scratch and code in C#, an entirely new language for me, was a big step outside my comfort zone. I proved to myself that I can learn anything I put my mind to, and the experience reinvigorated my desire to learn.