Research

The influence of digital products on our everyday life is continuously growing. Smartphones, tablets, and other smart devices are ever-present, and they become smarter, more personalized, and more capable from generation to generation. Given the ever-growing system complexity and the increasing expectations toward these devices, a traditional qualitative product design approach is no longer sufficient to fulfill customers’ needs. Data-driven and AI-powered approaches, in contrast, can substantially reduce the effort of extensive user studies and expensive simulations. Machine learning-based user modeling and big data analytics allow for insights that go beyond traditional UX research approaches. Our research aims to help designers, researchers, and decision-makers better understand how users interact with digital devices in their real-world context. This understanding is not only crucial for developing systems that are easy and safe to use but also key for business decisions throughout the product development process.

Data-Driven Driver Modeling

The goal of CIAO's research is to better understand how drivers allocate their resources while engaging in secondary tasks. A deep understanding of human multitasking, and of how distraction affects driving behavior and vice versa, facilitates the design of safe automotive HMIs. We combine large-scale user interaction data with driving data and glance behavior data to model drivers' distraction, behavioral adaptation, and sensitivity to the driving context. While we apply statistical modeling to evaluate new design artifacts, we also apply various machine learning approaches to predict human interaction behavior, not only to better understand the interaction itself but also to evaluate new interfaces before the first user study is run.
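As a rough illustration of what such a data-driven model can look like (not our actual pipeline), the sketch below trains an off-the-shelf classifier to predict a distraction label from a handful of hypothetical driving and glance features on synthetic data; the feature set, thresholds, and labels are placeholders.

```python
# Minimal sketch of a data-driven distraction model on synthetic data.
# All feature names, thresholds, and labels are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500  # synthetic trials stand in for logged drives

# Hypothetical per-trial features: touchscreen inputs per minute, mean speed (km/h),
# steering-reversal rate (1/s), and total off-road glance time (s).
X = np.column_stack([
    rng.poisson(12, n),        # touch_rate
    rng.normal(80, 15, n),     # mean_speed
    rng.normal(0.4, 0.1, n),   # steering_reversal_rate
    rng.exponential(2.0, n),   # off_road_glance_time
])

# Placeholder label: "distracted" when off-road glance time is long, with some label noise.
y = (X[:, 3] > 2.0).astype(int)
flip = rng.random(n) < 0.1
y[flip] = 1 - y[flip]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```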

Computationally Rational User Models

Computationally rational theories assume that users choose their behavior to maximize their expected utility, given their bounds. Applied to human-computer interaction, this means that users interact with technology to achieve an optimal outcome given their internal (e.g., cognition or perception) and external (e.g., environment or design of a tool) bounds. Users thus adapt their behavioral policies, or strategies, to maximize their utility. These policies can be approximated via reinforcement learning, yielding verifiable predictions of user interaction behavior. Although machine learning is used to learn behavioral policies and make predictions, the formulated bounds (e.g., perceptual or motor constraints) limit the agent's space of computable interaction strategies, so that the resulting interactions represent realistic human behavior.
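As a toy illustration of this idea (hypothetical task, rewards, and noise model, not one of our actual models), the sketch below trains a tabular Q-learning agent on a simplified glance-allocation task: the agent maximizes its expected reward while its perception of task progress is noisy, standing in for a perceptual bound that constrains the strategies it can learn.

```python
# Minimal sketch of a computationally rational agent on a toy glance-allocation task.
# The task, rewards, noise model, and parameters are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)
N_STEPS, N_PROGRESS = 20, 11           # episode length, discretized task progress 0..10
ACTIONS = ("road", "display")          # where the simulated driver glances each step

Q = np.zeros((N_PROGRESS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(progress, action):
    """Environment: a display glance advances the secondary task but risks a driving penalty."""
    reward = 0.0
    if action == 1:                                  # glance at display
        progress = min(progress + 1, N_PROGRESS - 1)
        reward -= 0.5 * rng.random()                 # stochastic risk penalty for eyes off road
    if progress == N_PROGRESS - 1:
        reward += 10.0                               # utility for finishing the task
    return progress, reward

def observe(progress):
    """Perceptual bound: task progress is perceived with additive noise."""
    noisy = progress + rng.integers(-1, 2)
    return int(np.clip(noisy, 0, N_PROGRESS - 1))

for episode in range(5000):
    progress, obs = 0, observe(0)
    for t in range(N_STEPS):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[obs]))
        progress, r = step(progress, a)
        next_obs, done = observe(progress), progress == N_PROGRESS - 1
        target = r if done else r + gamma * Q[next_obs].max()
        Q[obs, a] += alpha * (target - Q[obs, a])
        obs = next_obs
        if done:
            break

# The learned policy predicts, per perceived progress level, where the bounded agent looks.
print({s: ACTIONS[int(np.argmax(Q[s]))] for s in range(N_PROGRESS)})
```

The mechanism mirrors the description above: reinforcement learning finds the utility-maximizing policy, while the formulated bound (here, observation noise) restricts which interaction strategies are attainable.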

Mixed-Reality Driving Simulator

To evaluate the computational models with human participants, we aim to mimic the real driving environment as closely as possible. Our mixed-reality driving simulator allows us to create an immersive driving and interaction experience. We exploit the immersiveness of virtual reality to simulate the driving environment but keep the vehicle interior real, so that secondary task engagements (e.g., interactions with the center stack touchscreen) feel as real as possible.