Type of Thesis: Master’s thesis
Supervisor: Michal Miazga
Description: Typing on smartphones and tablets is one of the most frequent forms of daily interaction, yet most research focuses on single-thumb typing. In practice, many users rely on two hands, introducing complex coordination between fingers, wrists, and posture. These motor patterns vary widely across users and can significantly influence typing speed, comfort, and error rates.
The primary goal of this work is to develop biomechanical user models that simulate two-handed typing behavior on touchscreen devices under human physiological constraints. The project will build on the existing BimanualMuscle body and extend it to coordinated bimanual interaction. These models will be grounded in computational rationality, assuming that users type in ways that optimize performance given their motor abilities, cognitive load, and task context. Reinforcement learning will be used to train agents to type on virtual keyboards while respecting biomechanical constraints such as finger reach, movement cost, and joint limits. The results can contribute to improved keyboard layouts and adaptive typing interfaces that better support diverse users.
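To make the training setup above concrete, the following is a minimal sketch of what a bimanual typing environment for reinforcement learning could look like. All names (`BimanualTypingEnv`, `KEY_POS`, `MAX_STEP`) and the 2-D key plane are hypothetical simplifications; the thesis itself would use the BimanualMuscle body inside a full physics simulation rather than this abstraction.

```python
# Hypothetical sketch: an RL environment where two point "fingers"
# type target keys. Reward = hit bonus - movement cost, a stand-in
# for the computationally rational speed/effort trade-off described
# above. Reach clipping stands in for biomechanical joint limits.
import math

KEY_POS = {  # x, y key centres on an abstract keyboard plane
    "a": (0.0, 1.0), "f": (3.0, 1.0), "j": (6.0, 1.0), "l": (8.0, 1.0),
}

class BimanualTypingEnv:
    MAX_STEP = 2.0  # per-action reach limit (joint-limit proxy)

    def __init__(self, text="fjfj"):
        self.text = text

    def reset(self):
        self.idx = 0
        self.fingers = [(3.0, 1.0), (6.0, 1.0)]  # home positions
        return self._obs()

    def _obs(self):
        # observation: both fingertip positions plus the target key
        target = KEY_POS[self.text[self.idx]]
        return (*self.fingers[0], *self.fingers[1], *target)

    def step(self, action):
        """action = (hand, dx, dy): move one finger, attempt a press."""
        hand, dx, dy = action
        # clip the displacement to the reach limit
        d = math.hypot(dx, dy)
        if d > self.MAX_STEP:
            dx, dy = dx * self.MAX_STEP / d, dy * self.MAX_STEP / d
        x, y = self.fingers[hand]
        self.fingers[hand] = (x + dx, y + dy)
        cost = 0.1 * math.hypot(dx, dy)           # movement cost
        target = KEY_POS[self.text[self.idx]]
        hit = math.dist(self.fingers[hand], target) < 0.5
        reward = (1.0 if hit else -0.2) - cost    # accuracy vs effort
        if hit:
            self.idx += 1
        done = self.idx >= len(self.text)
        return (None if done else self._obs()), reward, done
```

Any standard policy-gradient or Q-learning agent can be trained against this `reset`/`step` interface; the interesting modelling questions are which costs (reach, co-articulation, posture) enter the reward.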
Related Literature
Type of Thesis: Master’s thesis
Supervisor: Michal Miazga
Description: Physical keyboards remain central to productivity tasks, yet typing behavior varies significantly across individuals depending on hand anatomy, skill level, and learned motor patterns. How users coordinate both hands during typing strongly influences typing speed, comfort, and error rates.
The primary goal of this work is to develop biomechanical user models that simulate two-handed typing on physical keyboards within realistic human motor constraints. The project will use the existing BimanualMuscle body to interact with a virtual keyboard environment. Based on computational rationality, the models will assume that users adopt typing strategies that optimize speed and accuracy while minimizing physical effort. Reinforcement learning will be used to train agents to perform typing tasks using coordinated finger movements across both hands. The resulting models can help evaluate keyboard layouts, typing techniques, and assistive technologies, supporting more inclusive and efficient input device design.
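The computational-rationality assumption above can be illustrated with a toy utility function: the modelled user is taken to choose, among feasible typing strategies, the one maximising a weighted trade-off of speed, accuracy, and physical effort. The weights and the strategy statistics below are purely illustrative assumptions, not measured values.

```python
# Hypothetical utility under computational rationality:
#   U = w_s * speed - w_e * error_rate - w_m * finger_travel
# The weights w_s, w_e, w_m are assumed free parameters that the
# thesis would fit or derive, not established constants.
def utility(chars_per_sec, error_rate, travel_mm,
            w_s=1.0, w_e=5.0, w_m=0.01):
    return w_s * chars_per_sec - w_e * error_rate - w_m * travel_mm

# Illustrative (made-up) statistics for two strategies on one phrase:
one_handed  = utility(chars_per_sec=3.0, error_rate=0.04, travel_mm=420)
alternating = utility(chars_per_sec=5.5, error_rate=0.06, travel_mm=260)
best = "alternating" if alternating > one_handed else "one_handed"
```

With these numbers the alternating-hands strategy wins despite its slightly higher error rate, mirroring the claim that rational users trade a little accuracy for large gains in speed and effort.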
Related Literature