Our research interests center on the computational aspects of Human-Computer Interaction (HCI). As technologies shift computing away from the desktop, it is becoming increasingly clear that traditional means of interaction need to be replaced or complemented with new forms of input and output. Our main objective is to push the boundaries of what humans can do with computers and how they interact with them.
Concretely, we research machine perception of human activity, including pose estimation, gesture recognition, and other forms of input sensing, in order to build models of human behavior. Such models can then be used to provide context-sensitive, intelligent feedback and to optimize the presentation of information.
Specifically, we are interested in algorithms that can extract high-level concepts such as style and semantic meaning from observations of human activities, and in algorithms that continuously update individualized user models based on such data. Finally, we investigate how mathematical optimization techniques can be used to synthesize and optimize user interfaces, both at design time and at run time. Examples include finding optimal parametrizations of sensor-based interactive devices for a given input task, or finding optimized ways to display information given the system's current best belief about the user's intention, attention, and other states.
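To make the design-time optimization idea concrete, the following is a minimal toy sketch (not our actual method): it picks a button width by minimizing a cost that trades Fitts'-law selection time against screen space. The coefficients `A`, `B`, the target distance `D`, and `SPACE_WEIGHT` are all hypothetical values chosen for illustration.

```python
import math

# Hypothetical Fitts'-law regression coefficients (seconds),
# target distance (pixels), and screen-space penalty per pixel.
A, B = 0.1, 0.15
D = 300.0
SPACE_WEIGHT = 0.001

def cost(width: float) -> float:
    # Shannon formulation of Fitts' law: MT = a + b * log2(D / W + 1),
    # plus a linear penalty for the screen space the widget occupies.
    movement_time = A + B * math.log2(D / width + 1)
    return movement_time + SPACE_WEIGHT * width

# Design-time optimization: a simple grid search over candidate widths.
candidates = range(10, 201, 5)
best_width = min(candidates, key=cost)
```

In practice such objectives are richer (multiple widgets, constraints, probabilistic user models) and are solved with proper optimization tooling rather than grid search, but the structure is the same: encode the interaction model as a cost function, then search the design space.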
We apply this algorithmic work in domains such as Augmented and Virtual Reality, Human-Robot Interaction, and Smart Environments.
Input Recognition & Motion Analysis
Machine Learning for Interaction
Computational Design of Interactive Technologies