Research Overview
The AIT Lab conducts research at the forefront of human-centric computer vision.
Our core research interest is the spatio-temporal understanding of how humans move within and interact with the physical world.
To this end, we develop learning-based algorithms, methods, and representations for human- and interaction-centric understanding of our world from videos, images, and other sensor data.
Application domains of interest include Augmented and Virtual Reality, Human-Robot Interaction, and more.
Please refer to our publications for more information.
The AIT Lab, led by Prof. Dr. Otmar Hilliges, is part of the Institute for Intelligent Interactive Systems (IIS) in the Department of Computer Science at ETH Zurich.
Latest Projects
EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes
D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions
gDNA: Towards Generative Detailed Neural Avatars
I M Avatar: Implicit Morphable Head Avatars from Videos
PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence
Computational Design of Kinesthetic Garments
Human Performance Capture from Monocular Video in the Wild
A Skeleton-Driven Neural Occupancy Representation for Articulated Hands
Learning to Disambiguate Strongly Interacting Hands via Probabilistic Per-pixel Part Segmentation
A Spatio-temporal Transformer for 3D Human Motion Prediction
VariTex: Variational Neural Face Textures
EM-POSE: 3D Human Pose Estimation from Sparse Electromagnetic Trackers
SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes
Shape-aware Multi-Person Pose Estimation from Multi-view Images
Self-Supervised 3D Hand Pose Estimation from Monocular RGB via Contrastive Learning
SPEC: Seeing People in the Wild with an Estimated Camera
PARE: Part Attention Regressor for 3D Human Body Estimation
Hedgehog: Handheld Spherical Pin Array based on a Central Electromagnetic Actuator
Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning
Improved Learning of Robot Manipulation Tasks via Tactile Intrinsic Motivation
Optimization-based User Support for Cinematographic Quadrotor Camera Target Framing
Hierarchical Reinforcement Learning Explains Task Interleaving Behavior
CoSE: Compositional Stroke Embeddings
Self-Learning Transformations for Improving Gaze and Head Redirection
Spatial Attention Improves Iterative 6D Object Pose Estimation
Convolutional Autoencoders for Human Motion Infilling
Omni: Volumetric Sensing and Actuation of Passive Magnetic Tools for Dynamic Haptic Feedback
Optimal Control for Electromagnetic Haptic Guidance Systems
Learning-based Region Selection for End-to-End Gaze Estimation
Human Body Model Fitting by Learned Gradient Descent
Category Level Object Pose Estimation via Neural Analysis-by-Synthesis
Weakly Supervised 3D Hand Pose Estimation via Biomechanical Constraints
ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation
Towards End-to-end Video-based Eye-Tracking
Contact-free Nonplanar Haptics with a Spherical Electromagnet
Accurate Real-time 3D Gaze Tracking Using a Lightweight Eyeball Calibration
Learning to Assemble: Estimating 6D Poses for Robotic Object-Object Manipulation