Research Overview
The AIT lab conducts research at the forefront of human-centric computer vision.
Our core research interest is the spatio-temporal understanding of how humans move within and interact with the physical world. To this end, we develop learning-based algorithms, methods and representations for human- and interaction-centric understanding of the world from videos, images and other sensor data. Application domains include Augmented and Virtual Reality, Human-Robot Interaction and more.
Please refer to our publications for more information.
The AIT Lab, led by Prof. Dr. Otmar Hilliges, is part of the Institute for Intelligent Interactive Systems (IIS) in the Department of Computer Science at ETH Zurich.
Latest News
27.02.2023 We have 12 papers accepted at CVPR 2023. Stay tuned for more details.
06.09.2022 Markos Diomataris joins the AIT lab. Welcome!
01.09.2022 Dr. Christoph Gebhardt joins the AIT lab as a post-doc. Welcome back!
09.08.2022 We have 1 paper accepted at TMLR. Stay tuned for more details.
09.08.2022 We have 2 papers accepted at 3DV 2022. Stay tuned for more details.
01.07.2022 Mert Albaba joins the AIT lab. Welcome!
01.07.2022 We have 1 paper accepted at ECCV 2022. Stay tuned for more details.
09.05.2022 We have 1 paper accepted at SIGGRAPH 2022. Stay tuned for more details.
01.05.2022 Hsuan-I Ho and Artur Grigorev join the AIT lab. Welcome!
Check here for previous news.
Latest Projects
X-Avatar: Expressive Human Avatars
Learning Locally Editable Virtual Humans
Hi4D: 4D Instance Segmentation of Close Human Interaction
InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds
Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition
HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics
DINER: Depth-aware Image-based Neural Radiance Fields
PointAvatar: Deformable Point-based Head Avatars from Videos (Y. Zheng, W. Yifan, G. Wetzstein, M. Black, O. Hilliges; CVPR 2023)
HARP: Personalized Hand Reconstruction from a Monocular RGB Video
Learning Human-to-Robot Handovers from Point Clouds
ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation
LiP-Flow: Learning Inference-time Priors for Codec Avatars via Normalizing Flows in Latent Space
Computational Design of Active Kinesthetic Garments
Reconstructing Action-Conditioned Human-Object Interactions Using Commonsense Knowledge Priors
TempCLR: Reconstructing Hands via Time-Coherent Contrastive Learning
SFP: State-free Priors for Exploration in Off-Policy Reinforcement Learning
EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes
D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions
gDNA: Towards Generative Detailed Neural Avatars
I M Avatar: Implicit Morphable Head Avatars from Videos
PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence
Computational Design of Kinesthetic Garments
Human Performance Capture from Monocular Video in the Wild
A Skeleton-Driven Neural Occupancy Representation for Articulated Hands
Learning to Disambiguate Strongly Interacting Hands via Probabilistic Per-pixel Part Segmentation
A Spatio-temporal Transformer for 3D Human Motion Prediction
EM-POSE: 3D Human Pose Estimation from Sparse Electromagnetic Trackers
SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes
Shape-aware Multi-Person Pose Estimation from Multi-view Images
PeCLR: Self-Supervised 3D Hand Pose Estimation from monocular RGB via Equivariant Contrastive Learning
SPEC: Seeing People in the Wild with an Estimated Camera
PARE: Part Attention Regressor for 3D Human Body Estimation