Machine Perception - SS 23

Recent developments in neural networks (aka “deep learning”) have drastically advanced the performance of machine perception systems in a variety of areas including computer vision, robotics, and human shape modeling. This course is a deep dive into deep learning algorithms and architectures with applications to a variety of perceptual and generative tasks.


eDoz Course Nr.
O. Hilliges, J. Song, F. Engelmann, X. Chen,
M. Bühler, S. Christen, Z. Fan, M. Kaufmann, M. Albaba, A. Grigorev, C. Guo, H. Ho
Wed 13:15 - 14:00 (HG F 1)
Thu 12:15 - 14:00 (HG F 1)
Thu 14:15 - 16:00 (CAB G 11)
Fri 14:15 - 16:00 (CAB G 11)
Written Exam, Wednesday, June 7th, 13:30-16:30
All recordings will be made available on the ETH Video Portal.
Please post all questions (regarding content, organization etc.) on Moodle.


The lectures this week are cancelled. Please use last year's recordings (part I and part II) to study the material. The tutorial will still take place on Thursday.
Project descriptions have been added here!
More info coming soon!

Learning Objectives

Students will learn about fundamental aspects of modern deep learning approaches for perception and generation. Students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in learning-based computer vision, robotics, and shape modeling. The optional final project assignment will involve training a complex neural network architecture and applying it to a real-world dataset.

The core competency acquired through this course is a solid foundation in deep-learning algorithms to process and interpret human-centric signals. In particular, students should be able to develop systems that recognize people in images, detect and describe body parts, infer their spatial configuration, and perform action/gesture recognition from still images or image sequences, also taking multi-modal data into account.

We will focus on how to set up machine perception problems, learning algorithms, network architectures, and advanced deep learning concepts, in particular probabilistic deep learning models.
The course covers the following main areas:
I) Foundations of deep learning.
II) Advanced topics like probabilistic generative modeling of data (latent variable models, generative adversarial networks, auto-regressive models, invertible neural networks).
III) Deep learning in computer vision, human-computer interaction, and robotics.


Subject to change. Materials only available from within ETH network.

Wk. | Date | Content | Material | Exercise Session
1 22.02
Deep Learning Introduction

Class content & admin

1 23.02
-- No Class --
2 01.03
Training Neural Networks

Feedforward Networks,
Representation Learning

slides pt. I

slides pt. II
Perceptron Visualization Notebook

Tutorial Implement your own MLP

XOR Notebook
XOR Solutions
Eye-Gaze Notebook
Eye-Gaze Solutions

Tutorial Linear Regr.

Linear Regression Notebook

Pen & Paper Backprop.

exercise solution
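As a companion to the "Implement your own MLP" tutorial and the backpropagation pen & paper exercise, here is a minimal sketch (not official course code) of a 2-2-1 network with sigmoid units, trained on XOR via hand-derived backpropagation using only the Python standard library:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Parameters of a 2-input, 2-hidden-unit, 1-output network.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def predict(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

def total_loss():
    return sum(0.5 * (predict(x) - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 1.0
for epoch in range(5000):
    for x, t in data:
        # Forward pass, keeping hidden activations for the backward pass.
        h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
        y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
        # Backward pass for the squared error 0.5 * (y - t)^2.
        dy = (y - t) * y * (1 - y)               # dL/d(output pre-activation)
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # dL/d(hidden pre-activation)
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh * x[0]
            W1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
loss_after = total_loss()
```

The per-parameter update rules here are exactly the chain-rule derivatives you derive in the pen & paper exercise; the tutorial notebooks cover the same ideas with PyTorch's autograd instead.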
3 09.03.
Convolutional Neural Networks

slides pt. I
slides pt. II

Additional material:
Cortical Neuron

Tutorial CNNs in Pytorch

CNN Notebook

Pen & Paper CNN

exercise solution
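To make the core CNN operation concrete before the PyTorch tutorial, here is a sketch (plain Python, not official course code) of a "valid" 2D convolution — strictly speaking a cross-correlation, which is what deep learning libraries compute — for a single-channel image:

```python
def conv2d(image, kernel):
    """'Valid' 2D cross-correlation of a single-channel image with a kernel.

    Slides the kernel over every position where it fits entirely inside
    the image and sums the elementwise products.
    """
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    s += image[i + u][j + v] * kernel[u][v]
            row.append(s)
        out.append(row)
    return out
```

In PyTorch the same operation (plus channels, batching, stride, and padding) is provided by `torch.nn.Conv2d`, which the tutorial notebook uses.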
4 15.03.
Fully Convolutional Neural Networks
4 16.03.
Recurrent Neural Networks

LSTM, GRU, Backpropagation through time


Tutorial RNNs in Pytorch

RNN Notebook

Pen & Paper RNN

exercise solution
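The recurrence at the heart of the RNN lecture can be sketched in a few lines (plain Python, not official course code): a vanilla RNN applies the same transition function at every time step, with the hidden state carrying information forward.

```python
import math

def rnn_step(x, h, Wxh, Whh, bh):
    """One vanilla RNN step: h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + b)."""
    n = len(h)
    return [math.tanh(sum(Wxh[i][k] * x[k] for k in range(len(x)))
                      + sum(Whh[i][k] * h[k] for k in range(n))
                      + bh[i])
            for i in range(n)]

def rnn_forward(xs, h0, Wxh, Whh, bh):
    """Unroll the recurrence over a sequence, returning all hidden states."""
    h, hs = h0, []
    for x in xs:
        h = rnn_step(x, h, Wxh, Whh, bh)
        hs.append(h)
    return hs
```

Backpropagation through time (the pen & paper exercise) differentiates through this unrolled loop; LSTM and GRU cells replace `rnn_step` with gated variants that ease gradient flow over long sequences.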
5 23.03.
Generative Models Pt. I: Latent Variable Models

Variational Autoencoders, etc.

slides pt. I
slides pt. II

Class Tips for Training I


Pen & Paper VAE

exercise solution
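Two small pieces of the VAE lecture lend themselves to a sketch (plain Python, not official course code): the closed-form KL divergence between a diagonal Gaussian posterior and the standard-normal prior, and the reparameterization trick that makes sampling differentiable.

```python
import math
import random

def kl_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian:
    0.5 * sum( exp(log_var) + mu^2 - 1 - log_var ), summed over dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, log_var))

def reparameterize(mu, log_var, rng=random):
    """z = mu + sigma * eps with eps ~ N(0, I), so gradients w.r.t.
    mu and log_var flow through the sample z."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]
```

The KL term is one half of the ELBO (the other half being the reconstruction likelihood), and it vanishes exactly when the posterior equals the prior, i.e. `mu = 0`, `log_var = 0`.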
6 30.03.
Generative Models Pt. II: Autoregressive Models

PixelCNN, PixelRNN, WaveNet, Stochastic RNNs

slides pt. I
slides pt. II

Class Tips for Training II


Pen & Paper AR
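The common thread of PixelCNN, PixelRNN, and WaveNet is the chain-rule factorization p(x) = prod_t p(x_t | x_1..x_{t-1}). As a sketch (not official course code), the sequence log-likelihood under any such model is just a sum of conditional log-probabilities; the `uniform` conditional below is a hypothetical stand-in for a learned network:

```python
import math

def sequence_log_prob(xs, cond_prob):
    """log p(x_1..x_T) = sum_t log p(x_t | x_1..x_{t-1}) (chain rule).

    cond_prob(x, history) returns the model's probability of the next
    symbol x given the preceding symbols.
    """
    total = 0.0
    for t, x in enumerate(xs):
        total += math.log(cond_prob(x, xs[:t]))
    return total

# Hypothetical toy conditional: a fair coin, independent of history
# (in a real PixelCNN/WaveNet this would be a neural network's output).
uniform = lambda x, history: 0.5
```

Sampling from an autoregressive model follows the same factorization in reverse: draw x_1, condition on it to draw x_2, and so on, which is why generation is inherently sequential.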


Exercise Sessions

Please refer to the schedule above, once available, for an overview of the planned exercise slots. We will have three different types of activities in the exercise sessions:

  1. Tutorial: Interactive programming tutorial in Python taught by a TA. Code will be made available.
  2. Class: Lecture-style class taught by a TA to give you some tips on how to train your neural network in practice.
  3. Pen & Paper: Pen & paper exercises that are not graded but are helpful to prepare for the written exam. Solutions will be published on the website a week after the release and discussed in the exercise session if desired.



There will be a multi-week project that gives you the opportunity to have some hands-on experience with training a neural network for a concrete application.

The project grade will be determined by two factors: 1) a competitive part based on how well your model fares compared to your fellow students' models and 2) the idea/novelty/innovativeness of your approach, based on a written report to be handed in by the project deadline. For each project there will be baselines available that guarantee a certain grade for the competitive part if you surpass them. The competition will be hosted on an online platform - more details will be announced here.

Check out the project descriptions here (you will need to log in with your ETH LDAP).

Registration as Non-primary Target Group

Registrations have been closed.