We work on two major topics:
- Machine Learning Tools for Animal Behavior Analysis – We strive to develop computer vision and machine learning tools for the analysis and quantification of animal behavior. Published work in this field includes DeepLabCut, a popular open-source software tool for pose estimation.
- Modeling of Sensorimotor Learning and Control – We develop normative theories of neural systems trained to perform sensorimotor behaviors, as well as task-driven models (e.g. DeepDraw). We then compare and contrast these models with data from mice and primates, including humans, performing motor skills.
Feel free to propose projects within the scope of these topics; specific open projects are listed below. These projects can be tailored to bachelor projects, lab immersions and master’s projects.
Furthermore, together with Prof. Mackenzie Mathis’ lab we are actively developing DeepLabCut. Multiple projects are available, ranging from developing new features (e.g. improved active learning, novel GUI features) to software engineering (profiling and unit tests).
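For orientation, here is a minimal sketch of the standard DeepLabCut workflow that such projects would build on. The function names follow the released deeplabcut Python package; the project name, experimenter, and video paths are placeholders.

```python
import deeplabcut

# Create a new project from a list of videos (name and paths are placeholders).
config_path = deeplabcut.create_new_project(
    "example-project", "your-name", ["/path/to/video.mp4"], copy_videos=True
)

# Extract and label frames, then create a training set and train/evaluate the network.
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)              # opens the labeling GUI
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.evaluate_network(config_path)

# Run inference on new videos and visualize the predictions.
deeplabcut.analyze_videos(config_path, ["/path/to/new_video.mp4"])
deeplabcut.create_labeled_video(config_path, ["/path/to/new_video.mp4"])
```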
Transformers for pose tracking
Multi-human pose estimation from a single image, the task of predicting the body part locations for each individual, is an important computer vision problem. Pose estimation has wide-ranging applications, from measuring behavior in healthcare and biology to virtual reality and human-computer interaction (Mathis et al., Neuron 2020).
We recently developed POET, a transformer-based model that is end-to-end trainable for multi-instance pose estimation, and we are interested in exploring various aspects of this model (efficient Transformers, pre-trained Vision Transformers, hierarchical pose representations, etc.). Furthermore, we seek to investigate the use of such transformer-based architectures for robust tracking in videos.
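To give a flavor of the approach, below is an illustrative sketch of a DETR-style pose transformer: a CNN backbone produces image tokens, a transformer decodes a fixed set of person queries, and each query regresses a full set of keypoints. This is not the actual POET implementation; the module sizes, query count, and keypoint head are placeholders, and positional encodings are omitted for brevity.

```python
import torch
import torch.nn as nn
import torchvision


class MinimalPoseTransformer(nn.Module):
    """Illustrative DETR-style sketch: CNN backbone -> transformer -> per-query keypoints.

    Not the actual POET implementation; sizes and names are placeholders, and
    positional encodings are omitted for brevity.
    """

    def __init__(self, num_queries=20, num_keypoints=17, d_model=256):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.input_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.query_embed = nn.Embedding(num_queries, d_model)
        # Each query predicts (x, y, confidence) for every keypoint of one person.
        self.keypoint_head = nn.Linear(d_model, num_keypoints * 3)

    def forward(self, images):                                   # (B, 3, H, W)
        features = self.input_proj(self.backbone(images))        # (B, d_model, h, w)
        b = features.shape[0]
        src = features.flatten(2).permute(0, 2, 1)               # (B, h*w, d_model) image tokens
        queries = self.query_embed.weight.unsqueeze(0).expand(b, -1, -1)
        decoded = self.transformer(src, queries)                 # (B, num_queries, d_model)
        return self.keypoint_head(decoded)                       # (B, num_queries, K*3)


poses = MinimalPoseTransformer()(torch.randn(1, 3, 256, 256))    # one image, 20 candidate poses
```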
Naturalistic human hand-movement analysis
Our hands are versatile tools supporting us in everyday behaviors, from typing tweets to felling trees. Analyzing the functionality of human hands is important for many applications, from gesture recognition and rehabilitation to neural prostheses. This project seeks to leverage state-of-the-art computer vision to analyze 3D human hand movements during everyday object interactions, using egocentric video recordings of naturalistic behavior.
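As one possible starting point (not the project’s final pipeline), the sketch below extracts per-frame hand landmarks from an egocentric video using the off-the-shelf MediaPipe Hands tracker; the video path is a placeholder.

```python
import cv2
import mediapipe as mp

# Off-the-shelf 2.5D hand-landmark tracker; one possible starting point,
# not the final analysis pipeline for this project.
hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

cap = cv2.VideoCapture("egocentric_recording.mp4")     # placeholder path
trajectories = []                                       # per-frame landmark lists
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # 21 landmarks per detected hand: normalized (x, y) plus relative depth z.
        trajectories.append([
            [(lm.x, lm.y, lm.z) for lm in hand.landmark]
            for hand in results.multi_hand_landmarks
        ])
cap.release()
hands.close()
```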
Proprioceptive illusion project
Biological motor control is versatile and efficient. Muscles are flexible and undergo continuous changes, requiring distributed adaptive control mechanisms. How proprioception solves this problem in the brain is still unknown. Here, we pursue a task-driven modeling approach, which has been very successful in sensory systems such as vision, hearing and thermoception, to gain insights into the proprioceptive system. In particular, we seek to test whether task-driven models, such as DeepDraw, a hierarchical model of the proprioceptive system, are susceptible to proprioceptive illusions, thereby reflecting properties of the corresponding biological system.
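To make the idea concrete, here is a rough sketch of such an illusion experiment in silico: a tendon-vibration-like signal is injected into a few muscle channels of an otherwise static input, and the model’s representation is compared to the unperturbed case. The model loader, input shape, muscle indices, and vibration parameters are all hypothetical placeholders, not the actual DeepDraw code.

```python
import numpy as np
import torch

# Placeholder: a pretrained hierarchical model mapping muscle-spindle-like inputs
# (batch x time x muscles x [length, velocity]) to a representation. The file name
# and input shape are hypothetical, not the actual DeepDraw code.
model = torch.load("deepdraw_pretrained.pt")
model.eval()

T, n_muscles = 320, 25
baseline = torch.zeros(1, T, n_muscles, 2)             # static posture input

# Tendon-vibration-like perturbation: an ~80 Hz sinusoid injected into the
# velocity channel of a few arm muscles (indices are illustrative).
t = np.arange(T) / 500.0                                # assume 500 Hz sampling
vibration = torch.tensor(0.5 * np.sin(2 * np.pi * 80 * t), dtype=torch.float32)
perturbed = baseline.clone()
for m in (3, 4):                                        # illustrative muscle channels
    perturbed[0, :, m, 1] = vibration

with torch.no_grad():
    rep_base = model(baseline)
    rep_vib = model(perturbed)

# If the model "experiences" an illusion, the perturbed representation should drift
# toward that of a genuine movement rather than staying near the static posture.
print(torch.norm(rep_vib - rep_base))
```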
Learning adaptive behavioral strategies through competition
The ability to adapt to a changing environment is a desirable feature of an autonomous agent and a hallmark of intelligent behavior in animals. The literature offers different approaches to this problem, including strategies for adapting at test time and few-shot adaptation via meta-learning.
One way to enable an agent to perform well in a perturbed environment is to train it on a variety of environments and thus obtain more robust (adapted) policies. Randomly sampling a perturbation from a distribution, however, is not always viable, as the resulting environment might be too different from the base one and therefore too challenging for the agent.
To tackle this problem, in this project we will explore unsupervised environment design to automatically generate a curriculum of perturbations, leading the agent to progressively adapt to an increasingly diverse set of environments. Your task will be to design an interesting training environment and perturbation set, train an RL agent, and assess its test performance in these environments.
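To illustrate the flavor of such a curriculum loop, here is a minimal sketch using gymnasium’s CartPole, where the pole length serves as the perturbation and a stand-in policy is scored on candidate environments; perturbations in a “learnable” difficulty band are kept for training. The difficulty thresholds and the random policy are illustrative placeholders; an actual project would use a proper RL agent and an unsupervised environment design objective.

```python
import numpy as np
import gymnasium as gym


def evaluate(policy, pole_length, episodes=5):
    """Average return of `policy` on CartPole with a perturbed (half) pole length."""
    env = gym.make("CartPole-v1")
    env.unwrapped.length = pole_length                        # crude physics perturbation
    env.unwrapped.polemass_length = env.unwrapped.masspole * pole_length
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return float(np.mean(returns))


def random_policy(obs):                                       # stand-in for a learned agent
    return np.random.randint(2)


# Curriculum buffer: keep perturbations the agent finds hard but not impossible,
# loosely in the spirit of unsupervised environment design / prioritized level replay.
curriculum, rng = [], np.random.default_rng(0)
for step in range(20):
    pole_length = rng.uniform(0.05, 2.0)                      # candidate perturbation
    score = evaluate(random_policy, pole_length)
    if 10 < score < 100:                                       # "learnable" band (illustrative)
        curriculum.append((pole_length, score))
        # A real project would now run RL updates on environments drawn from
        # `curriculum` and periodically re-score them as the agent improves.

print(sorted(curriculum))
```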
Please reach out to [email protected] with your CV and a letter of motivation if you are interested!