Visual Intelligence for Transportation (VITA)
Our research enables a future where self-driving vehicles and delivery or social robots are part of everyday life. Beyond embodied agents, we also aim to equip our living spaces – our homes, terminals, and cities – with ambient intelligence that can sense and respond to human behavior.
Our laboratory pushes the limits of Artificial Intelligence (AI) in the context of transportation, mobility, and built environments.
Self-driving vehicles and delivery robots need to navigate crowded social scenes in close proximity to humans. Hence, they must understand social conventions and ethics and obey unwritten common-sense rules. Humans have an innate ability to “read” the behavior of others before making decisions. A machine should have the same capability in order to share space with humans in a safe, efficient, and trustworthy manner.
To address this grand challenge of co-existence in “last-mile” mobility, we propose a new type of AI we call socially-aware AI: perception and planning augmented with social intelligence. In other words, it is the ability to effectively perceive, navigate, and negotiate complex social interactions and environments.
Technically, our research brings together Computer Vision (real-time perception), Machine Learning (deep learning), and Robotics (crowd-robot interaction) to understand human behavior at every scale, enabling autonomous moving agents and digital twins.
We have open positions for Ph.D. students and Postdocs (with an expected background in Deep Learning).
Feel free to send your application to [email protected]
Our research is centered around understanding and predicting human social behavior with multi-modal visual data. Our work spans multiple aspects of socially-aware systems:
- Collecting multi-modal data at scale,
- Extracting coarse-to-fine-grained behaviors in real time,
- Designing deep learning methods that learn to predict human social behavior in a fully data-driven way,
- Integrating the developed methods into real-world systems such as a vehicle or a socially-aware robot that navigates crowded social scenes.
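To make the prediction step above concrete, here is a minimal illustrative sketch in Python – not the lab's actual method – of the constant-velocity baseline that learned trajectory predictors are commonly compared against: it extrapolates a pedestrian's observed track by assuming the last observed velocity stays constant.

```python
# Constant-velocity baseline for pedestrian trajectory prediction.
# This is a standard reference baseline, shown for illustration only;
# it is not the deep learning method developed by the lab.

def predict_constant_velocity(track, horizon):
    """Given a list of observed (x, y) positions, extrapolate
    'horizon' future positions using the last observed velocity."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0  # displacement per time step
    return [(x1 + vx * (t + 1), y1 + vy * (t + 1)) for t in range(horizon)]

# Example: a pedestrian walking diagonally at constant speed.
observed = [(0.0, 0.0), (0.5, 0.25), (1.0, 0.5)]
future = predict_constant_velocity(observed, horizon=3)
# future == [(1.5, 0.75), (2.0, 1.0), (2.5, 1.25)]
```

A data-driven predictor replaces this hand-coded extrapolation with a model trained on large collections of real trajectories, which lets it capture social effects – such as pedestrians yielding to or following one another – that a constant-velocity assumption cannot.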
Latest news
Robots programmed to follow you
Summer series – student projects – Two dozen EPFL Master’s students recently took part in a unique race where the competitors were tandems of students and robots. This challenge put the students’ programming skills to the test, in pursuit of enhanced methods of human-machine interaction.
How many protesters marched in Geneva for the women's strike?
| news 05:30 • 1 July 2019 – Analysis by the algorithm, based on a photo of the Geneva demonstration taken by photographer Demir Sönmez on Rue de la Confédération. Geneva still does not know how many people demonstrated on 14 June. We therefore submitted the images and videos collected that day to EPFL's VITA laboratory, which has developed an artificial intelligence algorithm called PifPaf. It arrived at an estimate six times higher than the police's.