For an updated list of semester projects, please follow this link.

Our VITA laboratory is interested in democratizing robots that will co-exist with humans in a safe, efficient, trustworthy, and socially-aware way, such as self-driving cars, delivery robots, and social robots. To realize this future, we propose empowering robots with a type of cognition we call socially-aware AI: robots that not only perceive human behavior but also reason with social intelligence, the ability to make safe and consistent decisions in unconstrained, crowded social scenes.

Our research tackles the three pillars of a socially-aware AI system tailored for transportation applications: Perception, Prediction, and Planning.


Ongoing projects

Pedestrian Stop and Go

Predicting whether pedestrians will stop walking (Stop) or start walking (Go) in the near future, enabling better trajectory prediction around road traffic.

Skeleton-based Action Recognition

Recognizing human actions from sequences of their skeletal keypoints.

Whole-Body Human Pose Estimation: Keypoint Communities

We propose a principled way to quantify the independence of semantic keypoints. By training a pose estimation algorithm with a weighted objective, we can predict fine-grained human and car poses.
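The weighted objective can be pictured as a per-keypoint weighting of a standard regression loss. The sketch below is purely illustrative: the keypoint count, the weights, and the squared-error loss are placeholders, not the project's actual quantities (which derive the weights from a measure of keypoint independence).

```python
import numpy as np

# Illustrative per-keypoint weighted regression objective.
# Weights here are random placeholders; in the actual work they are
# derived from a principled measure of keypoint independence.
rng = np.random.default_rng(0)
num_keypoints = 17                                # e.g. a COCO-style skeleton

pred = rng.normal(size=(num_keypoints, 2))        # predicted (x, y) per keypoint
target = rng.normal(size=(num_keypoints, 2))      # ground-truth (x, y)
weights = rng.uniform(0.5, 1.5, size=num_keypoints)
weights /= weights.mean()                         # keep the overall loss scale unchanged

# Each keypoint's error contributes in proportion to its weight.
loss = (weights[:, None] * (pred - target) ** 2).mean()
```

Keypoints judged more independent (less recoverable from their neighbors) would receive larger weights, focusing training capacity where it matters.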

Detecting 32 Pedestrian Attributes for Autonomous Vehicles

Joint pedestrian detection and attribute recognition with fields and Multi-Task Learning.

Motion Representations: Social NCE

Contrastive learning with negative data augmentation for robust motion representations.
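The idea can be sketched with a minimal InfoNCE-style loss: an embedding of the observed motion should score its true future higher than negative samples (in this line of work, negatives can be drawn from regions around neighboring agents). All names, shapes, and values below are illustrative, not the project's implementation.

```python
import numpy as np

# Minimal InfoNCE-style contrastive loss (illustrative only).
rng = np.random.default_rng(0)

def info_nce(query, positive, negatives, temperature=0.1):
    """Pull the query toward the positive key; push it from the negatives."""
    keys = np.vstack([positive[None, :], negatives])   # row 0 is the positive
    logits = keys @ query / temperature                # similarity scores
    logits -= logits.max()                             # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                           # positive should win

query = rng.normal(size=8)                             # embedded motion history
positive = query + 0.01 * rng.normal(size=8)           # embedded true future
negatives = rng.normal(size=(16, 8))                   # e.g. samples near other agents
loss = info_nce(query, positive, negatives)
```

Minimizing this loss shapes the representation so that socially implausible futures (the negatives) are scored low, which is what makes the learned motion representation robust.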

2D Perception: Human Pose Estimation

Multi-person human pose estimation that is particularly well suited for urban mobility applications such as self-driving cars and delivery robots.

Generative models

We propose a collaborative sampling scheme between the generator and discriminator for improved data generation. Guided by the discriminator, our approach refines generated samples through gradient-based optimization in the data (or feature / latent) space, shifting the generator distribution closer to the real data distribution.
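The refinement step can be pictured as gradient ascent on the discriminator's realism score. In the toy sketch below, the "discriminator" is a hand-written linear scorer so its gradient is available in closed form; the actual project uses trained networks and can also refine in feature or latent space.

```python
import numpy as np

# Toy illustration of discriminator-guided sample refinement.
rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0          # linear "discriminator" parameters (made up)

def disc_score(x):
    """Realism score: higher means the scorer judges the sample more real."""
    return x @ w + b

def refine(x, steps=10, lr=0.1):
    """Gradient ascent on the discriminator score in data space."""
    for _ in range(steps):
        x = x + lr * w                  # d(score)/dx = w for a linear scorer
    return x

x0 = rng.normal(size=(16, 2))           # stand-in for generator output
x1 = refine(x0)                         # samples shifted toward higher realism
```

Each step moves the batch of samples in the direction that increases the scorer's output, which is the sense in which the generator distribution is shifted toward the real data distribution.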

Planning: Crowd-Robot Interaction

Moving in an effective and socially compliant manner is an essential yet challenging task for robots operating in crowded spaces. In this project, we go beyond first-order Human-Robot Interaction and explicitly model Crowd-Robot Interaction (CRI). More details here.

2D Perception: Traffic Perception

Adapting fields for detection in aerial images.

3D Human Detection

We explore how to detect humans in 3D space using only cameras, which are cheap, reliable, and ubiquitous. Our main applications are autonomous vehicles and delivery robots. We focus on challenging cases (the long tail) and uncertainty estimation to improve the reliability of autonomous systems.

Prediction: Human Trajectory Forecasting in Crowds

We present an in-depth analysis of existing deep-learning-based methods for modelling social interactions in crowds. To objectively compare the performance of these interaction-based forecasting models, we develop TrajNet++, a large-scale interaction-centric benchmark that was a significant yet missing component in the field of human trajectory forecasting.
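Trajectory-forecasting benchmarks in this area typically score predictions with displacement-based metrics. The sketch below shows the two standard ones in their common form; it is a generic illustration, not code from the benchmark itself.

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean L2 distance over all timesteps."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def fde(pred, gt):
    """Final Displacement Error: L2 distance at the last timestep."""
    return np.linalg.norm(pred[-1] - gt[-1])

# Toy example: a pedestrian walking in a straight line, and a prediction
# offset by 0.1 m in both coordinates at every timestep.
gt = np.stack([np.arange(12), np.zeros(12)], axis=1).astype(float)
pred = gt + 0.1
```

For this constant offset, both metrics equal 0.1 * sqrt(2) metres, since every timestep (including the last) is displaced by the same amount.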

Discrete Choice Models and Neural Networks

We introduce a new framework that integrates representation learning into discrete choice models (DCMs). We preserve the high interpretability afforded by the DCM's strong theoretical foundation while augmenting the model's predictive performance with data-driven methods such as neural networks.
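One way to picture the integration: keep a linear-in-parameters utility (the interpretable DCM part) and add a learned residual term. Everything below is a made-up illustration of that split, not the framework's actual specification.

```python
import numpy as np

# Illustrative utility = interpretable linear part + data-driven residual.
rng = np.random.default_rng(0)

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

beta = np.array([-0.5, -1.0])            # interpretable coefficients (e.g. cost, time)
X = rng.normal(size=(3, 2))              # 3 alternatives, 2 observed attributes

def nn_residual(features):
    """Stand-in for a small neural net capturing what the linear part misses."""
    W = rng.normal(size=(2,))            # placeholder learned weights
    return np.tanh(features @ W)         # one scalar correction per alternative

utility = X @ beta + nn_residual(X)      # DCM utility plus learned term
choice_probs = softmax(utility)          # logit-style choice probabilities
```

The coefficients in `beta` retain their behavioral interpretation, while the residual term absorbs patterns the hand-specified utility cannot express.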

2D Perception: Visual Re-Identification

Improved visual re-identification that accounts for model uncertainty.

Vehicle Trajectory Prediction

We propose generalizable and accurate methods to predict vehicle trajectories for autonomous vehicles.

S-ATTack: Analyzing Trajectory Prediction Models

We introduce a socially-attended attack to assess the social understanding of prediction models.

Semantically-aware Discriminators

We build on successful cGAN models to propose a new semantically-aware discriminator that better guides the generator. We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine adversarial reasoning.

Pedestrian Bounding Box Prediction

A library for predicting 2D and 3D bounding boxes of humans in autonomous driving scenarios.