Social-Transmotion: Promptable Human Trajectory Prediction

Saeed Saadatnejad*, Yang Gao*, Kaouther Messaoud, Alexandre Alahi

International Conference on Learning Representations (ICLR), 2024

Accurate human trajectory prediction is crucial for applications such as autonomous vehicles, robotics, and surveillance systems. Yet, existing models often fail to fully leverage the non-verbal social cues humans subconsciously communicate when navigating the space. To address this, we introduce Social-Transmotion, a generic model that exploits the power of transformers to handle diverse and numerous visual cues, capturing the multi-modal nature of human behavior. We translate the idea of a prompt from Natural Language Processing (NLP) to the task of human trajectory prediction, where a prompt can be a sequence of x-y coordinates on the ground, bounding boxes, or body poses. This, in turn, augments trajectory data, leading to enhanced human trajectory prediction. Our model exhibits flexibility and adaptability by capturing spatiotemporal interactions between pedestrians based on the available visual cues, whether they are poses, bounding boxes, or a combination thereof. Through a masking technique, we ensure our model remains effective even when certain visual cues are unavailable, although performance is further boosted when comprehensive visual data is present.
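To illustrate the masking idea described above, here is a minimal sketch of modality-level cue masking. All names and data shapes are illustrative assumptions, not taken from the paper's released code: during training, optional visual-cue modalities (e.g. pose, bounding box) are randomly dropped so the predictor learns to work with whatever subset is available at test time, while the x-y trajectory is always kept.

```python
import random

def mask_cues(cues, keep_prob=0.5, rng=None):
    """Randomly drop optional cue modalities; the x-y trajectory is always kept.

    `cues` maps a modality name to its observed sequence. This is a hypothetical
    helper sketching the masking strategy, not the paper's implementation.
    """
    rng = rng or random.Random()
    masked = {"trajectory": cues["trajectory"]}  # primary input, never masked
    for name, value in cues.items():
        if name == "trajectory":
            continue
        # Keep each optional modality (e.g. pose, bounding box) with probability keep_prob.
        if rng.random() < keep_prob:
            masked[name] = value
    return masked

# Example: a pedestrian observed over two frames with trajectory, 2D pose, and bounding boxes.
obs = {
    "trajectory": [(0.0, 0.0), (0.3, 0.1)],     # x-y ground coordinates per frame
    "pose": [[(0.1, 1.7)] * 17] * 2,            # 17 keypoints per frame (illustrative)
    "bbox": [(0, 0, 50, 120), (2, 0, 50, 120)],  # (x, y, w, h) per frame
}
subset = mask_cues(obs, keep_prob=0.5, rng=random.Random(0))
assert "trajectory" in subset  # trajectory always survives masking
```

At inference time the same model can then be prompted with any subset of cues, from trajectory alone up to the full set of visual modalities.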

[Code] [Paper] [Poster]

S. Saadatnejad, Y. Gao, K. Messaoud Ben Amor, and A. Alahi, "Social-Transmotion: Promptable Human Trajectory Prediction," International Conference on Learning Representations (ICLR), Vienna, Austria, May 7-11, 2024.