The WILDTRACK Seven-Camera HD Dataset

The challenging and realistic setup of the WILDTRACK dataset brings multi-camera detection and tracking methods into the wild.

It meets the need of deep-learning methods for a large-scale multi-camera dataset of walking pedestrians, where the cameras' fields of view overlap to a large extent. Acquired with current high-end hardware, it provides HD-resolution data. Furthermore, its precise joint calibration and synchronization should allow for the development of new algorithms that go beyond what is possible with currently available datasets.


Camera 1: GoPro Hero 3

Camera 2: GoPro Hero 3

Camera 3: GoPro Hero 3

Camera 4: GoPro Hero 3

Camera 5: GoPro Hero 4

Camera 6: GoPro Hero 4

Camera 7: GoPro Hero 4

Download

To download the annotated dataset (frames & annotations):

Wildtrack_dataset_full.zip

To download the videos:

Hardware and data acquisition

This new multi-camera dataset was acquired using seven statically positioned HD cameras with overlapping fields of view: four GoPro Hero 3 and three GoPro Hero 4 cameras. It comes with highly accurate joint-camera calibration as well as synchronization across the views' sequences.

The data acquisition took place in front of the main building of ETH Zurich, Switzerland, in fair weather conditions. The sequences have a resolution of 1920×1080 pixels and were shot at 60 frames per second.

Description of available files

Currently we provide:

  • Synchronized frames extracted at 10 fps and 1920×1080 resolution, post-processed to remove lens distortion;
  • Calibration files using the pinhole camera model, compatible with the projection functions provided by the OpenCV library; both the extrinsic and the intrinsic calibrations are available;
  • The ground-truth annotations in JSON format (please see the dedicated section below);
  • For ease of use with methods focusing on classification, a file we refer to as the 'positions' file, also in JSON format; for details, please refer to the section below.
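Since the released frames are already undistorted, projecting a 3D point with the provided pinhole calibration reduces to a matrix product followed by a perspective divide. A minimal NumPy sketch following OpenCV's convention, using a toy calibration for illustration (the real values come from the provided calibration files):

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point to pixel coordinates with the pinhole
    model (OpenCV convention: x_cam = R @ X + t, then perspective divide).
    Distortion is omitted because the released frames are undistorted."""
    X_world = np.asarray(X_world, dtype=float).reshape(3)
    x_cam = R @ X_world + t          # world -> camera coordinates
    u, v, w = K @ x_cam              # camera -> homogeneous pixel coords
    return np.array([u / w, v / w])  # perspective divide

# Toy calibration (identity pose, simple intrinsics), for illustration only:
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
print(project_point([0.0, 0.0, 5.0], K, R, t))  # point 5 m ahead -> image centre
```

With real calibration files, `K`, `R`, and `t` would be read from the provided intrinsic and extrinsic data instead; OpenCV's own projection functions accept the same parameters.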

Please check this site for updates, which will extend the download list with:

  • Full videos;
  • Corresponding points annotations which may be used for camera calibration algorithms;
  • A second part of this dataset which, albeit not annotated, can be used for unsupervised methods.

Positions file

The 'positions' file makes it possible to skip working with the calibration files and to focus, for instance, on classification, exploiting the fact that the cameras are static. It specifies where each member of a fixed set of 3D volumes projects in each of the views. The height of each volume corresponds to an average person's height.

We discretize the ground surface into a regular grid. The 3D space occupied by a person standing at a particular position is modelled by a cylinder centred on the grid point. Each cylinder projects into each of the 2D views as a rectangle whose position in the view is given in pixel coordinates.
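Under these conventions, the rectangle for one position in one view can be obtained by projecting points sampled on the cylinder and taking their bounding box. A sketch, assuming an OpenCV-style camera (world y-axis pointing down) and illustrative height/radius values; the dataset's positions file already stores the actual rectangles:

```python
import numpy as np

def cylinder_to_rect(base, K, R, t, height=1.8, radius=0.3):
    """Bounding rectangle (xmin, ymin, xmax, ymax), in pixels, of a
    person-sized cylinder projected with the pinhole model. Toy
    convention: the world y-axis points down (OpenCV style), so the
    cylinder extends from `base` upward along -y. The height and radius
    are illustrative assumptions, not the dataset's exact values."""
    bx, by, bz = base
    pts = []
    for dy in (0.0, -height):                      # bottom and top rims
        for ang in np.linspace(0.0, 2 * np.pi, 16, endpoint=False):
            p = np.array([bx + radius * np.cos(ang),
                          by + dy,
                          bz + radius * np.sin(ang)])
            u, v, w = K @ (R @ p + t)              # pinhole projection
            pts.append((u / w, v / w))
    pts = np.array(pts)
    return pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()

# Toy calibration, for illustration only:
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
xmin, ymin, xmax, ymax = cylinder_to_rect((0.0, 0.0, 5.0), K, np.eye(3), np.zeros(3))
```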

Using a 480×1440 grid – totalling 691,200 positions – and the provided camera calibration files, we generated this file, which is available for download. Each position is assigned an ID using 0-based enumeration ([0, 691199]). The view indices in this file follow the same convention, i.e. they range from 0 to 6 inclusive. Positions which are not visible in a given view are assigned coordinates of -1.
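Assuming the IDs enumerate the grid in row-major order (this ordering is an assumption; consult the positions file itself to confirm), the mapping between position IDs and grid cells is simple arithmetic:

```python
# Assumed row-major enumeration over the 480x1440 grid (480 * 1440 = 691200).
GRID_W = 480
GRID_H = 1440

def cell_to_id(i, j):
    """Grid cell (row i in [0, 1440), column j in [0, 480)) -> position ID."""
    return i * GRID_W + j

def id_to_cell(pid):
    """Position ID in [0, 691199] -> grid cell (i, j)."""
    return divmod(pid, GRID_W)
```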

Annotations

Full ground-truth annotations are provided for 400 frames at a frame rate of 2 fps. On average, there are 20 persons per frame; thus, the dataset provides approximately 400×20×7 = 56,000 single-view bounding boxes. The number of annotations can be further increased by interpolation. These annotations were generated by workers hired through Amazon Mechanical Turk.

Note that the annotations are aligned with the coordinates of the positions file described above: each annotation includes the ID of the position estimated to be occupied by the target, following the same enumeration as the positions file.
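The interpolation mentioned above can be as simple as linearly blending the two annotated boxes of the same track: with annotations at 2 fps and frames at 10 fps, four intermediate boxes fill each gap between consecutive annotations. A sketch, assuming boxes are (xmin, ymin, xmax, ymax) tuples and the target is annotated in both frames:

```python
def interpolate_boxes(box_a, box_b, n_between=4):
    """Linearly interpolate between two (xmin, ymin, xmax, ymax) boxes of
    the same track. With annotations at 2 fps and frames at 10 fps,
    n_between=4 fills the four unannotated frames between consecutive
    annotated ones. Returns the list of intermediate boxes."""
    boxes = []
    for k in range(1, n_between + 1):
        a = k / (n_between + 1)  # fraction of the way from box_a to box_b
        boxes.append(tuple((1 - a) * p + a * q for p, q in zip(box_a, box_b)))
    return boxes

mids = interpolate_boxes((0, 0, 10, 10), (5, 5, 15, 15))
```

This assumes the target moves roughly linearly between annotated frames, which holds reasonably well at these frame rates for walking pedestrians.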

Acknowledgment

This work was supported by the Swiss National Science Foundation, under the grant CRSII2-147693 "WILDTRACK".

Publication

WILDTRACK: A Multi-camera HD Dataset for Dense Unscripted Pedestrian Detection

T. Chavdarova; P. Baqué; S. Bouquet; A. Maksai; C. Jose et al.

Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

Contact