Face analysis for automotive applications

EPFL’s Transportation Center has initiated a long-term research agenda to improve driver safety and comfort in cars through non-intrusive, vision-based human-machine interfaces.

The main goal of this research program is to extract as much relevant information as possible from the driver’s face, under the constraint of using only non-invasive means. Several face analysis techniques already exist (facial expression recognition, eye tracking, lip reading, etc.), but they cannot be applied straightforwardly to the automotive context. Indeed, restricted space, changing light conditions, driver personality and safety requirements are some of the challenges that make it necessary to test, validate and adapt existing or future techniques for cars.

This scientific collaboration has led to several research projects since 2012:

  • Physiological measures based on imaging camera (2015)

This project aims to demonstrate the performance of camera-based measurement of heart and respiratory rates under infrared illumination in an automotive environment.

Automakers seek to know the state of the vehicle’s driver, for reasons of both safety and comfort. This project aims to determine the driver’s physiological state, by measuring heart and respiratory rates, in a non-invasive way. To do so, it uses video images of the face illuminated under infrared light.

Unlike conventional techniques for measuring physiological state, this method has the advantage of operating without contact. It is therefore not subject to the artifacts caused by poor contact that can affect techniques based on physiological sensors, and it appears very well suited to use in the automotive interior.
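As a rough illustration of the principle only (the project’s actual processing chain is not described here), the sketch below estimates a pulse rate from the average intensity of a face region across video frames, assuming that region has already been detected and tracked:

```python
# Minimal sketch of contactless pulse estimation from face video, assuming a
# per-frame mean intensity of a facial region has already been extracted.
# Illustrative only: ROI selection, motion handling and filtering choices of
# the actual project are not specified here.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_means, fps):
    """Estimate heart rate (bpm) from a 1-D signal of per-frame ROI intensities."""
    signal = np.asarray(roi_means, dtype=float)
    signal = signal - signal.mean()                  # remove DC component
    # Band-pass around plausible heart rates (0.7-3.0 Hz, i.e. 42-180 bpm)
    b, a = butter(3, [0.7, 3.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    # Dominant frequency of the filtered signal via the periodogram
    spectrum = np.abs(np.fft.rfft(filtered)) ** 2
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0         # Hz -> beats per minute

# Example: a synthetic 75-bpm pulse (1.25 Hz) sampled at 30 frames per second.
fps = 30
t = np.arange(0, 20, 1.0 / fps)
fake_signal = 0.02 * np.sin(2 * np.pi * 1.25 * t) + np.random.normal(0, 0.01, t.size)
print(round(estimate_heart_rate(fake_signal, fps)))  # roughly 75
```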

The project aims to validate and quantify the robustness of the technique to changing conditions such as those encountered in a car interior (changes in ambient brightness, movements of the driver in front of the camera).

This three-month project is led by the Signal Processing Laboratory (LTS5) of Professor Jean-Philippe Thiran, in collaboration with the Applied Signal Processing Group (ASPG) of Dr. Jean-Marc Vesin. It is sponsored by Groupe PSA.

  • Driver’s state monitoring by face analysis (2014-2015)

This project aims to define and to detect the relevant emotional states in order to estimate the physiological state of the driver.

During the last decade, the Signal Processing Laboratory 5 (LTS5) has developed strong expertise in face analysis. This project aims to estimate the emotional state of the driver through analysis of facial expressions. The first step will consist of defining the emotions and states relevant to the driver-car interface. The second stage will be the development of a real-time emotion detection system. The last part will be the specific detection of the driver’s attention and distraction based on facial information.

This project lasts 24 months and is carried out by the LTS5. It is sponsored by Groupe PSA and Valeo.

  • Audiovisual speech recognition (2013-2014)

This project aims to demonstrate that visual information can significantly improve speech recognition in a realistic automotive context.

The face is central to human-machine interfaces: it is recognized as a leading natural conveyor of information, along with gestures and voice. The purpose of this project is to demonstrate that video images of a driver’s face can significantly improve the performance of automatic speech recognition systems inside the car.

Previous research carried out by the LTS5 has shown that visual information improves the performance of automatic speech recognition when the audio channel is noisy, which is common in a car (engine noise, wind noise, exterior or interior noise, etc.). This project brings these results into an actual car. Using a vehicle provided by the French car manufacturer, researchers will record an audiovisual database to test and quantify the performance of audio-only and audiovisual speech recognition. These data will be incorporated into the face detection and tracking tools available through the partnership between Groupe PSA and LTS5.
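As a purely schematic illustration of why the visual channel helps (not the project’s actual recognizer), the sketch below combines per-word scores from a hypothetical audio model and a hypothetical lip-reading model, giving more weight to the visual stream when the audio is noisy:

```python
# Schematic audiovisual late fusion: per-class log-likelihoods from an audio
# model and a visual (lip) model are combined with a weight that shifts towards
# the visual stream as the audio channel gets noisier. All names and values
# here are illustrative assumptions, not the project's method.
import numpy as np

def fuse_scores(audio_loglik, visual_loglik, audio_snr_db):
    """Weighted combination of audio and visual scores per candidate word."""
    # Map SNR to an audio weight in [0, 1]: clean audio -> trust audio more.
    w_audio = np.clip(audio_snr_db / 30.0, 0.0, 1.0)
    return w_audio * np.asarray(audio_loglik) + (1.0 - w_audio) * np.asarray(visual_loglik)

# Hypothetical scores for three candidate words in a noisy cabin (5 dB SNR):
audio  = np.array([-4.0, -3.8, -5.0])   # audio model is ambiguous
visual = np.array([-6.0, -2.5, -5.5])   # lip shapes clearly favour word 1
fused = fuse_scores(audio, visual, audio_snr_db=5.0)
print(int(np.argmax(fused)))            # -> 1, decided mostly by the visual cue
```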

This ten-month project is conducted by the LTS5 and sponsored by Groupe PSA.

  • Emotion detector to improve safety (2012-2013)

This project aims to develop a non-intrusive driver monitoring system which detects the emotional state of the driver.

Monitoring the emotional state of the driver is critical for the safety and comfort of driving. In this project, a real-time, non-intrusive monitoring system is developed which detects the emotional states of the driver by analyzing facial expressions. The system considers two basic distressed emotions, anger and disgust, as stress-related emotions.

An individual emotion is detected in each video frame, and the decision on the stress level is made at the sequence level. Experimental results show that the developed system performs very well on simulated data, even with generic models. An additional adaptation step may improve the current system further.
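A minimal sketch of this two-stage idea (per-frame classification followed by a sequence-level decision), assuming a hypothetical frame classifier and an illustrative threshold:

```python
# Per-frame emotion labels (from a hypothetical frame classifier) are
# aggregated over a window, and the driver is flagged as stressed when
# stress-related emotions (anger, disgust) dominate. Threshold is illustrative.
from collections import Counter

STRESS_EMOTIONS = {"anger", "disgust"}

def sequence_stress_decision(frame_labels, threshold=0.5):
    """Return True if the fraction of stress-related frames exceeds the threshold."""
    counts = Counter(frame_labels)
    stressed = sum(counts[e] for e in STRESS_EMOTIONS)
    return stressed / max(len(frame_labels), 1) > threshold

# Example: 10 frames classified by a (hypothetical) per-frame emotion detector.
frames = ["neutral", "anger", "anger", "disgust", "anger",
          "anger", "neutral", "disgust", "anger", "anger"]
print(sequence_stress_decision(frames))  # True: 8 of 10 frames are stress-related
```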

This project lasts one year and is led by the LTS5. It is sponsored by Groupe PSA.

  • A fatigue detector to keep eyes on the road (2013)

This project aims to develop a computer vision algorithm able to estimate the fatigue level of a driver based on the degree of eyelid closure.

In the context of intelligent cars assisting the driver, fatigue detection plays an important role. Estimates suggest that up to 30% of highway car crashes may be related to driver sleepiness. A system able to detect drowsiness and warn the driver could therefore help reduce accidents significantly.

An intuitive way to measure fatigue involves the use of a camera and the detection of fatigue-related symptoms. The PERCLOS measure, which represents the percentage of time the eyes are closed within a certain time window, shows a significant correlation with drowsiness and is relatively easy to compute.
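A minimal sketch of the PERCLOS computation, assuming per-frame eye-closure flags produced by some eye-state detector; the window length and alarm threshold below are illustrative assumptions, not values from the project:

```python
# PERCLOS: fraction of frames within a time window in which the eye is closed,
# expressed as a percentage. Eye-closure detection itself is not shown.
def perclos(closed_flags, fps, window_s=60.0):
    """Percentage of eye closure over the last `window_s` seconds."""
    window = closed_flags[-int(window_s * fps):]     # keep only the recent frames
    return 100.0 * sum(window) / max(len(window), 1)

# Example: 60 s of video at 30 fps where the eye is closed 20% of the time.
fps = 30
flags = ([1] * 6 + [0] * 24) * 60                    # 1 = closed, 0 = open
value = perclos(flags, fps)
print(f"PERCLOS = {value:.1f}%")                     # 20.0%
if value > 15.0:                                     # illustrative alarm threshold
    print("drowsiness warning")
```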

This project focuses on detecting eye closure, with the aim of subsequently deriving PERCLOS as a measure of fatigue. The research develops an eye-analysis module by creating an algorithm able to disregard lighting effects as well as drivers’ differing eye morphologies. A 3D profile of the eye and eyelids is then established to distinguish an open eye from a closed one. Lastly, the methodology is optimized so that it can run in real time on on-board vehicle computers with limited computing power.

This Master’s project is carried out at the LTS5. It is sponsored by Groupe PSA.

  • Face tracking prototyping platform (2012-2013)

This project develops, tests and validates new techniques for robust face detection, face analysis and eye tracking on a prototyping platform for automotive human-machine interfaces.

This research project aims to set up a development environment, specifically dedicated to real-time, video-based face analysis and eye tracking, on the P2S prototyping platform (plate-forme de prototypage et simulation, i.e. prototyping and simulation platform) provided by Groupe PSA.

This project investigates several key aspects. First, the face tracking algorithms are ported to the P2S platform and their performance is evaluated in terms of computing power and robustness for automotive applications. Then, depending on this performance, the scientists integrate more specialized hardware, such as GPU cards, in order to achieve the performance required for real-time processing. In a third phase, they adapt face detection and tracking to night conditions under infrared illumination and propose methods that are robust to variable lighting conditions and to occlusions.
In a fourth phase, they investigate the possibility of improving the accuracy of the proposed methods by using two HD cameras instead of one, while maintaining real-time execution on the P2S platform. Lastly, they identify automotive applications of these algorithms with PSA and demonstrate them on a mock-up.

This project lasts one year, is conducted by the LTS5 and is sponsored by Groupe PSA.