Study on the Impact of Anonymization on Training Data

This project is dedicated to understanding how different anonymization techniques, particularly face blurring, affect the performance of detection models. By systematically blurring faces in training datasets and evaluating the resulting model accuracies, we aim to identify anonymization strategies that preserve the utility of the data while ensuring privacy.
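The blurring step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the project's actual pipeline: it assumes face bounding boxes `(x, y, w, h)` are already available from some detector (face detection itself is out of scope here), and it applies a simple NumPy box blur to each region. The function name `blur_region` and the kernel size are hypothetical choices for illustration.

```python
import numpy as np

def blur_region(image, box, kernel=15):
    """Anonymize one face bounding box (x, y, w, h) with a mean (box) blur.

    `image` is an H x W x C uint8 array; the box is assumed to come from a
    face detector. Pixels outside the box are left untouched.
    """
    x, y, w, h = box
    face = image[y:y + h, x:x + w].astype(float)
    pad = kernel // 2
    # Edge-pad the face crop so the blur is defined at the crop borders.
    padded = np.pad(face, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(face)
    # Average over a kernel x kernel neighborhood by summing shifted copies.
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    image[y:y + h, x:x + w] = blurred.astype(image.dtype)
    return image
```

In a study like this, the kernel size (or an equivalent blur strength) would be the parameter swept when comparing anonymization strategies, with detection accuracy re-measured for each setting.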

Project Goals

The primary objectives of this project are:

  1. To assess the impact of various face blurring techniques on the accuracy of detection models.
  2. To develop an anonymization strategy that minimizes bias and maintains high model performance.

Prerequisites

Participants in this project should have:

  • Python programming experience
  • Prior experience in machine learning and computer vision (PyTorch)
  • Knowledge of camera models is recommended

Contact

Please send an email to [email protected] if you are interested in this project.