Early detection of harmful information against humanitarian organisations 

In this project, we aim to develop technical methods to combat information-based attacks against humanitarian organisations on social media. We will uncover how the phenomenon of weaponising information affects humanitarian organisations and determine what we can learn from the technical means employed to carry out these attacks in order to prevent future ones. Technically, this is a challenging problem in the context of humanitarian organisations. The naïve approach would be to apply hate speech detection or sentiment analysis directly to find attacks. However, these methods are prone to false positives, because humanitarian organisations often work in contexts where negative sentiment is expected: social media posts about armed conflict or infectious disease outbreaks will be predominantly negative, whether or not they attack an organisation. The challenge is therefore to reliably distinguish posts that attack an organisation from the surrounding noise of negative posts and mentions.
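The false-positive problem can be sketched with a toy example. The keyword lists below are purely illustrative stand-ins for real sentiment and attack classifiers; they are not part of the project's actual method:

```python
# Toy illustration of the false-positive problem described above.
# NEGATIVE_WORDS and ATTACK_CUES are hypothetical keyword lexicons,
# standing in for a sentiment model and an attack-detection model.

NEGATIVE_WORDS = {"outbreak", "conflict", "deaths", "crisis", "corrupt", "stealing"}
ATTACK_CUES = {"corrupt", "stealing", "fraud", "liars"}  # hostility aimed at the organisation

def sentiment_flag(post: str) -> bool:
    """Naive approach: flag any post containing negative vocabulary."""
    words = set(post.lower().split())
    return bool(words & NEGATIVE_WORDS)

def attack_flag(post: str) -> bool:
    """Sketch of the harder task: flag only posts attacking the organisation."""
    words = set(post.lower().split())
    return bool(words & ATTACK_CUES)

posts = [
    "cholera outbreak worsens as the conflict continues",  # negative, but not an attack
    "this ngo is corrupt and stealing aid money",          # negative AND an attack
]

for p in posts:
    print(f"{p!r} | sentiment: {sentiment_flag(p)} | attack: {attack_flag(p)}")
```

Both posts trigger the sentiment flag, but only the second is an actual attack: a sentiment-only filter would flag routine humanitarian reporting alongside genuine attacks, which is exactly the noise the project aims to filter out.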

PREREQUISITES

  • Familiarity with Python, PyTorch, and the HuggingFace libraries
  • Creativity, initiative, and a proactive attitude
  • Knowledge of Linux and related tools

PREFERRED, BUT NOT REQUIRED

  • Experience in Machine Learning
  • Experience in Natural Language Processing

Send me your CV: [email protected]