AI Research Aiding Online Child Safety

For a year and a half starting in 2016, LSIR worked with Privately SA through a CTI project (CTI being the former name of Innosuisse). We developed machine-learning classifiers and tools to help detect specific risks and threats that children face while using their newly acquired phones.

Thanks to that collaboration, we have built more than ten classifiers that perform on par with or better than what exists on the market, as well as tools that support both the updating of our existing classifiers and the creation of new ones.

These classifiers were later adapted to run smoothly on a mobile phone. This means that we are now in a position to run all the content analysis on the device itself, without having to send any content to any server.
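Running classifiers on-device usually requires shrinking the models first. As a minimal illustration of one common technique (this is an assumption about the approach, not a description of the actual pipeline), the sketch below shows symmetric int8 weight quantization, which cuts storage fourfold at the cost of a small, bounded rounding error:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 64)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes, w.nbytes)  # 16384 65536 -> 4x smaller on disk
```

The per-weight error is bounded by half the quantization step (`scale / 2`), which is typically small enough for inference to be unaffected.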

Selected demonstrators

Hate Analysis in Text

Our hate speech detector uses deep learning to spot obscene or toxic content in conversations. We trained language models (i.e., word embeddings) on 2.3 billion tweets, obtaining representations for both words and sub-words. Our hate speech classifiers use both the words and the sub-words, which lets them cope with slang, spelling mistakes, and word variations.
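The word-plus-subword idea can be sketched in a fastText-style toy example: each word is represented as the sum of its character n-gram vectors, so a misspelled variant that shares most n-grams with the original lands close to it in embedding space. Everything below (hash size, dimension, the random table standing in for trained embeddings) is illustrative, not our production model:

```python
import zlib
import numpy as np

DIM, BUCKETS = 64, 10_000
rng = np.random.default_rng(42)
# a fixed random table standing in for trained sub-word embeddings
table = rng.normal(size=(BUCKETS, DIM))

def ngrams(word, n=3):
    """Character trigrams with boundary markers."""
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def embed(word):
    """Sum the (hashed) sub-word vectors of a word."""
    idx = [zlib.crc32(g.encode()) % BUCKETS for g in ngrams(word)]
    return table[idx].sum(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "stupiid" shares most trigrams with "stupid", so their embeddings
# stay close even though the exact word never appeared in training
print(cosine(embed("stupid"), embed("stupiid")))  # high
print(cosine(embed("stupid"), embed("banana")))   # near zero
```

A classifier built on top of such representations inherits this robustness: a deliberate misspelling changes only a few of the sub-word vectors, so the input the classifier sees barely moves.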

Incident Detection in Images

We have built a set of image classifiers that are able to spot various incidents on social media (provocative images, violent images, private family images, gore content, etc.). We designed these classifiers to work in the wild through a hierarchical approach that filters out noise.
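A hierarchical "filter, then classify" pipeline can be sketched as two stages: a cheap coarse gate that rejects the bulk of harmless in-the-wild images, and a fine-grained incident classifier that only runs on what passes the gate. The stand-in models, names, and threshold below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a coarse binary gate scoring whether an image could be an incident.
GATE_W = rng.normal(size=8)

def gate_score(features):
    """Logistic score: probability the image deserves a closer look."""
    return 1.0 / (1.0 + np.exp(-(features @ GATE_W)))

# Stage 2: a fine-grained head, only run on images that pass the gate.
CLASSES = ["provocative", "violent", "private_family", "gore"]
HEAD_W = rng.normal(size=(8, len(CLASSES)))

def classify_incident(features):
    logits = features @ HEAD_W
    return CLASSES[int(np.argmax(logits))]

def analyze(features, threshold=0.5):
    """Hierarchical pipeline: most in-the-wild noise exits at stage 1."""
    if gate_score(features) < threshold:
        return "no_incident"
    return classify_incident(features)
```

The design choice is the usual one for noisy streams: the gate keeps the expensive fine-grained model off the common case, and its threshold trades recall against compute.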

Emotion Recognition in Text and Images

We have developed a novel technique, called “Common Space Fusion”, for classifying social media posts by jointly analyzing their visual and textual content.
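At a high level, fusing two modalities in a common space can be sketched as two modality-specific projections into one shared representation, followed by a classifier on the fused vector. The dimensions, fusion rule, and random weights below are illustrative assumptions, not the published method:

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_TXT, D_COMMON, N_CLASSES = 512, 300, 128, 4

# modality-specific projections into the shared ("common") space
W_img = rng.normal(scale=0.05, size=(D_IMG, D_COMMON))
W_txt = rng.normal(scale=0.05, size=(D_TXT, D_COMMON))
W_cls = rng.normal(scale=0.05, size=(D_COMMON, N_CLASSES))

def fuse(img_feat, txt_feat):
    """Project both modalities into the common space and merge them."""
    return np.tanh(img_feat @ W_img + txt_feat @ W_txt)

def classify(img_feat, txt_feat):
    """Softmax over post classes from the fused representation."""
    logits = fuse(img_feat, txt_feat) @ W_cls
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs = classify(rng.normal(size=D_IMG), rng.normal(size=D_TXT))
```

Because both modalities land in the same space before classification, the model can exploit interactions between image and text (e.g., a benign caption on a violent image) that unimodal classifiers miss.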