An AI assistant to interact with remote sensing images

Large quantities of remote sensing data are continuously produced by satellites. While part of this data is openly accessible to the public through various portals, extracting useful information from it still requires solid technical skills. In numerous applications, such as environmental monitoring, agriculture, urban planning or tourism, this requirement is a barrier to the wider use of satellite imagery.

With this project, we aim to develop an alternative way of extracting information from satellite imagery using natural language. Accessible to a larger audience without demanding technical skills, it would let users interact with the image(s) through questions and answers. The envisioned method would understand a question in English about the image, retrieve the answer from the image, and return it to the user in English.
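The question-and-answer loop described above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in for the learned models the project envisions: the image is assumed to be already converted into a per-pixel land-cover map, the class names are invented, and simple keyword matching replaces actual language understanding.

```python
# Toy sketch of the envisioned interaction, NOT the project's actual method.
# Assumes a pre-computed land-cover map: a grid of class names per pixel.

def answer(question: str, label_map: list[list[str]]) -> str:
    """Answer simple English questions about a labelled satellite image."""
    q = question.lower()
    pixels = [cls for row in label_map for cls in row]
    for name in ("water", "forest", "building"):  # illustrative classes
        if name in q:
            count = pixels.count(name)
            if q.startswith("is there"):
                return "yes" if count else "no"
            if q.startswith("what fraction"):
                return f"{count / len(pixels):.0%} of the image"
    return "I cannot answer that question."

# Hypothetical 4x4 land-cover map standing in for a classified image.
label_map = [["forest", "forest", "water", "water"],
             ["forest", "building", "water", "water"],
             ["forest", "forest", "forest", "water"],
             ["forest", "forest", "forest", "forest"]]

print(answer("Is there water in the image?", label_map))          # yes
print(answer("What fraction of the image is water?", label_map))  # 31% of the image
```

A real system would replace both the keyword matching and the fixed label map with trained vision and language models, but the interface, a question in, an answer out, would look the same to the user.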

Natural language processing has seen intense research and promising results in the last decade. We leverage this cutting-edge technology in the context of remote sensing visual question answering. Aware of the changes occurring on our planet, we wish to make satellite data more useful to a wider audience, enabling people to monitor the state of the planet through their own curiosity, with their own questions. Finally, we consider strategies that are as transparent and interpretable as possible, so that users can understand the answers they are given.

Partners

Funding

  • Open Space Innovation Platform (OSIP), European Space Agency