Interpretable environmental AI

Concept for an interpretable AI neural network outcome. Given a photo, human-interpretable concepts (man-made, flowers, etc.) are predicted before being combined into an estimate of the beauty of a landscape. The combination into groups (colored lines) and contributions (rightmost scores) makes it possible to understand how these elements influence the perception of beauty (from Marcos et al., ACCV 2020).

Sometimes we want more than accurate predictions. Insights into how and why a model reaches its decisions are valuable: they can be used to build better algorithms, detect spurious decisions, and provide useful knowledge to domain specialists.

For example, while it is valuable to detect a species of interest accurately, it would be very interesting to know which visual attributes of the animal led to this decision, or which interactions with the environment were observed. Such information can help confirm the veracity of the decision, while teaching us something about species interactions such as predator/prey relationships.

Being able to interpret and explain model decisions can increase the value of remote sensing image processing systems.

Through semantically interpretable models built on the inner representations of neural networks, we study how to generate intermediate explanations that are meaningful for users. Such explanations are domain specific and help produce insightful clues, especially when dealing with highly subjective topics such as landscape aesthetics or ecosystem service assessments.
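As a rough illustration of this idea (not the exact architecture used in the papers below), a concept-bottleneck-style model first predicts a small set of named, human-interpretable concepts from the image and then combines them through a single linear layer, so that each concept's contribution to the final scenicness score can be read off directly. The module and concept names in this sketch are hypothetical.

```python
# Minimal sketch of a concept-bottleneck-style interpretable regressor
# (hypothetical names; not the exact architecture from the papers below).
import torch
import torch.nn as nn
from torchvision import models

CONCEPTS = ["man-made", "flowers", "water", "forest"]  # illustrative concepts

class ConceptBottleneckScenicness(nn.Module):
    def __init__(self, n_concepts=len(CONCEPTS)):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone would do
        backbone.fc = nn.Identity()                # keep the 512-d features
        self.backbone = backbone
        # Each concept is predicted as an interpretable intermediate output.
        self.concept_head = nn.Linear(512, n_concepts)
        # A single linear layer combines the concepts into the beauty score,
        # so each weight is the contribution of one named concept.
        self.score_head = nn.Linear(n_concepts, 1)

    def forward(self, x):
        feats = self.backbone(x)
        concepts = torch.sigmoid(self.concept_head(feats))  # activations in [0, 1]
        score = self.score_head(concepts)                    # scenicness estimate
        return score, concepts

model = ConceptBottleneckScenicness()
image = torch.randn(1, 3, 224, 224)                # dummy RGB photo
score, concepts = model(image)
# Per-concept contributions to this image's score: activation x learned weight.
contributions = concepts * model.score_head.weight.squeeze(0)
for name, c in zip(CONCEPTS, contributions.squeeze(0).tolist()):
    print(f"{name}: {c:+.3f}")
```

Reading the contributions this way is what makes the intermediate layer an explanation: the final score is, by construction, a weighted sum of concepts a user can name and inspect.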

Papers


  • A. Levering, D. Marcos, and D. Tuia. On the relation between landscape beauty and land cover: a case study in the U.K. at Sentinel-2 resolution with interpretable AI. ISPRS J. Photogramm. Remote Sens., 177:194–203, 2021 (paper, infoscience).
  • I. Havinga, D. Marcos, P. Bogaart, L. Hein, and D. Tuia. Computer vision and social media data capture the aesthetic quality of the landscape for ecosystem service assessments. Scientific Reports, 11:20000, 2021 (paper, infoscience).
  • D. Marcos, S. Lobry, R. Fong, N. Courty, R. Flamary, and D. Tuia. Contextual semantic interpretability. In Asian Conference on Computer Vision (ACCV), Kyoto, Japan, 2020 (paper).
  • P. Arendsen, D. Marcos, and D. Tuia. Concept discovery for the interpretation of landscape scenicness. Mach. Learn. Knowledge Extraction, 2(4):397–413, 2020 (paper).

Interpretable AI neural network architecture. Human-interpretable concepts (scene attributes, objects, etc.) are predicted before being combined into an estimate of the beauty of a landscape (from Marcos et al., ACCV 2020).