Interpretable environmental AI

In prototypical part learning, the model selects real image patches seen during training as prototypes and builds the dense prediction map from the similarity between parts of the test image and these prototypes. Here, ScaleProtoSeg interprets a segmentation through the analysis of groups of prototypes: for the class car, the groups correspond to the bottom, main, and upper parts of the car. From Porta et al., WACV 2025.
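To make the mechanism concrete, below is a minimal sketch of prototype-based dense prediction, not the exact ScaleProtoSeg implementation. It assumes a backbone producing a feature map of shape (B, D, H, W), a tensor of per-class prototype vectors (each the embedding of a real training patch), and a per-class weight vector standing in for the grouping step; all names and shapes are illustrative.

```python
# Hedged sketch: prototype similarity scoring for dense prediction (assumed shapes/names).
import torch
import torch.nn.functional as F

def prototype_segmentation_logits(features, prototypes, group_weights):
    """
    features:      (B, D, H, W) feature map of the test image
    prototypes:    (C, P, D)    P prototype vectors per class, embeddings of real training patches
    group_weights: (C, P)       per-class weights combining prototype similarities (grouping step)
    returns:       (B, C, H, W) dense class logits
    """
    B, D, H, W = features.shape
    C, P, _ = prototypes.shape
    # Cosine similarity between every pixel embedding and every prototype.
    feats = F.normalize(features, dim=1).reshape(B, D, H * W)    # (B, D, HW)
    protos = F.normalize(prototypes, dim=-1).reshape(C * P, D)   # (C*P, D)
    sims = torch.einsum("pd,bdn->bpn", protos, feats)            # (B, C*P, HW)
    sims = sims.reshape(B, C, P, H, W)
    # Combine the similarities of each class's prototypes into a class score per pixel.
    logits = torch.einsum("cp,bcphw->bchw", group_weights, sims)
    return logits

# Toy usage with random tensors standing in for a real backbone and learned prototypes.
feats = torch.randn(1, 64, 32, 32)
protos = torch.randn(19, 6, 64)                     # e.g. 19 classes, 6 prototypes each
weights = torch.softmax(torch.randn(19, 6), dim=-1)
print(prototype_segmentation_logits(feats, protos, weights).shape)  # torch.Size([1, 19, 32, 32])
```

Because every prototype is tied to a real training patch, each pixel's score can be traced back to the patches it resembles most, which is what makes the prediction interpretable.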

Sometimes we want more than accurate predictions. Understanding how and why a model reaches its decisions provides valuable information that can be used to build better algorithms, detect spurious decisions, and generate insights for domain specialists.

For example, while accurately detecting a species of interest is valuable in itself, it is also important to know which visual attributes of the animal led to the decision, or which interactions with the environment were observed. Such information can help confirm the validity of the decision while teaching us something about species interactions, such as predator/prey relationships.

Being able to interpret and explain model decisions can increase the value of remote sensing image processing systems.

Through semantically interpretable models based on neural networks' inner representations, we study how to generate intermediate explanations that are meaningful to users. These explanations are domain-specific and can provide insightful cues to decision makers, especially in critical applications such as forest wildfire management.
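As an illustration of this idea, here is a minimal concept-bottleneck style sketch in the spirit of the architecture pictured at the bottom of this page, not the exact model from the papers above. The backbone, the list of concept names, and the single linear layer mapping concepts to the final landscape-beauty score are all assumptions made for the example.

```python
# Hedged sketch: human-interpretable concepts predicted first, then linearly combined.
import torch
import torch.nn as nn

CONCEPTS = ["forest", "water", "buildings", "mountains", "roads"]  # illustrative names only

class ConceptBottleneckRegressor(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, n_concepts: int):
        super().__init__()
        self.backbone = backbone                      # any feature extractor
        self.to_concepts = nn.Linear(feat_dim, n_concepts)
        self.to_score = nn.Linear(n_concepts, 1)      # interpretable final combination

    def forward(self, x):
        feats = self.backbone(x)
        concepts = torch.sigmoid(self.to_concepts(feats))  # intermediate, human-readable layer
        score = self.to_score(concepts)
        return score, concepts                              # expose both for interpretation

# Toy usage with a stand-in backbone instead of a real CNN.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())
model = ConceptBottleneckRegressor(backbone, feat_dim=128, n_concepts=len(CONCEPTS))
score, concepts = model(torch.randn(2, 3, 64, 64))
print(score.shape, concepts.shape)  # torch.Size([2, 1]) torch.Size([2, 5])
```

Because the final score is a weighted sum of nameable concepts, the weights of the last layer can be read directly as the contribution of each concept to the prediction.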

Papers

  • H. Porta, E. Dalsasso, D. Marcos, and D. Tuia. Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2869-2880. IEEE, 2025 (paper, github).
  • T.A. Nguyen, B. Kellenberger, and D. Tuia. Mapping forest in the Swiss Alps treeline ecotone with explainable deep learning. Remote Sensing of Environment, 281, p. 113217, 2022 (paper, github).
  • A. Levering, D. Marcos, and D. Tuia. On the relation between landscape beauty and land cover: a case study in the U.K. at Sentinel-2 resolution with interpretable AI. ISPRS J. Int. Soc. Photo. Remote Sens., 177:194–203, 2021 (paper).
  • I. Havinga, D. Marcos, P. Bogaart, L. Hein, and D. Tuia. Computer vision and social media data capture the aesthetic quality of the landscape for ecosystem service assessments. Scientific Reports, 11:20000, 2021 (paper).
  • D. Marcos, S. Lobry, R. Fong, N. Courty, R. Flamary, and D. Tuia. Contextual semantic interpretability. In Asian Conference on Computer Vision (ACCV), Kyoto, Japan, 2020 (paper).
  • P. Arendsen, D. Marcos, and D. Tuia. Concept discovery for the interpretation of landscape scenicness. Mach. Learn. Knowledge Extraction, 2(4):397–413, 2020 (paper).

Interpretable AI neural network architecture. Human-interpretable concepts (scene attributes, objects, etc.) are predicted before being combined into an estimate of the beauty of a landscape. From Marcos et al., ACCV 2020.