Sometimes we want more than accurate predictions. Insights into how and why a model reaches its decisions are valuable: they can be used to build better algorithms, to detect spurious decisions, and to provide domain specialists with new knowledge.
For example, while it is valuable to detect a species of interest accurately, it is also very interesting to know which visual attributes of the animal led to this decision, or which interactions with the environment were observed. Such information can help confirm the validity of the decision, while teaching us something about species interactions such as predator/prey relationships.
Being able to interpret and explain model decisions can increase the value of remote sensing image processing systems.
Through semantically interpretable models built on the inner representations of neural networks, we study how to generate intermediate explanations that are meaningful for users. These explanations are domain specific and provide insightful clues, especially when dealing with highly subjective topics such as landscape aesthetics or ecosystem service assessments.
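As a minimal sketch of this idea (not the published models themselves), the snippet below shows a semantic-bottleneck architecture: the network first predicts a small set of human-interpretable concepts (the attribute names here are purely hypothetical land-cover categories), and the final score is a linear combination of those concepts, so every prediction can be traced back to concept activations and readout weights.

```python
# Sketch of a semantic-bottleneck model for interpretable scene scoring.
# Assumptions: hypothetical concept names, a toy backbone, dummy input data.
import torch
import torch.nn as nn

CONCEPTS = ["forest", "water", "urban", "grassland"]  # hypothetical attributes

class SemanticBottleneckNet(nn.Module):
    def __init__(self, n_concepts: int = len(CONCEPTS)):
        super().__init__()
        # Small convolutional backbone (stand-in for any image encoder).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Interpretable bottleneck: one activation per named concept.
        self.to_concepts = nn.Linear(32, n_concepts)
        # Final score is a linear readout of the concepts only, so its
        # weights directly indicate each concept's contribution.
        self.readout = nn.Linear(n_concepts, 1)

    def forward(self, x):
        concepts = torch.sigmoid(self.to_concepts(self.backbone(x)))
        score = self.readout(concepts)
        return score, concepts

if __name__ == "__main__":
    model = SemanticBottleneckNet()
    image = torch.randn(1, 3, 64, 64)  # dummy RGB patch
    score, concepts = model(image)
    # Per-concept contribution to the predicted score (weight * activation).
    contrib = model.readout.weight.squeeze(0) * concepts.squeeze(0)
    for name, c in zip(CONCEPTS, contrib.tolist()):
        print(f"{name}: {c:+.3f}")
```

The design choice that makes this explainable is that the final prediction never sees raw features: it only sees the named concepts, so an explanation can be read off directly from the concept activations and the readout weights.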
Papers
- A. Levering, D. Marcos, and D. Tuia. On the relation between landscape beauty and land cover: a case study in the U.K. at Sentinel-2 resolution with interpretable AI. ISPRS Journal of Photogrammetry and Remote Sensing, 177:194–203, 2021 (paper, infoscience).
- I. Havinga, D. Marcos, P. Bogaart, L. Hein, and D. Tuia. Computer vision and social media data capture the aesthetic quality of the landscape for ecosystem service assessments. Scientific Reports, 11:20000, 2021 (paper, infoscience).
- D. Marcos, S. Lobry, R. Fong, N. Courty, R. Flamary, and D. Tuia. Contextual semantic interpretability. In Asian Conference on Computer Vision (ACCV), Kyoto, Japan, 2020 (paper).
- P. Arendsen, D. Marcos, and D. Tuia. Concept discovery for the interpretation of landscape scenicness. Machine Learning and Knowledge Extraction, 2(4):397–413, 2020 (paper).