Perceptual learning

Perceptual learning is learning to see. For example, it takes years of perceptual learning and thousands of presentations of MRI and X-ray scans before radiologists can easily spot a tumour. Usually, perceptual learning is modelled with neural networks. Each presentation of a stimulus (e.g., an MRI scan) changes synaptic weights in the visual brain according to a learning rule, e.g., Hebbian learning. In these models, learning is fully determined by the sequence of stimuli and the learning rule. In this sense, humans are slaves of their experiences.
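To make the idea concrete, here is a minimal sketch (in Python) of such an experience-driven model: on every stimulus presentation, a Hebbian rule strengthens synaptic weights in proportion to the joint activity of pre- and postsynaptic units, so the final weights are fully determined by the stimulus sequence and the rule. Network size, learning rate, and the normalization step are illustrative assumptions, not any specific published model.

    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs, n_outputs = 100, 10                      # illustrative network size (assumption)
    W = rng.normal(0.0, 0.01, (n_outputs, n_inputs))   # synaptic weights
    eta = 0.01                                         # learning rate (assumption)

    def hebbian_step(W, x, eta=0.01):
        """One stimulus presentation: weights grow with pre/post co-activity."""
        y = W @ x                                      # postsynaptic activity
        W = W + eta * np.outer(y, x)                   # Hebbian update: dW ~ post * pre
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalization keeps weights bounded
        return W

    # Learning is fully determined by the stimulus sequence and the rule:
    for _ in range(1000):                              # e.g., 1000 presentations
        x = rng.normal(size=n_inputs)                  # stand-in for an encoded stimulus (e.g., a scan)
        W = hebbian_step(W, x, eta)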

However, purely experience-dependent learning would lead to learning many behaviourally irrelevant tasks. We showed by combinatorial reasoning that only a tiny fraction of all possible tasks can ever be learned and that, for this reason, we can only learn what we “want” to learn. A counter-intuitive implication of this “combinatorial learning” is that we all perceive the world differently (Key Publication: Herzog & Esfeld, 2007). In addition, we have shown that perceptual learning can occur even when observers merely imagine the stimuli, ruling out most neural network models of perceptual learning in which, as mentioned, only stimulus presentation matters (Key Publications: Tartaglia, Bamert, Mast & Herzog, 2009; see also Tartaglia, Bamert, Herzog & Mast, 2012; Mast, Tartaglia & Herzog, 2012). We are not slaves of stimulus exposure. On the contrary, the conscious mind constitutes itself through perceptual learning.
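The flavour of the combinatorial argument can be conveyed with a back-of-the-envelope calculation (illustrative numbers only, not those of Herzog & Esfeld, 2007): even for a modest set of stimuli, the number of conceivable binary classification tasks vastly exceeds the number of trials available in a lifetime, so almost none of these tasks can be acquired by exposure alone.

    # Illustrative combinatorics; all numbers are assumptions, not taken from the paper.
    n_stimuli = 100                  # a small set of discriminable stimuli
    n_binary_tasks = 2 ** n_stimuli  # each task = one way of labelling the stimuli into two classes
    trials_per_lifetime = 10 ** 9    # generous upper bound on lifetime learning trials

    fraction_learnable = trials_per_lifetime / n_binary_tasks
    print(f"possible binary tasks: {float(n_binary_tasks):.3e}")
    print(f"fraction that could ever be trained: {fraction_learnable:.3e}")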

The role of transfer and roving. Perceptual learning is very specific. For example, observers train to discriminate the horizontal offset direction of two vertical bars (vernier offset discrimination). Performance improves significantly after about 1600 trials. When the bars are rotated by 90 degrees, however, there is no transfer of learning: observers need to train again from scratch. Interestingly, perceptual learning is specific even for the motor response (Grzeczkowski et al., 2017, 2019; see also Szumska et al., 2016) and can partly occur without consciousness (Galiussi et al., 2018). The lack of transfer is interesting for academic purposes but a no-go for practical applications. For example, to counteract the effects of aging on perception, training regimes are desirable that transfer across many stimulus dimensions. Interestingly, it seems that with the right number of training trials, transfer can occur. We found that there is neither learning nor transfer with few trials per session (160 trials in each of 10 sessions). With many trials per session (800 trials in each of 2 sessions), there is learning but no transfer. With an intermediate number (400 trials in each of 4 sessions), there is both learning and transfer (Aberg, Tartaglia, & Herzog, 2009); note that the total number of trials (1600) is the same in all three regimes. And there is more. It is better to learn similar tasks in separate sessions than intermingled within the same session, i.e., under so-called roving conditions (Aberg & Herzog, 2009; Tartaglia, Aberg & Herzog, 2009). In collaboration with the laboratory of Wulfram Gerstner (EPFL) and Henning Sprekeler (Berlin), we were able to explain mathematically why this is the case (Aberg, Fremaux, Gerstner & Sprekeler, 2012) and to make a strong link to reinforcement learning and LTP (Aberg et al., 2012).
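To give a flavour of the link between reinforcement learning and LTP, here is a toy reward-gated Hebbian rule (a minimal sketch under strong simplifying assumptions, not the published model of Aberg, Fremaux, Gerstner & Sprekeler, 2012): the Hebbian term is consolidated only when the obtained reward exceeds a running estimate of the expected reward, so plasticity is gated by a reward-prediction error. The toy task and all parameter values below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy reward-gated Hebbian rule: dw ~ (reward - expected_reward) * post * pre.
    n_inputs = 50
    w = np.zeros(n_inputs)
    r_bar = 0.0                          # running estimate of expected reward ("critic")
    eta, alpha = 0.05, 0.05              # learning rates (illustrative values)

    for trial in range(2000):
        x = rng.normal(size=n_inputs)    # stand-in for the encoded stimulus
        y = np.tanh(w @ x + rng.normal(0, 0.5))   # noisy postsynaptic decision variable
        target = np.sign(x[0])           # arbitrary toy task: report the sign of one feature
        reward = 1.0 if np.sign(y) == target else 0.0
        delta = reward - r_bar           # reward-prediction error gates plasticity
        w += eta * delta * y * x         # Hebbian term consolidated only when delta is informative
        r_bar += alpha * (reward - r_bar)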

Anesthesiology. We investigated whether perceptual learning can occur during anesthesia. The good news is: it cannot. Anesthesia is thus safe also with respect to implicit learning (Aberg, Albrecht, Tartaglia, Farron, Soom & Herzog, 2009).

Reinforcement Learning (RL). RL is usually investigated with paradigms in which each action is followed by an immediate reward. We have introduced a paradigm for sequential decision making (Tartaglia et al., 2018) and shown that humans can learn even non-Markovian tasks (Clarke et al., 2016) and that there is clear evidence for an eligibility trace in human learning (Lehman et al., 2019).
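For readers unfamiliar with the term, the eligibility-trace idea can be sketched with a textbook TD(lambda) update (generic background, not the paradigm or analysis of the cited papers): a decaying trace tags recently visited states so that a single delayed reward can credit decisions made several steps earlier.

    import numpy as np

    def td_lambda_episode(values, states, reward, alpha=0.1, gamma=1.0, lam=0.9):
        """Textbook TD(lambda) update over one episode with a single terminal reward.

        `values` maps state -> estimated value; `states` is the visited sequence.
        A decaying eligibility trace lets the final reward credit earlier states.
        """
        trace = {s: 0.0 for s in values}
        for i, s in enumerate(states):
            is_last = (i == len(states) - 1)
            next_v = 0.0 if is_last else values[states[i + 1]]
            r = reward if is_last else 0.0
            delta = r + gamma * next_v - values[s]      # TD error
            trace[s] += 1.0                             # tag the just-visited state
            for k in trace:                             # every tagged state shares the credit
                values[k] += alpha * delta * trace[k]
                trace[k] *= gamma * lam                 # traces decay over steps
        return values

    # Usage: a 4-step sequence ending in reward updates all visited states at once.
    V = {s: 0.0 for s in "ABCD"}
    V = td_lambda_episode(V, list("ABCD"), reward=1.0)
    print(V)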

Publications

Philosophy and combinatorics of perceptual learning
Imagery perceptual learning
LTP
Transfer & Roving
Anesthesiology
Modelling
Perceptual learning & Stress
Haptic perceptual learning
Reinforcement learning