Perceptual learning is learning to see. For example, it takes years of perceptual learning and thousands of presentations of MRI and X-ray scans before radiologists can reliably spot a tumour. Perceptual learning is usually modelled with neural networks: each presentation of a stimulus (e.g., an MRI scan) changes synaptic weights in the visual brain according to a learning rule, e.g., Hebbian learning. In these models, learning is fully determined by the sequence of stimuli and the learning rule. In a sense, humans are slaves of their experiences.
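The stimulus-driven character of such models can be made concrete with a minimal sketch of Hebbian learning in a linear rate model (the variable names, network size, and learning rate below are illustrative assumptions, not any specific published model):

```python
import numpy as np

# Minimal sketch of stimulus-driven Hebbian learning in a linear rate
# model. All parameters here are illustrative assumptions.

rng = np.random.default_rng(0)

n_inputs = 100                              # e.g., pixels of a stimulus
weights = 0.1 * rng.normal(size=n_inputs)   # small random initial weights
w_initial = weights.copy()
learning_rate = 0.01

def present_stimulus(stimulus, weights):
    """One presentation: compute the response, then apply the plain
    Hebbian rule dw = eta * pre * post."""
    response = weights @ stimulus           # postsynaptic activity
    weights = weights + learning_rate * stimulus * response
    return response, weights

# Learning is fully determined by the stimulus sequence and the rule:
stimuli = rng.normal(size=(50, n_inputs))
for s in stimuli:
    _, weights = present_stimulus(s, weights)
```

Note that nothing in this loop depends on the observer's goals: presenting the stimuli is sufficient to change the weights, which is exactly the property questioned below.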
However, purely experience-dependent learning would lead to learning many behaviourally irrelevant tasks. We showed by combinatorial reasoning that only a tiny fraction of all possible tasks can be learned and that, for this reason, we can only learn what we “want” to learn. A counter-intuitive implication of this “combinatorial learning” is that we all perceive the world differently (Key Publication: Herzog & Esfeld, 2009). In addition, we have shown that perceptual learning can occur even when observers merely imagine the stimuli, ruling out most neural network models of perceptual learning in which, as mentioned, only stimulus presentation matters (Key Publications: Tartaglia, Bamert, Mast & Herzog, 2009; see also Tartaglia, Bamert, Herzog & Mast, 2012; Mast, Tartaglia & Herzog, 2012). We are not slaves of stimulus exposure. On the contrary, the conscious mind constitutes itself through perceptual learning.
The role of transfer and roving. Perceptual learning is very specific. For example, observers train to discriminate the horizontal offset direction of two vertical bars (vernier offset discrimination). Performance improves significantly within about 1600 trials. When the bars are rotated by 90 degrees, however, there is no transfer of learning: observers need to train again. Interestingly, perceptual learning is specific even for the motor response (Grzeczkowski et al., 2017, 2019; see also Szumska et al., 2016) and can partly occur without consciousness (Galliussi et al., 2018). The lack of transfer is interesting for academic purposes but a no-go for practical applications. For example, to counteract the effects of aging on perception, training regimes are needed that transfer across many stimulus dimensions. Interestingly, it seems that with the right number of training trials, transfer can occur. We found that there is neither learning nor transfer with few trials per session (10 sessions of 160 trials each). With many trials per session (2 sessions of 800 trials each), there is learning but no transfer. With an intermediate number (4 sessions of 400 trials each), there is both learning and transfer (Aberg, Tartaglia & Herzog, 2009). And there is more: it is better to learn similar tasks in separate sessions than intermingled within a session, i.e., under so-called roving conditions (Aberg & Herzog, 2009; Tartaglia, Aberg & Herzog, 2009). In collaboration with the laboratories of Wulfram Gerstner (EPFL) and Henning Sprekeler (Berlin), we were able to explain mathematically why this is the case (Herzog, Aberg, Frémaux, Gerstner & Sprekeler, 2012) and to make a strong link to reinforcement learning and LTP (Aberg & Herzog, 2012).
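The intuition behind the roving problem can be sketched in a few lines. In reward-modulated learning, weight updates are gated by the difference between the obtained reward and a running reward estimate. The toy simulation below (the task probabilities, learning rates, and update rule are my illustrative assumptions, not the published model) shows that when two intermingled tasks of unequal difficulty share one reward estimate, the gating factor is systematically positive for the easier task and negative for the harder one, leaving a residual "unsupervised" bias in each task's updates:

```python
import random

# Toy illustration of a shared reward baseline under roving.
# Task success rates and learning rates are illustrative assumptions.

random.seed(1)
p_correct = {"easy": 0.9, "hard": 0.6}    # assumed per-task success rates
r_bar = 0.5                               # shared running reward estimate
bias = {"easy": 0.0, "hard": 0.0}
counts = {"easy": 0, "hard": 0}

for trial in range(10000):
    task = random.choice(["easy", "hard"])         # roving: tasks intermingled
    reward = 1.0 if random.random() < p_correct[task] else 0.0
    delta = reward - r_bar                         # modulatory gating factor
    bias[task] += delta
    counts[task] += 1
    r_bar += 0.01 * delta                          # slow running average

# Per-task mean of the gating factor: nonzero for both tasks, because
# r_bar settles between the two tasks' reward rates.
mean_bias = {t: bias[t] / counts[t] for t in bias}
```

Training the tasks in separate sessions would let the baseline track each task's own reward rate, driving the per-task mean of the gating factor toward zero.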
Anesthesiology. We investigated whether perceptual learning can occur during anesthesia. The good news is: it cannot. Anesthesia is safe also with respect to implicit learning (Aberg, Albrecht, Tartaglia, Farron, Soom & Herzog, 2009).
Reinforcement Learning (RL). RL is usually investigated with paradigms in which each action is followed by immediate reward. We have introduced a paradigm for sequential decision making (Tartaglia, Clarke & Herzog, 2017), shown that humans can learn even non-Markovian tasks (Clarke et al., 2015), and found clear evidence for an eligibility trace in human learning (Lehmann et al., 2019).
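The notion of an eligibility trace can be illustrated with a standard tabular TD(λ) toy example (a textbook construction, not the experimental paradigm of Lehmann et al., 2019): states visited earlier in a sequence receive credit for a reward that arrives only at the end, with credit decaying with temporal distance.

```python
# Tabular TD(lambda) on a 5-state deterministic chain: reward 1.0 is
# delivered only at the final transition. Parameters are illustrative.

n_states = 5                     # chain: s0 -> s1 -> ... -> s4 -> reward
alpha, gamma, lam = 0.1, 1.0, 0.9
values = [0.0] * n_states

for episode in range(200):
    trace = [0.0] * n_states     # eligibility trace, reset per episode
    for s in range(n_states):
        next_v = values[s + 1] if s + 1 < n_states else 0.0
        reward = 1.0 if s == n_states - 1 else 0.0
        td_error = reward + gamma * next_v - values[s]
        trace[s] += 1.0                          # mark the visited state
        for i in range(n_states):
            values[i] += alpha * td_error * trace[i]
            trace[i] *= gamma * lam              # decay eligibility

# The delayed reward propagates back along the chain via the trace,
# so all state values approach 1.
```

Without a trace (λ = 0), credit for the final reward would reach earlier states only slowly, one bootstrapping step per episode; the trace assigns it within a single episode, decayed by temporal distance.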
- Herzog MH, Esfeld M (2009). How the mind constitutes itself through perceptual learning. Learning & Perception, 1(1), p147-154.
- Tartaglia EM, Aberg KC, Herzog MH (2009). Modeling perceptual learning: why mice do not play backgammon. Learning & Perception, 1(1), p155-163.
- Grzeczkowski L, Tartaglia E, Mast FW, Herzog MH (2015). Linking perceptual learning with identical stimuli to imagery perceptual learning. Journal of Vision, 15(10), p1-8.
- Tartaglia EM, Bamert L, Herzog MH, Mast FW (2012). Perceptual learning of motion discrimination by mental imagery. Journal of Vision, 12(6):14, p1-10.
- Mast FW, Tartaglia EM, Herzog MH (2012). New Percepts via Mental Imagery? Frontiers in Psychology, 3, p360.
- Tartaglia EM, Bamert L, Mast FW, Herzog MH (2009). Human perceptual learning by mental imagery. Current Biology, 19(24), p2081-5.
- Aberg KC, Herzog MH (2012). About similar characteristics of visual perceptual learning and LTP. Vision Research, 61, p100-106.
- Grzeczkowski L, Cretenoud AF, Mast F, Herzog MH (2019). Motor response specificity in perceptual learning and its release by double training. Journal of Vision, 19(6):4, p1-14.
- Galliussi J, Grzeczkowski L, Gerbino W, Herzog MH, Bernardis P (2018). Is lack of attention necessary for task-irrelevant perceptual learning? Vision Research, 152, p118-125.
- Grzeczkowski L, Cretenoud A, Herzog MH, Mast FW (2017). Perceptual learning is specific beyond vision and decision making. Journal of Vision, 17(6):6, p1-11.
- Szumska I, van der Lubbe R, Grzeczkowski L, Herzog MH (2016). Does sensitivity in binary choice tasks depend on response modality? Consciousness and Cognition, 43, p57-65.
- Herzog MH, Aberg KC, Frémaux N, Gerstner W, Sprekeler H (2012). Perceptual learning, roving and the unsupervised bias. Vision Research, 61, p95-99.
- Aberg KC, Herzog MH (2010). Does perceptual learning suffer from retrograde interference? PLoS ONE, 5(12), e14161.
- Spang K, Grimsen C, Herzog MH, Fahle M (2010). Orientation specificity of learning vernier discriminations. Vision Research, 50(4), p479-485.
- Aberg KC, Tartaglia EM, Herzog MH (2009). Perceptual learning with Chevrons requires a minimal number of trials, transfers to untrained directions, but does not require sleep. Vision Research, 49(16), p2087-94.
- Aberg KC, Herzog MH (2009). Interleaving bisection stimuli – randomly or in sequence – does not disrupt perceptual learning, it just makes it more difficult. Vision Research, 49(21), p2591-8.
- Tartaglia EM, Aberg KC, Herzog MH (2009). Perceptual learning and roving: Stimulus types and overlapping neural populations. Vision Research, 49(11), p1420-7.
- Parkosadze K, Otto TU, Malania M, Kezeli A, Herzog MH (2008). Perceptual learning of bisection stimuli under roving: slow and largely specific. Journal of Vision, 8(1):5, p1-8.
- Aberg KC, Albrecht E, Tartaglia EM, Farron A, Soom P, Herzog MH (2009). Anesthesia prevents auditory perceptual learning. Anesthesiology, 111(5), p1010-5.
- Otto TU, Herzog MH, Fahle M, Zhaoping L (2006). Perceptual learning with spatial uncertainties. Vision Research, 46(19), p3223-33.
- Aberg KC, Clarke AM, Sandi C, Herzog MH (2012). Trait anxiety and post-learning stress do not affect perceptual learning. Neurobiol Learn Mem, 98(3), p246-53.
- Hartmann M, Furrer S, Herzog MH, Merfeld DM, Mast FW (2013). Self-motion perception training: thresholds improve in the light but not in the dark. Experimental Brain Research, 226(2), p231-240.
- Lehmann MP, Xu HA, Liakoni V, Herzog MH, Gerstner W, Preuschoff K (2019). One-shot learning and behavioral eligibility traces in sequential decision making. eLife, 8:e47463, p1-25.
- Tartaglia EM, Clarke AM, Herzog MH (2017). What to Choose Next? A Paradigm for Testing Human Sequential Decision Making. Frontiers in Psychology, 8:312, p1-11.
- Clarke AM, Friedrich J, Tartaglia EM, Marchesotti S, Senn W, Herzog MH (2015). Human and Machine Learning in Non-Markovian Decision Making. PLoS ONE, 10(4), e0123105.