How do we translate sensory evidence into a decision? For example, how do we decide whether a dot rotates clockwise or anti-clockwise? Classically, visual information processing is assumed to be feedforward, and evidence about a stimulus is encoded by the firing rate of the neurons coding for that stimulus. For decision making, this sensory evidence is fed directly into a race model, in which evidence drives a decision variable towards one of two (or more) decision boundaries. As soon as a boundary is crossed, a motor command is elicited. Both intuition and independent race models predict that early-arriving evidence drives the decision more strongly than later-arriving evidence. We showed, however, that the opposite is the case.
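The classical account can be sketched as a minimal independent race model. The threshold, drift, and noise values below are illustrative choices for the sketch, not fitted parameters:

```python
import random

def race_model(evidence, threshold=5.0, noise=0.5, seed=0):
    """Minimal race model (illustrative parameters).  Each evidence
    sample pushes a decision variable towards the +threshold ('right')
    or -threshold ('left') boundary; the first crossing elicits the
    response."""
    rng = random.Random(seed)
    x = 0.0
    for e in evidence:
        x += e + rng.gauss(0.0, noise)
        if x >= threshold:
            return "right"
        if x <= -threshold:
            return "left"
    # No boundary crossed: respond according to the sign of the variable.
    return "right" if x > 0 else "left"

# Evidence for 'right' followed by equally strong evidence for 'left':
# the boundary is reached before the later evidence arrives, so the
# first-arriving evidence determines the decision (noise disabled here
# for a deterministic demonstration).
print(race_model([1.0] * 10 + [-1.0] * 10, noise=0.0))  # → right
```

This is exactly the property the experiments below contradict: in such a model, whatever arrives first wins.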
We presented a vernier stimulus consisting of two vertical bars that are slightly offset in the horizontal direction, either to the left or to the right. Immediately after this vernier, we presented a second vernier with the opposite offset direction. Because of the short presentation times, the two verniers fuse, i.e., only one vernier is perceived. Interestingly, the two offsets combine into a single perceived offset: observers cannot tell whether the first or the second vernier was offset to the right or left (Key Publication: Scharnowski et al., 2009). As mentioned, independent race models predict that the first vernier should determine decisions more strongly than the second one. Hence, when, for example, the first vernier is offset to the right, a right decision should be more likely than a left one. Moreover, the longer the presentation times of the two verniers, the more strongly the first vernier should dominate. The opposite was the case. When the first and second vernier were presented for 10 ms each, both verniers contributed almost equally to performance. However, when both verniers were presented for 40 ms each, observers almost always decided for the offset direction of the second vernier. We proposed that the race model is preceded by a leaky integrator. Computer simulations showed a good match between the experimental and the theoretical data (Rüter et al., 2012).
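A minimal sketch of this two-stage idea, assuming a simple discrete-time leaky integrator whose final state is what gets handed to the race stage. The time constant, step size, and stimulus coding are illustrative, not the fitted parameters of Rüter et al. (2012):

```python
def leaky_integrate(stimulus, dt=1.0, tau=20.0):
    """Discrete-time leaky integrator, dx/dt = -x/tau + s(t).
    Earlier input decays while later input is still fresh, so the
    final state is dominated by the most recent evidence.
    tau and dt (in ms) are illustrative values."""
    x = 0.0
    for s in stimulus:
        x += dt * (-x / tau + s)
    return x

def fused_offset(duration_ms):
    """First vernier coded +1 ('right') for duration_ms, then the
    second vernier coded -1 ('left') for duration_ms; the sign of
    the integrator's final state is the evidence passed to the race."""
    stim = [1.0] * int(duration_ms) + [-1.0] * int(duration_ms)
    return leaky_integrate(stim)

# At 10 ms per vernier the residual is small; at 40 ms per vernier the
# second vernier dominates the integrator's final state far more strongly.
print(fused_offset(10), fused_offset(40))
```

Because of the leak, the first vernier's trace has largely decayed by the time the second vernier ends, so the state passed to the race favors the second vernier, and increasingly so for longer presentation times, in line with the 40 ms result.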
We identify this buffer, i.e., the leaky integrator, with visual information processing itself (Rüter et al., 2012). As mentioned, visual processing is often thought to be feedforward, with its output fed directly into the race model. We propose instead that visual processing is recurrent and long-lasting, and that only after a substantial period of processing is its output fed into the race process. A TMS experiment provides evidence for such long-lasting processing, including long-lasting memories for the individual elements (see Figure 1; Key Publication: Scharnowski et al., 2009). In addition, as in crowding, visual masking, and non-retinotopic processing, grouping, perceptual organization, and attention (Hochmitz et al., 2018) play a key role: grouping determines what is subjected to the race process (Key Publication: Hermens et al., 2009).
Figure 1: Effects of TMS on feature fusion. First, we adjusted the offset size of the first vernier such that performance was around 50%, i.e., on average both verniers contributed equally to performance (‘no TMS’; indicated by the dashed line). Next, we applied TMS at different times after the onset of the first vernier (TMS onset asynchrony). For onset asynchronies from 45 to 95 ms, the second vernier dominated performance; for onset asynchronies of more than 145 ms, the first vernier dominated. The surprising result is that TMS has differential effects for up to 370 ms after the onset of the first vernier, even though only one fused vernier is consciously perceived. Error bars indicate 95% confidence intervals based on a bootstrap analysis; vernier presentations are indicated by the small depictions in the graph; performance was quantified as the percentage of responses in which the perceived offset direction of the fused vernier corresponded to that of the first vernier. From Scharnowski et al. (2009).
- Rüter J, Sprekeler H, Gerstner W, Herzog MH (2013). The Silent Period of Evidence Integration in Fast Decision Making. PLoS ONE, 8(1), e46525.
- Rüter J, Marcille N, Sprekeler H, Gerstner W, Herzog MH (2012). Paradoxical Evidence Integration in Rapid Decision Processes. PLoS Computational Biology, 8(2), e1002382.
- Rüter J, Kammer T, Herzog MH (2010). When transcranial magnetic stimulation (TMS) modulates feature integration. European Journal of Neuroscience, 32(11), 1951–1958.
- Scharnowski F, Rüter J, Jolij J, Hermens F, Kammer T, Herzog MH (2009). Long-lasting modulation of feature integration by transcranial magnetic stimulation. Journal of Vision, 9(6):1, 1–10.
- Grainger JE, Scharnowski F, Schmidt T, Herzog MH (2013). Two primes priming: Does feature integration occur before response activation? Journal of Vision, 13(8):19, 1–10.
- Hochmitz I, Lauffs MM, Herzog MH, Yeshurun Y (2018). Sustained spatial attention can affect feature fusion. Journal of Vision, 18(6):20, 1–14.
- Pilz KS, Zimmermann C, Scholz J, Herzog MH (2013). Long-lasting visual integration of form, motion, and color as revealed by visual masking. Journal of Vision, 13(10):12, 1–11.
- Hermens F, Scharnowski F, Herzog MH (2009). Spatial grouping determines temporal integration. Journal of Experimental Psychology: Human Perception and Performance, 35(3), 595–610.
- Herzog MH, Scharnowski F, Hermens F (2007). Long lasting effects of unmasking in a feature fusion paradigm. Psychological Research, 71(6), 653–658.
- Scharnowski F, Hermens F, Kammer T, Öğmen H, Herzog MH (2007). Feature fusion reveals slow and fast visual memories. Journal of Cognitive Neuroscience, 19(4), 632–641.
- Scharnowski F, Hermens F, Herzog MH (2007). Bloch’s law and the dynamics of feature fusion. Vision Research, 47(18), 2444–2452.
- Herzog MH, Lesemann E, Eurich CW (2006). Spatial interactions determine temporal feature integration as revealed by unmasking. Advances in Cognitive Psychology, 2(1), 77–85.
- Aberg KC, Herzog MH (2012). Different types of feedback change decision criterion and sensitivity differently in perceptual learning. Journal of Vision, 12(3), 1–11.
- Herzog MH, Ewald KRF, Hermens F, Fahle M (2006). Reverse feedback induces position and orientation specific changes. Vision Research, 46(22), 3761–3770.