Decision making

How do we translate sensory evidence into a decision? For example, how do we decide whether a dot rotates clockwise or anti-clockwise? Classically, it is assumed that visual information processing is feedforward and that evidence about a stimulus is encoded in the firing rate of the neurons coding for that stimulus. For decision making, this sensory evidence is fed directly into a race model, in which evidence drives a decision variable towards one of two (or more) decision boundaries. As soon as a boundary is crossed, a motor command is elicited. Intuition and independent race models assume that evidence arriving first drives the decision more strongly than evidence arriving later. We showed, however, that the opposite is the case.
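To make the race logic concrete, here is a minimal sketch of an independent race with two accumulators. It is a generic illustration with threshold, noise level, and function names of our own choosing, not the specific model used in any of the publications cited below.

```python
import numpy as np

def independent_race(evidence_a, evidence_b, threshold=1.0, noise_sd=0.1, rng=None):
    """Independent race model: each alternative accumulates its own momentary
    evidence; the first accumulator to cross the threshold determines the
    decision and triggers the motor command."""
    rng = np.random.default_rng() if rng is None else rng
    acc_a = acc_b = 0.0
    for t, (e_a, e_b) in enumerate(zip(evidence_a, evidence_b)):
        acc_a += e_a + rng.normal(0.0, noise_sd)  # evidence for alternative A (e.g., "right offset")
        acc_b += e_b + rng.normal(0.0, noise_sd)  # evidence for alternative B (e.g., "left offset")
        if acc_a >= threshold:
            return "A", t  # boundary crossed -> decide A
        if acc_b >= threshold:
            return "B", t  # boundary crossed -> decide B
    return None, len(evidence_a)  # no boundary reached within the trial
```

Because samples that arrive first are accumulated for the longest time before a boundary is reached, such a race naturally predicts that the earliest evidence dominates the decision.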

We presented a vernier stimulus consisting of two vertical bars that are slightly offset horizontally, either to the left or to the right. Immediately after this vernier, we presented a second vernier with the opposite offset direction. Because of the short presentation times, the two verniers fuse, i.e., only one vernier is perceived. Interestingly, the two offsets combine into one perceived offset: observers cannot tell whether the first or the second vernier was offset to the right or to the left (Key Publication: Scharnowski et al., 2009). As mentioned, independent race models predict that the first vernier should determine decisions more strongly than the second one. Hence, when, for example, the first vernier is offset to the right, a right decision should be more likely than a left decision. In addition, the longer the presentation times of the two verniers, the more strongly the first vernier should dominate. The opposite was the case. When the first and second vernier were presented for 10 ms each, both verniers contributed about equally to performance. However, when both verniers were presented for 40 ms each, observers decided almost always for the offset direction of the second vernier. We proposed that the race model is preceded by a leaky integrator. Computer simulations showed a good match between the experimental and the theoretical data (Rüter et al., 2012).
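To illustrate why a leaky integrator in front of the race stage favors the second vernier more strongly at longer durations, here is a small numerical sketch. The leak rate, readout time, and unit inputs are arbitrary illustrative values, not the parameters fitted in Rüter et al. (2012).

```python
import numpy as np

def effective_weights(duration_ms, leak=0.05):
    """For a leaky integrator x[t+1] = (1 - leak) * x[t] + input[t], a sample
    presented at time t still carries weight (1 - leak)**(2*d - 1 - t) when the
    state is read out at stimulus offset (t = 2*d). The first vernier occupies
    samples 0..d-1, the second samples d..2d-1."""
    d = duration_ms
    decay = (1.0 - leak) ** np.arange(2 * d - 1, -1, -1)  # weight of each sample at readout
    return decay[:d].sum(), decay[d:].sum()

for d in (10, 40):
    w_first, w_second = effective_weights(d)
    print(f"{d} ms per vernier: second:first weight ratio = {w_second / w_first:.1f}")
```

With 10 ms per vernier the imbalance is small (a ratio of about 1.7 for this leak rate), so noisy decisions in the subsequent race remain close to balanced; with 40 ms per vernier the first vernier has largely leaked away by readout (a ratio close to 8), so decisions follow the second vernier's offset, in line with the data.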

We identify this leaky integration stage, i.e., the buffer preceding the race, with visual information processing itself (Rüter et al., 2012). As mentioned, visual processing is often thought to be feedforward, with the feedforward output fed directly into the race model. We propose instead that visual processing is recurrent and long-lasting, and that its output is fed into the race process only after a substantial period of processing. A TMS experiment provides evidence for such long-lasting processing, including long-lasting memories for individual elements (see Figure 1; Key Publication: Scharnowski et al., 2009). In addition, as with crowding, visual masking, and non-retinotopic processing, grouping, perceptual organization, and attention (Hochmitz et al., 2018) are key: grouping determines what is subjected to the race process (Key Publication: Hermens et al., 2009).


Figure 1: Effects of TMS on Feature Fusion. First, we adjusted the offset size of the first vernier such that performance was around 50%, i.e., on average both verniers contributed equally to performance (‘no TMS’; indicated by the dashed line). Next, we applied TMS at different times after the onset of the first vernier (TMS onset asynchrony). For onset asynchronies ranging from 45 to 95 ms, the second vernier dominated performance. For TMS onset asynchronies of more than 145 ms, the first vernier dominated. The surprising result is that TMS has differential effects for up to 370 ms after the onset of the first vernier, even though only one fused vernier is consciously perceived. Error bars indicate 95% confidence intervals based on a bootstrap analysis; vernier presentations are indicated by the small depictions in the graph; performance was quantified as the percentage of responses in which the perceived offset direction of the fused vernier corresponded to that of the first vernier. From Scharnowski et al. (2009).

Publications

Race models
Priming
Feature fusion
Manipulating the decision criterion