Contrast-modulated motion transfers to luminance-modulated motion, but not the other way around (Petrov & Hayes, 2011)

Perceptual learning was used as a tool for studying motion perception.  The pattern of transfer of learning between luminance-modulated (LM) and contrast-modulated (CM) motion is diagnostic of how their respective processing pathways are integrated.  Twenty observers practiced fine motion direction discrimination with either additive (LM) or multiplicative (CM) mixtures of a dynamic noise carrier and a radially isotropic texture modulator.  Group 1 pre-tested CM for 2 blocks, trained LM for 16 blocks, and post-tested CM for 6 blocks during 6 sessions on separate days.  In Group 2, the LM and CM roles were reversed.  The d′ improved almost twofold in both groups.  There was full transfer of learning from CM to LM but no significant transfer from LM to CM.  The pattern of post-switch improvement was asymmetric as well: no further learning during the LM post-test versus rapid relearning during the CM post-test.
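As a reminder of the sensitivity metric used above, d′ for a discrimination task is the separation, in z-units, between the hit and false-alarm rates.  A minimal sketch in Python; the rates below are illustrative only, not the paper's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates only: an almost-twofold improvement in d'
# corresponds to, e.g., roughly 1.0 before training and 2.0 after.
before = d_prime(0.69, 0.31)   # d' close to 1.0
after = d_prime(0.84, 0.16)    # d' close to 2.0
```

Note that d′ is criterion-free: it separates sensitivity from response bias, which is why it is the standard dependent measure in perceptual-learning studies like this one.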

These strong asymmetries suggest a dual-pathway architecture with Fourier channels sensitive only to LM signals and non-Fourier channels sensitive to both LM and CM.  We hypothesized that the channels tuned for the same motion direction but different carriers were integrated using a MAX operation.


Above is a simplified sketch of a generic dual-pathway model of motion processing.  It shows three channels tuned for different directions of motion, as indicated by the arrows.  The second-order circuits are highlighted in gray.  Within each motion direction, there are multiple channels tuned for different spatial frequencies.  The input layer (a) is processed by a bank of early spatial filters (layer b).  The first-order pathway routes the filtered signal directly to the motion extractors (ME) in layer (e).  The second-order pathway includes a filter-rectify-filter cascade in layers b-d.  We propose that the two pathways are combined via a MAX operation (layer f) to achieve cue invariance.  The information is then integrated across motion directions in layer (g), and a discrimination decision is made in layer (h).
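The pathway structure described above can be sketched computationally.  The toy example below, in Python with NumPy, is an assumption-laden illustration, not the paper's model: the filters are crude Gaussian convolutions, the stimulus is one-dimensional, and motion extraction is omitted.  It shows only the key architectural idea, i.e. a first-order (linear filtering) channel, a second-order filter-rectify-filter cascade, and their combination via a pointwise MAX:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stimulus: a high-frequency noise carrier whose
# contrast is modulated by a low-frequency envelope (a CM-like signal).
x = np.linspace(0, 1, 512)
carrier = rng.standard_normal(512)
envelope = 1 + 0.5 * np.sin(2 * np.pi * 4 * x)
stimulus = carrier * envelope

def lowpass(signal, width):
    """Crude low-pass filter: convolution with a normalized Gaussian."""
    t = np.arange(-3 * width, 3 * width + 1)
    kernel = np.exp(-t**2 / (2 * width**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def bandpass(signal, width):
    """Crude band-pass filter: difference of two low-pass scales."""
    return lowpass(signal, width) - lowpass(signal, 4 * width)

# First-order (Fourier) channel: early linear filtering only (layer b).
first_order = bandpass(stimulus, 2)

# Second-order (non-Fourier) channel: filter-rectify-filter cascade
# (layers b-d) recovers the low-frequency contrast envelope.
second_order = bandpass(np.abs(bandpass(stimulus, 2)), 16)

def normalize(response):
    """Scale a channel response to unit peak magnitude."""
    return response / (np.abs(response).max() + 1e-12)

# Cue-invariant combination (layer f): pointwise MAX over the two
# normalized pathways, so whichever cue is present drives the output.
combined = np.maximum(normalize(first_order), normalize(second_order))
```

The MAX stage is what makes the combined response cue-invariant in this sketch: an LM stimulus drives the first-order channel, a CM stimulus drives the second-order channel, and the output layer sees a strong response either way without needing to know which cue carried the motion.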