Do we really become smarter when our fluid-intelligence scores improve?

Raven's matrices and other matrix-reasoning tasks are widely used to assess the efficacy of cognitive training regimens.

Recent reports of training-induced gains on fluid intelligence tests have fueled an explosion of interest in cognitive training, now a billion-dollar industry. The interpretation of these results is questionable because score gains can be dominated by factors that contribute only marginally to the scores themselves, and because intelligence gain is not the only possible explanation for the observed control-adjusted far transfer across tasks.

SRSA extracted differences in scanpath patterns between pretest and posttest, revealing that participants were refining their strategies. Strategy refinement explained one third of the variance in score gains.

The test score gains typically used to measure the efficacy of cognitive training may reflect strategy refinement rather than intelligence gains (Hayes, Petrov, & Sederberg, 2015). Successor Representation Scanpath Analysis (SRSA) of eye-movement data from 35 participants who solved Raven's Advanced Progressive Matrices in two separate sessions indicated that one third of the variance in score gains could be attributed to test-taking strategy alone, as revealed by characteristic changes in eye-fixation patterns. When this strategic contaminant was partialled out, the residual score gains were no longer significant. These results are compatible with established theories of skill acquisition, which suggest that procedural knowledge tacitly acquired during training can later be exploited at posttest.
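At its core, SRSA summarizes a scanpath as a successor-representation (SR) matrix learned by temporal-difference updates over areas of interest (AOIs), so that each entry estimates the discounted expected future fixations on one AOI given a current fixation on another. The sketch below illustrates that core step only; the function name, AOI coding, and the learning-rate and discount values are illustrative assumptions, not the paper's exact settings or pipeline (which also involves comparing fitted matrices across participants and sessions).

```python
import numpy as np

def scanpath_sr(fixations, n_aois, alpha=0.1, gamma=0.5):
    """Fit a successor-representation matrix to one scanpath.

    fixations : sequence of AOI indices in fixation order.
    Each observed transition s -> s_next nudges row s toward the
    one-step target, so M[i, j] comes to estimate the discounted
    expected number of future fixations on AOI j after fixating AOI i.
    alpha and gamma are illustrative values, not the published settings.
    """
    M = np.zeros((n_aois, n_aois))
    for s, s_next in zip(fixations[:-1], fixations[1:]):
        one_hot = np.zeros(n_aois)
        one_hot[s_next] = 1.0
        # TD(0) update: move M[s] toward (immediate visit + discounted future)
        M[s] += alpha * (one_hot + gamma * M[s_next] - M[s])
    return M

# Toy scanpath over 3 AOIs: a systematic, repetitive scanning pattern
systematic = [0, 1, 2, 0, 1, 2, 0, 1, 2]
M = scanpath_sr(systematic, n_aois=3)
```

On this toy input, the transition structure of the scan (0 is reliably followed by 1) is concentrated in the corresponding entries of `M`; in the full analysis, such matrices serve as per-participant features whose pretest-to-posttest changes index strategy refinement.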

SRSA prediction weights for low-improvement (left) and high-improvement (right) participants. A comparison of the prediction weights shows markedly more diffuse scanning in the low-improvement group and a gain in systematicity in the high-improvement group that drives practice effects.