Abstract

Objective: Patients with schizophrenia (SZ) show reinforcement learning impairments related both to the gradual/procedural acquisition of reward contingencies and to the ability to use trial-to-trial feedback to make rapid behavioral adjustments. Method: We used neurocomputational modeling to develop plausible mechanistic hypotheses explaining reinforcement learning impairments in individuals with SZ. We tested the model with a novel Go/NoGo learning task in which subjects had to learn to respond or withhold responses when presented with different stimuli associated with different probabilities of gains or losses in points. We analyzed data from 34 patients and 23 matched controls, characterizing positive- and negative-feedback-driven learning in both a training phase and a test phase. Results: Consistent with simulations from a computational model of aberrant dopamine input to the basal ganglia, patients with SZ showed an overall increased rate of responding in the training phase, together with reduced response-time acceleration to frequently rewarded stimuli across training blocks, and a reduced relative preference for frequently rewarded training stimuli in the test phase. Patients did not differ from controls on measures of procedural negative-feedback-driven learning, although patients with SZ exhibited deficits in trial-to-trial adjustments to negative feedback, with these measures correlating with negative symptom severity. Conclusions: These findings support the hypothesis that patients with SZ have a deficit in procedural "Go" learning, linked to abnormalities in DA transmission at D1-type receptors, despite a "Go bias" (increased response rate) potentially related to excessive tonic dopamine. Deficits in trial-to-trial reinforcement learning were limited to a subset of patients with SZ with severe negative symptoms, putatively stemming from prefrontal cortical dysfunction.

  • Publication date: 2011-1