Abstract

W.K. Estes often championed an approach to model development whereby an existing model was augmented by the addition of one or more free parameters to account for additional psychological mechanisms. Following this same approach, we utilized Estes' (1950) own augmented learning equations to improve the plausibility of a win-stay-lose-shift (WSLS) model that we have used in much of our recent work. We also improved the plausibility of a basic reinforcement-learning (RL) model by augmenting its assumptions. Estes also championed models that assumed a comparison between multiple concurrent cognitive processes. In line with this, we developed a WSLS-RL model that assumes that people have tendencies to stay with the same option or shift to a different option following trials with relatively good ("win") or bad ("lose") outcomes, and that these tendencies to stay or shift are adjusted based on the relative expected value of each option. Comparisons of simulations of the WSLS-RL model with data from three different decision-making experiments suggest that the WSLS-RL model provides a good account of decision-making behavior. Our results also support the assertion that human participants weigh both the overall valence of the previous trial's outcome and the relative value of each option during decision-making.
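The abstract does not give the model equations, but the general idea of combining a WSLS process with an RL process can be illustrated with a minimal simulation sketch. The Python code below assumes a simple mixture in which choice probabilities are a weighted blend of a WSLS component (stay after a "win", shift after a "loss") and an RL component (delta-rule value updates passed through a softmax). The parameter names (`p_stay_win`, `p_shift_lose`, `alpha`, `beta`, `w`) and the criterion for a "win" (an outcome at or above the mean expected value) are illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wsls_rl(rewards, p_stay_win=0.8, p_shift_lose=0.7,
                     alpha=0.1, beta=2.0, w=0.5, n_trials=200):
    """Simulate two-alternative choices from a hypothetical WSLS-RL mixture.

    rewards: callable(trial, choice) -> payoff for the chosen option.
    p_stay_win / p_shift_lose: WSLS tendencies to stay after a "win"
        and to shift after a "loss" (assumed free parameters).
    alpha, beta: RL learning rate and softmax inverse temperature.
    w: mixture weight on the WSLS component (1 - w on the RL component).
    """
    q = np.zeros(2)                       # expected value of each option
    choices, payoffs = [], []
    last_choice, last_win = None, None
    for t in range(n_trials):
        # RL component: softmax over current expected values
        p_rl = np.exp(beta * q) / np.exp(beta * q).sum()

        # WSLS component: stay/shift probabilities given the last outcome
        if last_choice is None:
            p_wsls = np.array([0.5, 0.5])
        else:
            p_stay = p_stay_win if last_win else 1.0 - p_shift_lose
            p_wsls = np.full(2, 1.0 - p_stay)
            p_wsls[last_choice] = p_stay

        # Blend the two processes into a single choice distribution
        p_choice = w * p_wsls + (1 - w) * p_rl
        choice = rng.choice(2, p=p_choice)

        r = rewards(t, choice)
        q[choice] += alpha * (r - q[choice])   # delta-rule value update
        last_win = r >= q.mean()               # assumption: "win" = relatively good outcome
        last_choice = choice
        choices.append(choice)
        payoffs.append(r)
    return np.array(choices), np.array(payoffs)


# Example: option 1 pays more on average than option 0
choices, payoffs = simulate_wsls_rl(
    rewards=lambda t, c: rng.normal(60 if c else 50, 10))
print("proportion of option-1 choices:", choices.mean())
```

In this sketch the mixture weight `w` plays the role of comparing two concurrent processes: with `w = 1` behavior is driven purely by the previous outcome's valence, and with `w = 0` purely by the options' relative expected values.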

  • Publication date: April 2014