Statistical learning under implicit and explicit learning conditions: Behavioral and electrophysiological evidence
Ana Paula Soares1, Helena Oliveira1, Francisco Gutierrez1, Margarida Vasconcelos2, David Tomé3, & Luis Jiménez4
1 Human Cognition Lab, CIPsi, University of Minho, Portugal.
2 Psychological Neuroscience Lab, CIPsi, University of Minho, Portugal.
3 Polytechnic Institute of Porto, Portugal.
4 Department of Psychology, University of Santiago de Compostela, Spain.
Statistical learning (SL), the process of extracting regularities from the surrounding environment, plays an essential role in many aspects of cognition, including speech segmentation and language acquisition (see Romberg & Saffran, 2010, or Saffran & Kirkham, 2018, for recent reviews). It is assumed to occur automatically, through passive exposure (e.g., Saffran, Newport, Aslin, Tunick, & Barrueco, 1997), and to enhance processing by allowing the brain to predict and prepare for the incoming input (Batterink, Reber, & Paller, 2015). Yet the neural mechanisms and behavioral outcomes underpinning SL under implicit vs. explicit learning conditions remain unclear. Here we present a study conducted to test functional differences in SL under implicit and explicit learning conditions, in a within-subject design, using an auditory embedded-triplet task modeled on Saffran, Aslin, and Newport (1996), but in which the to-be-learned syllable sequences embedded in the input stream presented different levels of transitional probabilities (TPs). To that purpose, undergraduate students from the University of Minho first performed the implicit version of the auditory SL task (aSLimpl), in which the underlying regularities of eight three-syllable nonsense-words (half with a mean TP = 1, referred to as easy nonsense-words, and the other half with a mean TP = 0.33, referred to as hard nonsense-words) had to be abstracted through mere exposure. Subsequently, they performed the explicit version of the same task (aSLexpl), in which the underlying regularities of four other easy and four other hard three-syllable nonsense-words, drawn from a different set of syllables, were explicitly taught.
In both versions, participants were exposed to a ~7-minute continuous stream of the eight nonsense-words (e.g., dotigetucidabupepotidomimodegomigedogemitibibaca…) repeated 60 times, in two blocks of 30 repetitions each, presented in pseudorandom order. In the aSLexpl version, however, participants were made aware of the nonsense-words before listening to the stream in which they were embedded (i.e., they were told that, for example, dotige and tucida were ‘words’ in that language). Event-related potentials (ERPs) were recorded during the familiarization phase of each task, allowing continuous monitoring of learning processing in the brain. Following each familiarization phase, and to assess SL explicitly, participants performed a two-alternative forced-choice (2-AFC) task in which they were asked to decide, as quickly and as accurately as possible, which of two auditorily presented stimuli (a nonsense-word and a nonsense foil) sounded more familiar given the stream presented during familiarization. Grand-averaged ERPs in the explicit version of the task showed an N400 component to syllables occurring in later (final), more predictable positions compared to initial (first), less predictable positions, indicative of ‘word’ segmentation in the brain (e.g., Abla, Katahira, & Okanoya, 2008; Cunillera, Toro, Sebastian-Galles, & Rodriguez-Fornells, 2006; De Diego-Balaguer, Toro, Rodriguez-Fornells, & Bachoud-Lévi, 2007). In the implicit task, this effect was apparent only for the easy nonsense-words; no effect was found for the hard nonsense-words. These results suggest that SL under implicit and explicit conditions might rely on different learning mechanisms.
Abla, D., Katahira, K., & Okanoya, K. (2008). On-line assessment of statistical learning by event-related potentials. Journal of Cognitive Neuroscience, 20, 952–964.
Batterink, L., Reber, P. J., & Paller, K. A. (2015). Functional differences between statistical learning with and without explicit training. Learning and Memory, 22, 544–556.
Cunillera, T., Toro, J. M., Sebastian-Galles, N., & Rodriguez-Fornells, A. (2006). The effects of stress and statistical cues on continuous speech segmentation: An event-related brain potential study. Brain Research, 1123, 168–178.
De Diego-Balaguer, R., Toro, J. M., Rodriguez-Fornells, A., & Bachoud-Lévi, A. (2007). Different neurophysiological mechanisms underlying word and rule extraction from speech. PLoS ONE, 2, e1175. doi:10.1371/journal.pone.0001175
Romberg, A. R., & Saffran, J. R. (2010). Statistical learning and language acquisition. Wiley Interdisciplinary Reviews: Cognitive Science, 1(6), 906–914.
Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926–1928.
Saffran, J. R., Newport, E. L., Aslin, R. N., Tunick, R. A., & Barrueco, S. (1997). Incidental language learning. Psychological Science, 8, 101–105.
Saffran, J. R., & Kirkham, N. Z. (2018). Infant statistical learning. Annual Review of Psychology, 69, 181–203.