

Listeners have difficulty understanding unfamiliar regional accents of their native language. This is in part because the speech sounds of the accent mismatch those of the language standard (and/or the listener's own accent). Listening difficulty is magnified when the unfamiliar regional accent is in a foreign language: the unusual foreign vowels and consonants may mismatch more with native sound categories, and may even fail to match any native category. This situation arises, for example, when we watch a film in a second language. Imagine an American listener, fluent in Mexican Spanish, watching El Laberinto del fauno. She may have considerable difficulty understanding the European Spanish if she is unfamiliar with that language variety. How might she be able to cope better? We argue here that subtitles can help. Critically, the subtitles should be in Spanish, not English. This is because subtitles in the language of the film indicate which words are being spoken, and so can boost perceptual learning about foreign speech sounds.

Perceptual learning studies show that speech processing in the listener's native language can be retuned by lexical knowledge. Specifically, listeners can learn to interpret an ambiguous phoneme on the basis of disambiguating lexical contexts. In a typical experiment, an ambiguous segment midway between /s/ and /f/ ("?") might appear either in sequences such as hor? or, for another group of listeners, in sequences such as gira?. Since horse and giraffe are words and horf and giras are not, the first group learns to perceive the ambiguous sound as /s/, and the second group learns that it is /f/. Lexically biased exposure thus results in shifts in the perceptual /s/-/f/ category boundary. What is learned during these exposure conditions is used to interpret previously unheard words. That is, listeners who have heard sequences such as hor? would subsequently tend to interpret li? as lice rather than life, while those who have heard gira? would tend to recognize li? as life.
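To make the retuning mechanism concrete, the following is a minimal toy simulation of the paradigm just described. It is only a sketch under stated assumptions: the prototype representation of the /s/ and /f/ categories, the learning rate, and the number of exposure trials are illustrative choices, not the model used in these studies.

```python
# Toy simulation of lexically guided perceptual retuning (illustrative only;
# the prototype-update rule and all parameter values are assumptions).
# The /s/-/f/ continuum runs from 0.0 (clear /f/) to 1.0 (clear /s/); a sound
# is labelled /s/ if it lies closer to the listener's /s/ prototype.

LEARNING_RATE = 0.2  # assumed per-trial update strength
AMBIGUOUS = 0.5      # the "?" segment, midway between /s/ and /f/


def categorize(sound, proto_s, proto_f):
    """Label a continuum value by its nearest category prototype."""
    return "s" if abs(sound - proto_s) <= abs(sound - proto_f) else "f"


def expose(proto_s, proto_f, sound, lexical_label):
    """One exposure trial: the lexical context (e.g. hor? -> 'horse') tells
    the listener which category the ambiguous sound belongs to, and that
    category's prototype is pulled toward the sound actually heard."""
    if lexical_label == "s":
        proto_s += LEARNING_RATE * (sound - proto_s)
    else:
        proto_f += LEARNING_RATE * (sound - proto_f)
    return proto_s, proto_f


for group, label in [("hor? group", "s"), ("gira? group", "f")]:
    proto_s, proto_f = 1.0, 0.0  # pre-exposure prototypes: clear /s/, clear /f/
    for _ in range(20):          # assumed number of exposure trials
        proto_s, proto_f = expose(proto_s, proto_f, AMBIGUOUS, label)
    # Test on the previously unheard word li?: lice if "?" is /s/, life if /f/.
    heard = "lice" if categorize(AMBIGUOUS, proto_s, proto_f) == "s" else "life"
    print(f"{group}: li? -> {heard}")
```

Running the sketch prints "li? -> lice" for the hor? group and "li? -> life" for the gira? group, mirroring the generalization effect described above: the same ambiguous token is interpreted differently depending on the lexically biased exposure each group received.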
