The mismatch between top-down predicted and bottom-up perceptual input is an important mechanism of perception according to the predictive coding framework (Friston). In this paper we develop and validate a new information-theoretic measure that quantifies the mismatch between expected and observed auditory input during speech processing. We argue that such a mismatch measure is useful for the study of speech processing. To compute the mismatch measure, we use naturalistic speech materials containing approximately 50,000 word tokens. For each word token we first estimate the prior word probability distribution with the aid of statistical language modelling, and next use automatic speech recognition to update this word probability distribution based on the unfolding speech signal. We validate the mismatch measure with multiple analyses, and show that the auditory-based update increases the probability of the correct word and lowers the uncertainty of the word probability distribution. Based on these results, we argue that the mismatch between predicted and perceived speech input can be explicitly estimated as the cross entropy between the word expectations computed before and after an auditory update.
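The core computation the abstract describes can be sketched with toy distributions. The snippet below is a minimal illustration, not the paper's pipeline: the prior and posterior distributions here are hypothetical hand-picked values standing in for the language-model prior and the ASR-updated posterior, and `H(posterior, prior)` is one plausible instantiation of the cross-entropy mismatch between pre- and post-update word expectations.

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_w p(w) * log2 q(w), summed over words with p(w) > 0."""
    return -sum(p[w] * math.log2(q[w]) for w in p if p[w] > 0)

# Hypothetical prior over candidate words from a language model ...
prior = {"cat": 0.5, "cap": 0.3, "can": 0.2}
# ... and a posterior after an auditory (ASR-based) update.
posterior = {"cat": 0.8, "cap": 0.15, "can": 0.05}

# Mismatch between expectations before and after the update.
mismatch = cross_entropy(posterior, prior)

# Entropy (uncertainty) of each distribution: H(p) = H(p, p).
prior_entropy = cross_entropy(prior, prior)
posterior_entropy = cross_entropy(posterior, posterior)
```

In this toy case the update concentrates probability on the correct word, so `posterior_entropy` falls below `prior_entropy`, mirroring the abstract's finding that the auditory update lowers the uncertainty of the word distribution.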
Title of host publication: Proc. Interspeech 2019
Publication status: Published - 18 Sep 2019
Bentum, M., ten Bosch, L., van den Bosch, A., & Ernestus, M. (2019). Quantifying Expectation Modulation in Human Speech Processing. In Proc. Interspeech 2019 (pp. 2270-2274). https://doi.org/10.21437/Interspeech.2019-2685