Testing out creating speech-in-noise stimuli
Frost, Repp, and Katz (1988) conducted three experiments asking whether "speech perception can be influenced by simultaneous presentation of print."
In Experiment 1 they presented spoken words paired with a visual form: the matching printed word, a mismatching word, or a neutral stimulus (XXXXXX). Each spoken word was embedded in different levels of signal-matched noise (white noise was used in Experiment 3). The results of Experiment 1 showed that matching visual words changed response bias rather than sensitivity for detecting the presence or absence of a word. There was also reportedly a marked subjective impression of hearing a word presented in envelope-matched masking noise when it was simultaneously paired with the matching printed word.
I spun up this post to try creating example stimuli like those in Experiment 1, using the approach mentioned in their methods section, which follows Schroeder (1968).
The amplitude-matched noise is created by randomly flipping the sign of each sample of the digitized waveform. This preserves the amplitude envelope of the original word while destroying its spectral fine structure.
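As a rough sketch of that procedure, here is a minimal Python example. It assumes a mono 16-bit WAV recording of a single spoken word named `word.wav`, and the signal-to-noise ratio is an arbitrary choice for illustration, not a value taken from the paper.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical input file: a mono 16-bit PCM recording of one spoken word.
rate, signal = wavfile.read("word.wav")
signal = signal.astype(np.float64)

# Schroeder-style noise: randomly flip the sign of each sample. The
# amplitude envelope of the word is preserved, but the spectral fine
# structure is destroyed, leaving envelope-matched masking noise.
rng = np.random.default_rng(0)
flips = rng.choice([-1.0, 1.0], size=signal.shape)
noise = signal * flips

def mix_at_snr(speech, masker, snr_db):
    """Mix speech with a masker at a given signal-to-noise ratio in dB."""
    speech_rms = np.sqrt(np.mean(speech ** 2))
    masker_rms = np.sqrt(np.mean(masker ** 2))
    gain = speech_rms / (masker_rms * 10 ** (snr_db / 20))
    return speech + gain * masker

# -6 dB SNR is an assumed example level, not from the original study.
stimulus = mix_at_snr(signal, noise, snr_db=-6)

# Rescale to avoid clipping and write out as 16-bit PCM.
stimulus = stimulus / np.max(np.abs(stimulus)) * 32767
wavfile.write("word_in_noise.wav", rate, stimulus.astype(np.int16))
```

Because the noise is derived from the word itself, its level rises and falls with the word's own envelope, which is what makes it "signal-matched" rather than a steady white-noise masker.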