Response Accuracy and Response Time in Multisensory Localization
- Author (aut): Clayton, Colton
- Thesis advisor (ths): Zhou, Yi
- Committee member: Azuma, Tamiko
- Committee member: Daliri, Ayoub
- Publisher (pbl): Arizona State University
Aphasia is an acquired speech-language disorder caused by post-stroke damage to the left hemisphere of the brain. Treating individuals with these speech production impairments can be challenging for clinicians because language recovery after stroke is highly variable and lesion size does not predict language outcome (Lazar et al., 2008). Adequate integration between the sensory and motor systems is also critical for many aspects of fluent speech and for correcting speech errors. The present study investigates how delayed auditory feedback paradigms, which alter the time scale of sensorimotor interactions in speech, might be useful in characterizing the speech production impairments of individuals with aphasia. To this end, six individuals with aphasia and nine age-matched control subjects performed a sentence reading task under delayed auditory feedback at four different delay intervals. Our study found that the aphasia group generated more errors in three of the four linguistic categories measured across all delay lengths, but there was no significant main effect of delay and no interaction between group and delay. Acoustic analyses revealed variability within both the control and aphasia groups on all phoneme types. For example, the individual with conduction aphasia showed significantly larger amplitudes at all delays and significantly longer durations at no delay, but this significance diminished as delay periods increased. Overall, this study suggests that the effects of delayed auditory feedback vary across individuals with aphasia and provides a base of research to be built on by future testing of individuals with varying aphasia types and levels of severity.
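To make the paradigm concrete, the sketch below shows one way a delayed-auditory-feedback loop can be implemented in Python with the sounddevice package; the 150 ms delay, sampling rate, and single-channel setup are illustrative assumptions, not the apparatus used in the study.

```python
# Minimal sketch of a delayed-auditory-feedback (DAF) loop, assuming the
# `sounddevice` package and a single-channel microphone/headphone setup.
# The delay length is illustrative; the study varied delay across intervals.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100          # Hz (assumed)
DELAY_MS = 150               # feedback delay in milliseconds (illustrative)
delay_samples = int(SAMPLE_RATE * DELAY_MS / 1000)

# Ring buffer holding the delayed audio; starts as silence.
buffer = np.zeros(delay_samples, dtype=np.float32)

def callback(indata, outdata, frames, time, status):
    """Write delayed microphone input to the headphones."""
    global buffer
    # Append new input, emit the oldest samples, keep the rest as the delay line.
    buffer = np.concatenate([buffer, indata[:, 0]])
    outdata[:, 0] = buffer[:frames]
    buffer = buffer[frames:]

# Full-duplex stream: microphone in, delayed speech out.
with sd.Stream(samplerate=SAMPLE_RATE, channels=1, dtype="float32",
               callback=callback):
    sd.sleep(10_000)  # run the feedback loop for 10 seconds
```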
When we produce speech movements, we expect a specific auditory consequence; an error occurs when the predicted outcome does not match the actual speech outcome. The brain notes these discrepancies, learns from the errors, and works to reduce them. Previous studies have shown a relationship between speech motor learning and auditory targets: subjects with smaller auditory targets were more sensitive to errors, estimated larger perturbations, and generated larger responses. However, these responses were often ineffective, and the changes were usually minimal. The current study examined whether subjects’ auditory targets can be manipulated in an experimental setting. We recruited 10 healthy young adults to complete a perceptual vowel categorization task using a novel procedure in which subjects heard auditory stimuli and reported each stimulus by locating it relative to adjacent vowels. We found that subjects were less accurate when stimuli were closer to a vowel boundary. Importantly, when we provided visual feedback, subjects improved their accuracy in locating the stimuli. These results indicate that it may be possible to improve subjects’ auditory targets and thus their speech motor learning ability.
Transcranial magnetic stimulation (TMS) is a non-invasive brain stimulation technique used in a variety of research settings, including speech neuroscience studies. However, one difficulty in using TMS for speech studies is the time it takes to localize the lip representation of the motor cortex on the scalp. For my project, I used MATLAB to create a software package that facilitates localization of this ‘hotspot’ for TMS studies in a systematic, reliable manner. The software sends TMS pulses at specified scalp locations, collects electromyography (EMG) data, and extracts motor-evoked potentials (MEPs) so that users can visualize the resulting muscle activation. In this way, users can systematically find a subject’s hotspot for TMS stimulation of the motor cortex. The hotspot detection software proved to be an effective and efficient improvement over previous localization methods.
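The original package is written in MATLAB; as an illustration only, the Python sketch below shows the kind of MEP-extraction step it describes, quantifying the motor-evoked potential as the peak-to-peak EMG amplitude in a short post-pulse window. The window bounds, sampling rate, and placeholder data are assumptions, not the package's actual parameters.

```python
# Illustrative sketch of MEP extraction from an EMG trace, given the sample
# index of each TMS pulse. The 15-45 ms window is a common choice, assumed here.
import numpy as np

def mep_peak_to_peak(emg, pulse_index, fs, win_start_ms=15.0, win_end_ms=45.0):
    """Return the peak-to-peak MEP amplitude after a TMS pulse.

    emg         : 1-D array of EMG samples (e.g., from the lip muscle)
    pulse_index : sample index at which the TMS pulse was delivered
    fs          : EMG sampling rate in Hz
    """
    start = pulse_index + int(win_start_ms * fs / 1000)
    end = pulse_index + int(win_end_ms * fs / 1000)
    window = emg[start:end]
    return float(window.max() - window.min())

# Example: average MEP amplitude across pulses at one scalp location, which a
# user could compare across locations to visualize the "hotspot".
fs = 5000                                   # Hz, assumed EMG sampling rate
emg = np.random.randn(fs * 10) * 0.01       # placeholder EMG trace
pulse_indices = [fs * 1, fs * 3, fs * 5]    # placeholder pulse times
amplitudes = [mep_peak_to_peak(emg, i, fs) for i in pulse_indices]
print("mean MEP amplitude (arbitrary units):", np.mean(amplitudes))
```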
Speech motor learning is important for learning to speak during childhood and for maintaining the speech system throughout adulthood. Motor and auditory cortical regions play crucial roles in speech motor learning. This experiment used transcranial alternating current stimulation, a neurostimulation technique, to influence auditory and motor cortical activity. We used an auditory-motor adaptation task as an experimental model of speech motor learning: subjects repeated words while receiving formant shifts, which made their auditory feedback sound different from what they actually produced. During the adaptation task, subjects received beta (20 Hz), alpha (10 Hz), or sham stimulation applied to the ventral motor cortex, a region involved in planning speech movements. We found that the stimulation did not influence the magnitude of adaptation and suggest that limitations of the study may have contributed to these negative results.
The purpose of this longitudinal study was to predict /r/ acquisition using acoustic signal processing. Nineteen children aged 5-7 years with inaccurate /r/ were followed until they turned 8 or acquired /r/, whichever came first. Acoustic and descriptive data from 14 participants were analyzed; the remaining five children continued to be followed. The study analyzed differences in spectral energy in the baseline acoustic signals of participants who eventually acquired /r/ and of those who did not. Results indicated significant between-group differences in the baseline signals for vocalic and postvocalic /r/, suggesting that the acquisition of certain allophones may be predictable. The articulatory changes participants made during the course of acquisition were also analyzed spectrally. A retrospective analysis described the pattern in which /r/ allophones were acquired, proposing that vocalic /r/ and the postvocalic variant of consonantal /r/ may be acquired before prevocalic /r/, and that /r/ followed by low vowels may be acquired before /r/ followed by high vowels, although individual variation exists.
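As an illustration of the kind of band-limited spectral-energy measure such an analysis might use, the Python sketch below computes energy within a few frequency bands; the bands, sampling rate, and synthetic test signal are illustrative assumptions, not the study's actual analysis parameters.

```python
# Minimal sketch of a band-limited spectral-energy measure for an /r/
# production. The bands and the synthetic placeholder signal are assumptions.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_energy(signal, fs, bands):
    """Return the spectral energy of `signal` within each (low, high) Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    energies = {}
    for low, high in bands:
        mask = (freqs >= low) & (freqs < high)
        energies[(low, high)] = float(trapezoid(psd[mask], freqs[mask]))
    return energies

fs = 16000                                     # Hz, assumed sampling rate
t = np.arange(fs) / fs                         # one second of signal
# Placeholder "production": two spectral peaks plus noise, standing in for a recording.
signal = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1600 * t)
signal += 0.05 * np.random.randn(fs)

print(band_energy(signal, fs, bands=[(0, 1000), (1000, 2000), (2000, 3000)]))
```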
The distinctions between the neural resources supporting speech and music comprehension have long been studied in contexts such as aphasia and amusia, and with neuroimaging in control subjects. While many models describe the networks uniquely recruited by speech and music stimuli, many questions remain, especially about how left-hemispheric strokes disrupt typical speech-processing networks and how musical training might affect the brain networks recruited for speech after a stroke. The present study explores these questions. We collected task-based functional MRI data from 12 subjects who had previously experienced a left-hemispheric stroke. Subjects listened to blocks of spoken sentences and novel piano melodies during scanning so that we could examine differences in brain activation in response to speech and music. We hypothesized that, as a result of functional changes following the left-hemispheric stroke, and particularly the loss of function in the left temporal lobe, speech stimuli would activate right frontal regions and music stimuli would activate right superior temporal regions more than speech (neither finding has been reported in previous studies of control subjects). We also hypothesized that music stimuli would produce stronger activation in the right temporal cortex in participants with musical training than in those without. Our results indicate that speech stimuli, compared to rest, activated the anterior superior temporal gyrus bilaterally and the right inferior frontal lobe. Music stimuli, compared to rest, did not activate the brain bilaterally, but activated only the right middle temporal gyrus. When the group analysis included musical experience as a covariate, we found that musical training did not affect activation to music stimuli specifically, but several right-hemisphere regions showed greater activation to speech stimuli as a function of more years of musical training. These results agree with our hypotheses regarding functional changes in the brain but conflict with our hypothesis about musical expertise. Overall, the study provides starting points for further exploration of how musical neural resources may be recruited for speech processing after damage to typical language networks.
The brain continuously monitors speech output to detect errors between its sensory predictions and the actual sensory consequences of production (Daliri et al., 2020). When the brain encounters an error, it generates a corrective motor response, usually in the direction opposite the error, to reduce its effect. Previous studies have shown that the type of auditory error may influence a participant’s corrective response. In this study, we examined whether participants respond differently to categorical and non-categorical errors. We applied two types of perturbation in real time by shifting the first formant (F1) and second formant (F2) at three different magnitudes. In the categorical perturbation condition, the vowel /ɛ/ was shifted toward the vowel /æ/; in the non-categorical perturbation condition, the vowel /ɛ/ was shifted to a sound outside the vowel quadrilateral (increasing both F1 and F2). Our results showed that participants responded to the categorical perturbation but not to the non-categorical perturbation. Additionally, in the categorical condition, the magnitude of the response increased as the magnitude of the perturbation increased. Overall, our results suggest that the brain may respond differently to categorical and non-categorical errors and that it is highly attuned to errors in speech.
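As a rough illustration of how the two perturbation types can be expressed in formant space, the Python sketch below shifts a produced (F1, F2) pair toward /æ/ (categorical) or outside the vowel space by raising both formants (non-categorical); the reference formant values, the example production, and the magnitudes are illustrative assumptions, and the study applied the shifts in real time to participants' own speech.

```python
# Small sketch of the two perturbation directions in (F1, F2) space.
# Reference formants and magnitudes are assumed for illustration only.
import numpy as np

EH = np.array([580.0, 1800.0])   # assumed /ɛ/ reference (F1, F2) in Hz
AE = np.array([700.0, 1650.0])   # assumed /æ/ reference (F1, F2) in Hz

def categorical_shift(produced, magnitude):
    """Shift the produced /ɛ/ toward /æ/ by a fraction of the /ɛ/-/æ/ distance."""
    return produced + magnitude * (AE - EH)

def non_categorical_shift(produced, magnitude):
    """Shift the produced /ɛ/ outside the vowel space by raising both F1 and F2."""
    direction = np.array([1.0, 1.0]) / np.sqrt(2)     # unit vector, both formants up
    return produced + magnitude * np.linalg.norm(AE - EH) * direction

produced = np.array([600.0, 1750.0])                  # one hypothetical production
for m in (0.33, 0.66, 1.0):                           # three illustrative magnitudes
    print(m, categorical_shift(produced, m), non_categorical_shift(produced, m))
```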