Interactions between Pitch and Timbre Perception in Normal-hearing Listeners and Cochlear Implant Users

Description
Pitch and timbre perception are two important dimensions of auditory perception. These aspects of sound aid our understanding of the environment and contribute to normal everyday functioning. It is therefore important to determine the nature of the perceptual interaction between these two dimensions of sound. This study tested the interactions between pitch perception, associated with the fundamental frequency (F0), and sharpness perception, associated with the spectral slope of harmonic complex tones, in normal-hearing (NH) listeners and cochlear implant (CI) users. Pitch and sharpness ranking was measured without changes in the non-target dimension (Experiment 1), with different amounts of unrelated changes in the non-target dimension (Experiment 2), and with congruent/incongruent changes of similar perceptual salience in the non-target dimension (Experiment 3). The results showed that CI users had significantly worse pitch and sharpness ranking thresholds than NH listeners. Pitch and sharpness perception had symmetric interactions in NH listeners. However, for CI users, spectral slope changes significantly affected pitch ranking, while F0 changes had no significant effect on sharpness ranking. CI users' pitch ranking sensitivity was significantly better with congruent than with incongruent spectral slope changes. These results have important implications for CI processing strategies to better transmit pitch and timbre cues to CI users.
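
As a rough illustration of the two stimulus dimensions manipulated here, the sketch below synthesizes a harmonic complex tone whose F0 carries the pitch cue and whose spectral slope (attenuation of higher harmonics in dB per octave) carries the sharpness cue. The function name, sampling rate, and all parameter values are illustrative assumptions, not the study's actual stimuli.

```python
import numpy as np

def harmonic_complex(f0, slope_db_per_octave, dur=0.5, fs=44100, n_harmonics=20):
    """Synthesize a harmonic complex tone whose harmonic amplitudes fall off
    at a fixed spectral slope (dB per octave) above F0.

    All parameter values here are illustrative, not the study's stimuli.
    """
    t = np.arange(int(dur * fs)) / fs
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        # Attenuate the k-th harmonic according to its distance (in octaves) from F0.
        octaves_above_f0 = np.log2(k)
        gain_db = slope_db_per_octave * octaves_above_f0
        tone += 10 ** (gain_db / 20) * np.sin(2 * np.pi * k * f0 * t)
    return tone / np.max(np.abs(tone))  # normalize peak amplitude

# A steeper (more negative) slope sounds duller; a shallower slope sounds sharper.
standard = harmonic_complex(f0=200, slope_db_per_octave=-6)
sharper = harmonic_complex(f0=200, slope_db_per_octave=-3)
```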
Date Created
2016-12

Two-Sentence Recognition with a Pulse Train Vocoder

Description
When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that the listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60% of the time) a word that was uttered in the competing sentence.
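
The abstract does not specify the vocoder implementation, but a minimal sketch of the general idea behind a pulse-train vocoder is shown below: band-pass the speech into a few channels, extract each channel's envelope, and use the envelopes to modulate a pulse-train carrier whose rate follows F0. For simplicity the sketch assumes a fixed F0; the channel count, filter settings, and function name are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def pulse_train_vocoder(speech, fs, f0=120.0, n_channels=8, lo=100.0, hi=7000.0):
    """Very simplified pulse-train vocoder: band-pass the input into channels,
    extract each channel's envelope, and modulate a pulse-train carrier whose
    rate equals a fixed F0 (the study's vocoder would track the talker's F0).
    Channel edges (lo, hi) must lie below the Nyquist frequency fs / 2.
    """
    # Log-spaced channel edges between lo and hi
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    # Carrier: unit impulses spaced at the F0 period
    carrier = np.zeros_like(speech)
    carrier[::max(1, int(round(fs / f0)))] = 1.0
    out = np.zeros_like(speech)
    for ch in range(n_channels):
        sos = butter(4, [edges[ch], edges[ch + 1]], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, speech)
        env = np.abs(hilbert(band))          # channel envelope
        out += sosfilt(sos, carrier * env)   # re-filter the modulated pulses into the band
    return out / (np.max(np.abs(out)) + 1e-12)
```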
Date Created
2013-05

Vowel Normalization in Dysarthria

Description
In this study, the Bark transform and Lobanov method were used to normalize vowel formants in speech produced by persons with dysarthria. The computer classification accuracy for these normalized data was then compared to the accuracy of human perceptual classification of the actual vowels. These results were then analyzed to determine whether the two normalization techniques correlated with the human data.
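
For reference, the two normalization procedures named above can be summarized in a few lines. The sketch below uses Traunmüller's commonly cited Hz-to-Bark formula and Lobanov's per-speaker z-scoring of formants; the example data and variable names are hypothetical, and the study's exact computation may differ.

```python
import numpy as np

def bark(f_hz):
    """Traunmüller's Hz-to-Bark conversion, one common form of the Bark transform."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def lobanov(formants):
    """Lobanov normalization: z-score each formant (column) within one speaker's data.

    `formants` is an (n_tokens x n_formants) array for a single speaker;
    the array shape and variable names are illustrative.
    """
    formants = np.asarray(formants, dtype=float)
    return (formants - formants.mean(axis=0)) / formants.std(axis=0)

# Example: F1/F2 (Hz) for a few vowel tokens from one hypothetical speaker
tokens = np.array([[650.0, 1100.0],
                   [400.0, 2200.0],
                   [300.0, 2500.0]])
print(bark(tokens))     # Bark-transformed formants
print(lobanov(tokens))  # speaker-normalized (unitless) formants
```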
Date Created
2013-05

Does Self-Construal Influence How People Use Social Networking Sites?

Description
Social Networking Sites (SNSs), such as Facebook and Twitter, have continued to gain popularity worldwide. Previous research has shown differences in online behaviors at the cultural level, namely between predominantly independent societies, such as the United States, and predominantly interdependent societies, such as China and Japan. In the current study, I sought to test whether self-construal was correlated with different ways of using SNSs and whether there might be socioeconomic status (SES) differences within the US analogous to previously observed cross-cultural differences in SNS use. Higher levels of interdependence were linked with using SNSs to keep in touch with family and friends and to provide social support to others. Interdependence was also correlated with Facebook addiction scale scores, with using SNSs in inappropriate situations, and with overall SNS use. Implications for assessing the risk of Internet addiction, as well as for understanding cultural variations in its prevalence, are discussed.
Date Created
2015-05

Tracking sonic flows during fast head movements of marmoset monkeys

Description
Head turning is a common sound localization strategy in primates. A novel system that can track head movement and the acoustic signals received at the entrance to the ear canal was tested to obtain binaural sound localization information during fast head movements of marmoset monkeys. Analysis of the binaural information focused on the inter-aural level difference (ILD) and inter-aural time difference (ITD) at various head positions over time. The results showed that during fast head turns, the ITDs exhibited clear and significant changes in trajectory in response to low-frequency stimuli. However, significant phase ambiguity occurred at frequencies greater than 2 kHz. ITD and ILD information was also analyzed using animal vocalizations as the stimulus. The results indicated that ILDs may provide more information for understanding the dynamics of head movement in response to animal vocalizations in the environment. The primary significance of this work is the successful implementation of a system capable of simultaneously recording head movement and acoustic signals at the ear canals. The collected data provide insight into the usefulness of ITDs and ILDs as binaural cues during head movement.
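
As a sketch of how the two binaural cues mentioned above can be extracted from ear-canal recordings, the code below computes a frame-by-frame ITD (cross-correlation lag) and ILD (RMS level difference in dB). This is a generic illustration under assumed frame lengths and sign conventions, not the study's analysis pipeline.

```python
import numpy as np

def itd_ild(left, right, fs, frame=0.01):
    """Frame-by-frame ITD (cross-correlation lag, in seconds) and ILD (RMS
    level difference, in dB) from binaural ear-canal recordings.

    The frame length and sign conventions are illustrative assumptions.
    """
    n = int(frame * fs)
    itds, ilds = [], []
    for start in range(0, min(len(left), len(right)) - n, n):
        l = left[start:start + n]
        r = right[start:start + n]
        # ITD: lag that maximizes the cross-correlation between the two ears
        xcorr = np.correlate(l, r, mode="full")
        lag = np.argmax(xcorr) - (n - 1)
        itds.append(lag / fs)
        # ILD: level difference in dB between the two ears
        rms_l = np.sqrt(np.mean(l ** 2)) + 1e-12
        rms_r = np.sqrt(np.mean(r ** 2)) + 1e-12
        ilds.append(20 * np.log10(rms_l / rms_r))
    return np.array(itds), np.array(ilds)
```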
Date Created
2016-05

Variability of early literacy skills in children with hearing impairment

Description
Children with hearing impairment are at risk for poor attainment in reading decoding and reading comprehension, which suggests they may have difficulty with early literacy skills prior to learning to read. The first purpose of this study was to determine if young children with hearing impairment differ from their peers with normal hearing on early literacy skills and also on three known predictors of early literacy skills – non-verbal cognition, executive functioning, and home literacy environment. A second purpose was to determine if strengths and weaknesses in early literacy skills of individual children with hearing impairment are associated with degree of hearing loss, non-verbal cognitive ability, or executive functioning.

I assessed seven children with normal hearing and ten children with hearing impairment on measures of expressive vocabulary, expressive morphosyntax, listening comprehension, phonological awareness, alphabet knowledge, non-verbal cognition, and executive functioning. Two children had unilateral hearing loss, two had mild hearing loss and used hearing aids, two had moderate hearing loss and used hearing aids, one had mild hearing loss and did not use hearing aids, and three used bilateral cochlear implants. Parents completed a questionnaire about their home literacy environment.

Findings showed large between-group effect sizes for phonological awareness, morphosyntax, and executive functioning, and medium between-group effect sizes for expressive vocabulary, listening comprehension, and non-verbal cognition. Visual analyses provided no clear pattern to suggest that non-verbal cognition or degree of hearing loss was associated with individual patterns of performance for children with hearing impairment; however, three children who seemed at risk for reading difficulties had executive functioning scores that were at the floor.

Most prekindergarten and kindergarten children with hearing impairment in this study appeared to be at risk for future reading decoding and reading comprehension difficulties. Further, based on individual patterns of performance, risk was not restricted to one type of early literacy skill and a strength in one skill did not necessarily indicate a child would have strengths in all early literacy skills. Therefore, it is essential to evaluate all early literacy skills to pinpoint skill deficits and to prioritize intervention goals.
Date Created
2017

Sound Source Localization by Hearing Preservation Patients With and Without Symmetrical Low-Frequency Acoustic Hearing

Description
The aim of this article was to study sound source localization by cochlear implant (CI) listeners with low-frequency (LF) acoustic hearing in both the operated ear and the contralateral ear. Eight CI listeners had symmetrical LF acoustic hearing and four had asymmetrical LF acoustic hearing. The effects of two variables were assessed: (i) the symmetry of the LF thresholds in the two ears and (ii) the presence or absence of bilateral acoustic amplification. Stimuli consisted of low-pass, high-pass, and wideband noise bursts presented in the frontal horizontal plane. Localization accuracy was 23° of error for the symmetrical listeners and 76° of error for the asymmetrical listeners. The presence of a unilateral CI used in conjunction with bilateral LF acoustic hearing does not impair sound source localization accuracy, but amplification for acoustic hearing can be detrimental to it.

Date Created
2014-11-30

Audiovisual sentence recognition in bimodal and bilateral cochlear implant users

Description
The present study describes audiovisual sentence recognition in normal-hearing listeners, bimodal cochlear implant (CI) listeners, and bilateral CI listeners. This study explores a new set of sentences (the AzAV sentences) that were created to have equal auditory intelligibility and equal gain from visual information.

The aims of Experiment I were to (i) compare the lip-reading difficulty of the AzAV sentences to that of other sentence materials, (ii) compare the speech-reading ability of CI listeners to that of normal-hearing listeners, and (iii) assess the gain in speech understanding when listeners have both auditory and visual information from easy-to-lip-read and difficult-to-lip-read sentences. In addition, the sentence lists were subjected to a multi-level text analysis to determine the factors that make sentences easy or difficult to speech read.

The results of Experiment I showed (i) that the AzAV sentences were relatively difficult to lip read, (ii) that CI listeners and normal-hearing listeners did not differ in lip-reading ability, and (iii) that sentences with low lip-reading intelligibility (10-15% correct) provide about a 30 percentage point improvement in speech understanding when added to the acoustic stimulus, while sentences with high lip-reading intelligibility (30-60% correct) provide about a 50 percentage point improvement in the same comparison. The multi-level text analyses showed that the familiarity of the phrases in the sentences was the primary factor affecting lip-reading difficulty.

The aim of Experiment II was to investigate the value, when visual information is present, of bimodal hearing and bilateral cochlear implants. The results of Experiment II showed that when visual information is present, low-frequency acoustic hearing can be of value to speech understanding for patients fit with a single CI. However, when visual information was available, no gain was seen from the provision of a second CI, i.e., bilateral CIs. As was the case in Experiment I, visual information provided about a 30 percentage point improvement in speech understanding.
Date Created
2015

The impact of visual input on the ability of bilateral and bimodal cochlear implant users to accurately perceive words and phonemes in experimental phrases

Description
A multitude of individuals across the globe suffer from hearing loss, and that number continues to grow. Cochlear implants, while having limitations, provide electrical input for users, enabling them to "hear" and interact more fully with their social environment. There has been a clinical shift toward bilateral placement of implants in both ears and toward bimodal placement of a hearing aid in the contralateral ear when residual hearing is present. However, there is potentially more to subsequent speech perception for bilateral and bimodal cochlear implant users than the electric and acoustic input received via these modalities. For normal listeners, vision plays a role, and Rosenblum (2005) points out that it is a key feature of an integrated perceptual process. Logically, cochlear implant users should also benefit from integrated visual input. The question is how exactly vision provides benefit to bilateral and bimodal users. Eight bilateral and five bimodal participants received randomized experimental phrases, previously generated by Liss et al. (1998), in auditory and audiovisual conditions. The participants recorded their perception of the input. Data were subsequently analyzed for percent words correct, consonant errors, and lexical boundary error types. Overall, vision was found to improve speech perception for bilateral and bimodal cochlear implant participants. Each group experienced a significant increase in percent words correct when visual input was added. With vision, bilateral participants reduced consonant place errors and demonstrated increased use of syllabic stress cues in lexical segmentation. Therefore, the results suggest that vision might provide perceptual benefits for bilateral cochlear implant users by granting access to place information and by augmenting cues for syllabic stress in the absence of acoustic input. On the other hand, vision did not provide the bimodal participants with significantly increased access to place and stress cues, so the exact mechanism by which bimodal implant users improved speech perception with the addition of vision is unknown. These results point to the complexities of audiovisual integration during speech perception and the need for continued research regarding the benefit vision provides to bilateral and bimodal cochlear implant users.
Date Created
2015

Dynamic spatial hearing by human and robot listeners

Description
This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
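
As a rough illustration of the recursive approach described for the fourth experiment, the sketch below runs a one-dimensional extended Kalman filter that fuses ITD measurements with known head-rotation angles to estimate a static source azimuth. The ITD model, ear spacing, and noise variances are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def ekf_azimuth(itd_meas, head_az, ear_dist=0.18, c=343.0,
                q=1e-4, r=(20e-6) ** 2, x0=0.0, p0=1.0):
    """Recursive EKF estimate of a static source azimuth (radians, world frame)
    from ITD measurements taken while the head rotates.

    Assumptions (illustrative): the source is static, ITD follows a simple
    sine model itd = (d / c) * sin(source_az - head_az), and q, r are made up.
    """
    x, p = x0, p0
    estimates = []
    for z, h_az in zip(itd_meas, head_az):
        # Predict: static source, so only process noise inflates the variance.
        p = p + q
        # Update: linearize the ITD model around the current estimate.
        rel = x - h_az
        z_pred = (ear_dist / c) * np.sin(rel)
        H = (ear_dist / c) * np.cos(rel)   # Jacobian d(itd)/d(source_az)
        s = H * p * H + r                  # innovation variance
        k = p * H / s                      # Kalman gain
        x = x + k * (z - z_pred)
        p = (1 - k * H) * p
        estimates.append(x)
    return np.array(estimates)
```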
Date Created
2015