Spatial Release From Masking With a Moving Target

Description

In the visual domain, a stationary object that is difficult to detect usually becomes far more salient if it moves while the objects around it do not. This “pop out” effect is important for parsing the visual world into figure/ground relationships that allow creatures to detect food, threats, etc. We tested for an auditory correlate of this visual effect by asking listeners to identify a single word, spoken by a female talker, presented together with two or four masking words spoken by male talkers. Percentage-correct scores were compared between conditions in which target and maskers were presented from the same position and conditions in which the target was presented from one position while the maskers were presented from different positions. In some trials, the target word was moved across the loudspeaker array using amplitude panning; in other trials the target was played from a single, static position. Results showed a spatial release from masking in all conditions where the target and maskers were not co-located, but there was no statistically significant difference in identification performance between moving and stationary targets. These results suggest that, at least for short stimulus durations (0.75 s for the stimuli in this experiment), there is unlikely to be a “pop out” effect for moving target stimuli in the auditory domain as there is in the visual domain.
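For readers unfamiliar with the technique, amplitude panning creates a phantom source between loudspeakers by splitting a signal's gain across them; sweeping the pan position moves the image. A minimal sketch of a constant-power pan law between two adjacent speakers (the specific pan law and the two-speaker reduction are illustrative assumptions, not details given in the abstract):

```python
import math

def constant_power_pan(x: float) -> tuple[float, float]:
    """Constant-power gains for pan position x in [0, 1]
    between two adjacent loudspeakers (0 = first, 1 = second)."""
    theta = x * math.pi / 2
    return math.cos(theta), math.sin(theta)

# Sweeping x from 0 to 1 over the 0.75 s word moves the phantom
# image from one speaker to the next while the total power
# (g1**2 + g2**2) stays constant at 1.
g1, g2 = constant_power_pan(0.5)   # midpoint: both gains ~0.707
```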

Date Created
2017-12-20

The Role of Visual Attention In Auditory Localization

Description

Hearing and vision are two senses that most individuals use on a daily basis. The simultaneous presentation of competing visual and auditory stimuli often affects our sensory perception. Vision is commonly believed to dominate audition in spatial localization tasks. Recent work suggests that visual information can influence auditory localization when the sound emanates from a physical location or from a phantom location generated through stereophony (so-called "summing localization"). The present study investigates the role of cross-modal fusion in an auditory localization task. The experiments had two aims: (1) to reveal the extent of fusion between auditory and visual stimuli and (2) to investigate how fusion is correlated with the amount of visual bias a subject experiences. We found that fusion often occurred when the light flash and "summing localization" stimuli were presented from the same hemifield. However, little correlation was observed between the magnitude of visual bias and the extent of perceived fusion between light and sound stimuli. In some cases, subjects reported distinct locations for light and sound and still experienced visual capture.
Date Created
2016-05

Interactions between Pitch and Timbre Perception in Normal-hearing Listeners and Cochlear Implant Users

Description

Pitch and timbre perception are two important dimensions of auditory perception. These aspects of sound aid the understanding of our environment and contribute to normal everyday functioning. It is therefore important to determine the nature of perceptual interaction between these two dimensions of sound. This study tested the interactions between pitch perception associated with the fundamental frequency (F0) and sharpness perception associated with the spectral slope of harmonic complex tones in normal-hearing (NH) listeners and cochlear implant (CI) users. Pitch and sharpness ranking was measured without changes in the non-target dimension (Experiment 1), with different amounts of unrelated changes in the non-target dimension (Experiment 2), and with congruent/incongruent changes of similar perceptual salience in the non-target dimension (Experiment 3). The results showed that CI users had significantly worse pitch and sharpness ranking thresholds than NH listeners. Pitch and sharpness perception had symmetric interactions in NH listeners. However, for CI users, spectral slope changes significantly affected pitch ranking, while F0 changes had no significant effect on sharpness ranking. CI users' pitch ranking sensitivity was significantly better with congruent than with incongruent spectral slope changes. These results have important implications for CI processing strategies to better transmit pitch and timbre cues to CI users.
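The two stimulus dimensions can be made concrete with a small sketch: a harmonic complex tone whose pitch is set by F0 and whose sharpness is set by the spectral slope applied to the harmonic amplitudes. The parameter values and the dB-per-octave slope convention below are illustrative assumptions, not the study's actual stimulus settings:

```python
import math

def harmonic_complex(f0, slope_db_oct, n_harmonics, fs, dur):
    """Harmonic complex tone: harmonics of f0 with amplitudes rolled
    off by a fixed spectral slope in dB/octave. Raising f0 raises the
    pitch; a shallower slope gives a sharper (brighter) timbre."""
    n = int(fs * dur)
    tone = [0.0] * n
    for h in range(1, n_harmonics + 1):
        # Harmonic h lies log2(h) octaves above f0.
        amp = 10 ** (slope_db_oct * math.log2(h) / 20)
        for i in range(n):
            tone[i] += amp * math.sin(2 * math.pi * h * f0 * i / fs)
    return tone

tone = harmonic_complex(f0=200, slope_db_oct=-6, n_harmonics=10,
                        fs=16000, dur=0.05)
```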
Date Created
2016-12

Two-Sentence Recognition with a Pulse Train Vocoder

Description

When listeners hear sentences presented simultaneously, they are better able to discriminate between speakers when there is a difference in fundamental frequency (F0). This paper explores the use of a pulse train vocoder to simulate cochlear implant listening. A pulse train vocoder, rather than a noise or tonal vocoder, was used so that the fundamental frequency (F0) of speech would be well represented. The results of this experiment showed that listeners are able to use the F0 information to aid in speaker segregation. As expected, recognition performance was poorest when there was no difference in F0 between speakers, and listeners performed better as the difference in F0 increased. The types of errors that the listeners made were also analyzed. The results show that when an error was made in identifying the correct word from the target sentence, the response was usually (~60%) a word uttered in the competing sentence.
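As a sketch of the idea, one vocoder channel can be built by replacing a band's fine structure with a pulse-train carrier at the talker's F0, modulated by the band's envelope. The implementation below is a simplified illustration (single channel, constant F0), not the processing actually used in the paper:

```python
def pulse_train_channel(envelope, f0, fs):
    """One vocoder channel: a pulse-train carrier at rate f0,
    amplitude-modulated by the band's temporal envelope. Unlike a
    noise or tone carrier, the pulse rate conveys the talker's F0."""
    period = round(fs / f0)                  # samples per F0 period
    return [e if i % period == 0 else 0.0
            for i, e in enumerate(envelope)]

# 10 ms of a flat envelope at fs = 16 kHz with a 200 Hz pulse rate:
chan = pulse_train_channel([1.0] * 160, f0=200, fs=16000)
# → one pulse every 80 samples (16000 / 200)
```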
Date Created
2013-05

Effects of Loudness Change on Tempo Perception and Action in Percussion

Description

Tempo control is a crucial part of musicianship that can pose an obstacle for novice musicians. The current study examines why novice percussionists increase their playing tempo when they increase their loudness (in music, loudness is referred to as dynamics). This study tested five hypotheses: 1) As actual tempo changes, listeners perceive that the tempo is changing; 2) There is a perceptual bias to perceive increases in acoustic intensity as also increasing in tempo; 3) All individuals, regardless of percussion experience, display the bias described in hypothesis 2; 4) Unskilled or non-percussionists increase or decrease produced tempo as they respectively increase or decrease loudness; and 5) Skilled percussionists produce less change in tempo due to changes in loudness than non-percussionists. In Experiment 1, percussionists and non-percussionists listened to metronome samples that gradually changed in intensity and/or tempo. Participants identified the direction and size of their perceived tempo change using a computer mouse. In Experiment 2, both groups of participants produced various tempo and dynamic changes on a drum pad. Our findings suggest that both percussionists and non-percussionists, to some extent, display a perceptual bias to perceive tempo changes as a function of intensity changes. We also found that non-percussionists altered their tempo as a function of changing dynamic levels, whereas percussionists did not. Overall, our findings suggest that listeners tend to experience some integrality between the perceptual dimensions of tempo and loudness. This dimensional integration also persists when playing percussion instruments, though experience with percussion instruments reduces the effect.
Date Created
2014-05

Sound Source Localization by Hearing Preservation Patients With and Without Symmetrical Low-Frequency Acoustic Hearing

Description

The aim of this article was to study sound source localization by cochlear implant (CI) listeners with low-frequency (LF) acoustic hearing in both the operated ear and in the contralateral ear. Eight CI listeners had symmetrical LF acoustic hearing and 4 had asymmetrical LF acoustic hearing. The effects of two variables were assessed: (i) the symmetry of the LF thresholds in the two ears and (ii) the presence/absence of bilateral acoustic amplification. Stimuli consisted of low-pass, high-pass, and wideband noise bursts presented in the frontal horizontal plane. Localization accuracy was 23° of error for the symmetrical listeners and 76° of error for the asymmetrical listeners. The presence of a unilateral CI used in conjunction with bilateral LF acoustic hearing does not impair sound source localization accuracy, but amplification for acoustic hearing can be detrimental to sound source localization accuracy.
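The accuracy figures are averages of angular error across trials. As an illustration of how such a score might be computed (the article's exact error metric, e.g. mean absolute vs. RMS error, is not given in this abstract, so mean absolute error is an assumption, and the trial values are hypothetical):

```python
def mean_localization_error(responses_deg, targets_deg):
    """Mean absolute angular error, in degrees, across trials."""
    errors = [abs(r - t) for r, t in zip(responses_deg, targets_deg)]
    return sum(errors) / len(errors)

# Three hypothetical trials: reported vs. true loudspeaker azimuths.
err = mean_localization_error([-10, 35, 90], [0, 30, 60])  # → 15.0
```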

Date Created
2014-11-30

Dynamic spatial hearing by human and robot listeners

Description

This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, human listeners could perceive motion in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sound sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair at the center of the loudspeaker array. The human listeners localized the sound sources better with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
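The amplitude-modulation manipulation in the second experiment can be sketched as follows: each static source gets a sinusoidal gain modulator, and each successive source's modulator is delayed in phase, so the intensity peak sweeps around the array. The raised-cosine modulator and the 90° phase step below are assumptions for illustration; the abstract does not specify the modulator shape:

```python
import math

def modulation_gains(t, rate_hz, n_sources=4, phase_step=math.pi / 2):
    """Gains for n static sources at time t: source i's modulator lags
    source i-1 by phase_step, so the intensity peak moves from source
    to source; at low rates listeners can hear this as motion."""
    return [0.5 * (1 + math.cos(2 * math.pi * rate_hz * t - i * phase_step))
            for i in range(n_sources)]

g0 = modulation_gains(0.0, rate_hz=1.0)   # peak gain at source 0
g1 = modulation_gains(0.25, rate_hz=1.0)  # quarter cycle later: peak at source 1
```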
Date Created
2015

Using swept tones to evoke stimulus frequency otoacoustic emissions with in-situ calibration

Description

Otoacoustic emissions (OAEs) are soft sounds generated by the inner ear and can be recorded within the ear canal. Since OAEs can reflect the functional status of the inner ear, OAE measurements have been widely used for hearing loss screening in the clinic. However, there are limitations in current clinical OAE measurements, such as the restricted frequency range, low efficiency and inaccurate calibration. In this dissertation project, a new method of OAE measurement which used a swept tone to evoke the stimulus frequency OAEs (SFOAEs) was developed to overcome the limitations of current methods. In addition, an in-situ calibration was applied to equalize the spectral level of the swept-tone stimulus at the tympanic membrane (TM). With this method, SFOAEs could be recorded with high resolution over a wide frequency range within one or two minutes. Two experiments were conducted to verify the accuracy of the in-situ calibration and to test the performance of the swept-tone SFOAEs. In experiment I, the calibration of the TM sound pressure was verified in both acoustic cavities and real ears by using a second probe microphone. In addition, the benefits of the in-situ calibration were investigated by measuring OAEs under different calibration conditions. Results showed that the TM pressure could be predicted correctly, and the in-situ calibration provided the most reliable results in OAE measurements. In experiment II, a three-interval paradigm with a tracking-filter technique was used to record the swept-tone SFOAEs in 20 normal-hearing subjects. The test-retest reliability of the swept-tone SFOAEs was examined using a repeated-measure design under various stimulus levels and durations. The accuracy of the swept-tone method was evaluated by comparisons with a standard method using discrete pure tones. Results showed that SFOAEs could be reliably and accurately measured with the swept-tone method. 
Compared with the pure-tone approach, the swept-tone method was significantly more efficient. Swept-tone SFOAEs with in-situ calibration may be an alternative to current clinical OAE measurements, allowing more detailed evaluation of inner-ear function and more accurate diagnosis.
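A swept tone with an exponentially rising frequency spends equal time in each octave, which is part of what makes a wide-band SFOAE measurement fast. A minimal generator is sketched below; the frequency range and duration are illustrative assumptions, since the dissertation's actual sweep parameters are not stated in this abstract:

```python
import math

def exponential_sweep(f_start, f_end, dur, fs):
    """Swept tone whose instantaneous frequency rises exponentially
    from f_start to f_end over dur seconds (equal time per octave)."""
    k = math.log(f_end / f_start)
    n = int(fs * dur)
    return [math.sin(2 * math.pi * f_start * (dur / k) *
                     (math.exp(k * i / n) - 1))
            for i in range(n)]

sweep = exponential_sweep(f_start=500, f_end=8000, dur=1.0, fs=16000)
```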
Date Created
2012