The Effects of Story Champs and Puente de Cuentos on the Maze Usage and Story Retell Abilities of Bilingual Preschoolers

Description
The purpose of this pilot study was to determine whether certain language interventions could help bilingual children reduce maze use and improve their story retell abilities. We used the language intervention Story Champs, and its Spanish version, Puente de Cuentos, to help bilingual children improve their story retell abilities. We conducted the intervention over the course of three days in both Spanish and English; the children worked through three stories in each language each day. They also received a narrative measure before and after the intervention to measure gains in story retell ability and to measure maze use. Results indicated no statistically significant differences in the children's story retell abilities or maze use before and after the intervention. Nevertheless, the improvements some of the children made encourage us to pursue further study.
Date Created
2016-05

Ultrasound Imaging of Swallowing Subsequent to Feeding and Myofunctional Intervention

Description
The purpose of this study was to examine swallowing patterns using ultrasound technology subsequent to the implementation of two therapeutic interventions. Baseline swallow patterns were compared to swallows after implementation of therapeutic interventions common to both feeding therapy (FT) and orofacial myofunctional therapy (OMT). The interventions consisted of stimulation of the tongue with a Z-Vibe and tongue pops. Changes in swallowing patterns are described, and similarities of interventions across the two professions are discussed. Ultrasound research on swallowing is sparse despite its potential clinical application in both professions. In using ultrasound, this study outlines a protocol for utilization of a hand-held probe and reinforces a particular protocol described in the literature. Real-time ultrasound recordings of swallows were made for 19 adult female subjects. Participants with orofacial myofunctional disorder were compared to a group with typical swallowing, and differences in swallowing patterns are described. Three stages of the oral phase of the swallow were assigned based on ultrasonic observation of tongue shape. Analysis involved the total duration of the swallow, the duration of each of the three stages relative to the total duration, and the number of swallows required for the bolus to be cleared from the oral cavity. No significant effects of either intervention were found. Swallowing patterns showed a general trend toward shorter total duration subsequent to each intervention. An unexpected finding was a significant change in the relationship between the bolus preparation stage and the bolus transportation stage when comparing the group classified as having a single swallow with the group classified as having multiple swallows.
Date Created
2016-05

The neurobiology of sentence comprehension: an fMRI study of late American Sign Language acquisition

Description
Language acquisition is a phenomenon we all experience, and though it is well studied, many questions remain regarding the neural bases of language. Whether for a hearing speaker or a Deaf signer, spoken and signed language acquisition (with eventual proficiency) develop similarly and share common neural networks. While signed and spoken language engage completely different sensory modalities (visual-manual versus the more common auditory-oromotor), both share grammatical structures and contain syntactic intricacies innate to all languages. Thus, studies of multi-modal bilingualism (e.g., a native English speaker learning American Sign Language) can lead to a better understanding of the neurobiology of second language acquisition, and of language more broadly. For example, can the well-developed visual-spatial processing networks in English speakers support grammatical processing in sign language, which relies heavily on location and movement? The present study furthers our understanding of the neural correlates of second language acquisition by studying late L2 normal-hearing learners of American Sign Language (ASL). Twenty English-speaking ASU students enrolled in advanced ASL coursework participated in our functional magnetic resonance imaging (fMRI) study. The aim was to identify the brain networks engaged in syntactic processing of ASL sentences in late L2 ASL learners. While many studies have addressed the neurobiology of acquiring a second spoken language, no previous study to our knowledge has examined the brain networks supporting syntactic processing in bimodal bilinguals. We examined the brain networks engaged while perceiving ASL sentences compared to ASL word lists, as well as written English sentences and word lists.
We hypothesized that our findings in late bimodal bilinguals would largely coincide with the unimodal bilingual literature, with a few notable differences, including additional attention networks engaged by ASL processing. Our results suggest a high degree of overlap in sentence processing networks for ASL and English, along with important differences in the recruitment of speech comprehension, visual-spatial, and domain-general brain networks. Our findings suggest that well-known sentence comprehension and syntactic processing regions for spoken languages are flexible and modality-independent.
Date Created
2016-05

Self-Reported Cognitive Symptoms in Military Veteran College Students

Description
An increasing number of military veterans are enrolling in college, primarily due to the Post-9/11 GI Bill, which provides educational benefits to veterans who have served on active duty since September 11, 2001. With rigorous training, active combat situations, and exposure to unexpected situations, the veteran population is at higher risk for traumatic brain injury (TBI), post-traumatic stress disorder (PTSD), and depression. All of these conditions are associated with cognitive consequences, including attention deficits, working memory problems, and episodic memory impairments. Some conditions, particularly mild TBI, are not diagnosed or treated until long after the injury, when the person realizes they have cognitive difficulties. Even mild cognitive problems can hinder learning in an academic setting, yet there is little data on the frequency and severity of cognitive deficits in veteran college students. The current study examines self-reported cognitive symptoms in veteran students compared to civilian students and how those symptoms relate to service-related conditions. A better understanding of the pattern of self-reported symptoms will help researchers and clinicians identify the veterans who are at higher risk for cognitive and academic difficulties.
Date Created
2016-05

Coarticulation: Testing the Universality of Glide Epenthesis, Stop Epenthesis, and Intervocalic Voicing of Stops

Description
The objective of this study was to examine the universality of three coarticulatory processes: glide epenthesis, stop epenthesis, and intervocalic voicing of stops. Five contrastive languages were selected to test these processes: English, Spanish, Mandarin, Arabic, and Navajo. All languages varied in phonemic inventory, stress patterns, phonological processes, and syllabic constructs. Sixteen participants with relatively limited English exposure were selected based on questionnaire responses regarding their language history. The participants went through a series of trainings and tasks designed to elicit these coarticulatory processes in several phonemic contexts. Part 1 of the study attempted to elicit the processes solely through imitation, while Part 2 attempted to do so through a spontaneous elicitation task. Although the results did not support a universal use of these processes, the data suggested that glide epenthesis played a frequent role within English, Spanish, and Arabic. This was expected, since glides are often used in the presence of diphthongs in these languages. Additionally, intervocalic voicing of stops was observed in English and Spanish, suggesting a language-specific tendency; however, it was noted only when the voiceless stop occurred in the coda of the syllable, not in the onset. Lastly, stop epenthesis was not observed in any of the languages tested.
Date Created
2016-12

Testing the Limits of a Reading Comprehension Intervention

Description
This study investigates whether children who are Dual Language Learners (DLLs) and who have poor reading comprehension benefit from participating in the EMBRACE intervention. The reading comprehension program is based on the theory of embodied cognition, which holds that language comprehension is grounded in the body: our understanding of language rests on mental representations that we create through experience and that are integrated with the corresponding sensorimotor information. Therefore, by engaging the motor and language systems through reading stories on an iPad that prompt the children to manipulate images on-screen, we might improve children's reading strategies and comprehension scores. Fifty-six children read three stories and answered related questions over a period of two weeks. Results showed that the intervention increased reading comprehension scores in the physical manipulation condition but not in the imaginary manipulation condition. Although lower motor skill scores correlated with lower comprehension skills, the children's motor deficits did not moderate their performance on the intervention.
Date Created
2016-12

Interactions between Pitch and Timbre Perception in Normal-hearing Listeners and Cochlear Implant Users

Description
Pitch and timbre perception are two important dimensions of auditory perception. These aspects of sound aid our understanding of the environment and contribute to normal everyday functioning. It is therefore important to determine the nature of the perceptual interaction between these two dimensions of sound. This study tested the interactions between pitch perception, associated with the fundamental frequency (F0), and sharpness perception, associated with the spectral slope of harmonic complex tones, in normal-hearing (NH) listeners and cochlear implant (CI) users. Pitch and sharpness ranking were measured without changes in the non-target dimension (Experiment 1), with different amounts of unrelated change in the non-target dimension (Experiment 2), and with congruent or incongruent changes of similar perceptual salience in the non-target dimension (Experiment 3). The results showed that CI users had significantly worse pitch and sharpness ranking thresholds than NH listeners. Pitch and sharpness perception interacted symmetrically in NH listeners. For CI users, however, spectral slope changes significantly affected pitch ranking, while F0 changes had no significant effect on sharpness ranking. CI users' pitch ranking sensitivity was significantly better with congruent than with incongruent spectral slope changes. These results have important implications for CI processing strategies that aim to better transmit pitch and timbre cues to CI users.
Date Created
2016-12

Accurate Articulation of /r/: Relationships between Signal Processing Analysis of Speech and Ultrasound Images of the Tongue

Description
Research on /r/ production has previously used formant analysis as the primary acoustic analysis, with particular focus on the low third formant in the speech signal. Prior imaging of speech used X-ray, MRI, and electromagnetic midsagittal articulometer systems. More recently, the signal processing technique of Mel-log spectral plots has been used to study /r/ production in children and female adults. Ultrasound imaging of the tongue has also been used to image the tongue during speech production in both clinical and research settings. The current study describes /r/ production in three allophonic contexts: vocalic, prevocalic, and postvocalic positions. Ultrasound analysis, formant analysis, Mel-log spectral plots, and /r/ duration were measured for /r/ production in 29 adult speakers (10 male, 19 female), and possible relationships among these variables were explored. Results showed that the amount of superior constriction was significantly lower for the postvocalic /r/ allophone than for the other /r/ allophones. Formant two was significantly lower, and the distance between formants two and three significantly higher, for the prevocalic /r/ allophone. Vocalic /r/ had the longest average duration, while prevocalic /r/ had the shortest. Signal processing results revealed candidate Mel-bin values for accurate /r/ production for each allophone of /r/. The results indicate that allophones of /r/ can be distinguished based on the different analyses; however, the relationships among these analyses remain unclear. Future research is needed to gather more data on /r/ acoustics and articulation and to identify possible relationships among the analyses of /r/ production.
Date Created
2017-05

Student-To-Student Anatomy Volume 1: Heart, Lungs, ENT

Description
Student to Student: A Guide to Anatomy is an anatomy guide written by students, for students. Its focus is on teaching the anatomy of the heart, lungs, nose, ears, and throat in a manner that isn't overpowering or stress-inducing. Daniel and I have taken numerous anatomy courses and fully understand what it takes to succeed in these classes. We found that the anatomy books recommended for these courses are often completely overwhelming, offering far more information than what is needed. This renders them nearly useless for a college student who just wants to learn the essentials. Why would a student even pick one up if they can't find what they need to learn? With that in mind, our goal was to create a comprehensive, easy-to-understand, and easy-to-follow guide to the heart, lungs, and ENT (ear, nose, throat). We know what information is vital for test day, and we wanted to highlight these key concepts and ideas in our guide. Spending just 60 to 90 minutes studying our guide should help any student with their studying needs, whether they have medical school aspirations or simply want to pass the class. We aren't experts, but we know what strategies and methods can help even the most confused students learn. Our guide can also be used as an introductory resource to our respective majors (Daniel: Biology; Charles: Speech and Hearing) for students who are undecided on what they want to do. In the future, Daniel and I would like to see more students creating similar guides and adding to the "Student to Student" title with their own works. After all, who better to teach students than the students who know what it takes?
Date Created
2017-05

Examining the Equivalence of Traditional vs. Automated Speech Perception Testing in Adult Listeners with Normal Hearing

Description
The purpose of the present study was to determine whether an automated speech perception task yields results equivalent to a word recognition test used in audiometric evaluations. We tested 51 normal-hearing adults using a traditional word recognition task (NU-6) and an automated Non-Word Detection task. Stimuli for each task were presented in quiet as well as at six signal-to-noise ratios (SNRs) increasing in 3 dB increments (+0 dB, +3 dB, +6 dB, +9 dB, +12 dB, +15 dB). A two one-sided test (TOST) procedure was used to determine equivalency of the two tests. In this approach, performance on both tasks was arcsine transformed and converted to z-scores in order to calculate the difference in scores across listening conditions; these values were then compared to a predetermined criterion to establish whether equivalency exists. It was expected that the TOST procedure would reveal equivalency between the traditional word recognition task and the automated Non-Word Detection task. The results confirmed that the two tasks differed by no more than two test items in any of the listening conditions. Overall, the results indicate that the automated Non-Word Detection task could be used in addition to, or in place of, traditional word recognition tests. The features of an automated test such as the Non-Word Detection task also offer benefits including rapid administration, accurate scoring, and supplemental performance data (e.g., error analyses) beyond those obtained with traditional speech perception measures.
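The comparison pipeline described in this abstract (arcsine-transform the proportion-correct scores, convert to z-scores, take the per-condition difference, and compare it to a predetermined criterion) can be sketched in Python. This is a minimal illustration of that sequence of steps, not the study's actual analysis: the scores and the criterion value below are hypothetical placeholders.

```python
import math

def arcsine(p):
    """Variance-stabilizing arcsine transform of a proportion-correct score (0..1)."""
    return 2 * math.asin(math.sqrt(p))

def z_scores(values):
    """Standardize a list of transformed scores to z-scores (sample SD)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return [(v - mean) / sd for v in values]

def equivalent_by_condition(task_a, task_b, criterion):
    """For each listening condition, flag whether the difference between the
    standardized, arcsine-transformed scores stays within the criterion."""
    za = z_scores([arcsine(p) for p in task_a])
    zb = z_scores([arcsine(p) for p in task_b])
    return [abs(a - b) <= criterion for a, b in zip(za, zb)]

# Hypothetical proportion-correct scores for seven listening conditions
# (quiet plus six SNRs), one list per task.
nu6_scores = [0.98, 0.96, 0.94, 0.90, 0.82, 0.70, 0.55]
nwd_scores = [0.97, 0.95, 0.93, 0.91, 0.80, 0.72, 0.53]

print(equivalent_by_condition(nu6_scores, nwd_scores, criterion=0.5))
```

In a full TOST analysis the difference score for each condition would be tested against both equivalence bounds with two one-sided tests; the sketch above shows only the transform-and-compare core of that logic.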
Date Created
2017-05