Using Goodness of Pronunciation Features for Spoken Nasality Detection

Description
Speech nasality disorders are characterized by abnormal resonance in the nasal cavity. Hypernasal speech is of particular interest: it is characterized by an inability to prevent improper nasalization of vowels and by poor articulation of plosive and fricative consonants, and it can lead to negative communicative and social consequences. It can be associated with a range of conditions, including cleft lip or palate, velopharyngeal dysfunction (a physical or neurological defective closure of the soft palate that regulates resonance between the oral and nasal cavities), dysarthria, or hearing impairment, and can also be an early indicator of developing neurological disorders such as ALS. Hypernasality is typically scored perceptually by a Speech-Language Pathologist (SLP). Misdiagnosis can lead to inadequate treatment plans and poor treatment outcomes for a patient, and for some applications, particularly screening for early neurological disorders, the use of an SLP is not practical. Hence, this work demonstrates a data-driven approach to objective assessment of hypernasality through the use of Goodness of Pronunciation features. These features capture the overall precision of articulation of a speaker on a phoneme-by-phoneme basis, allowing the demonstrated models to achieve a Pearson correlation coefficient of 0.88 on low-nasality speakers, the population of most interest for this sort of technique. These results are comparable to milestone methods in this domain.
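A common formulation of Goodness of Pronunciation scores each aligned phoneme by the log ratio of the canonical phoneme's posterior to the best-scoring phoneme's posterior, averaged over the phoneme's frames. The sketch below assumes frame-level posteriors from some acoustic model and a forced alignment; the function name and interface are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def gop_scores(posteriors, alignment):
    """Compute a Goodness of Pronunciation (GOP) score per aligned phoneme.

    posteriors : (T, P) array of frame-level phoneme posterior probabilities
                 (e.g. from an acoustic model's softmax output).
    alignment  : list of (phoneme_index, start_frame, end_frame) triples
                 from a forced alignment against the canonical transcript.

    GOP(p) = (1/NF) * sum_t [ log p_t(p) - log max_q p_t(q) ]:
    values near 0 indicate precise articulation of the expected phoneme,
    while large negative values indicate imprecise articulation.
    """
    scores = []
    for phone, start, end in alignment:
        frames = posteriors[start:end]                       # (NF, P)
        log_canonical = np.log(frames[:, phone] + 1e-10)     # expected phone
        log_best = np.log(frames.max(axis=1) + 1e-10)        # best competitor
        scores.append(float(np.mean(log_canonical - log_best)))
    return scores
```

A regression model over per-phoneme features like these can then be fit against perceptual nasality ratings.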
Date Created
2018-05

A Novel Battery Management & Charging Solution for Autonomous UAV Systems

Description
Currently, one of the biggest limiting factors for long-term deployment of autonomous systems is the power constraints of a platform. In particular, for aerial robots such as unmanned aerial vehicles (UAVs), the energy resource is the main driver of mission planning and operation definitions, as everything revolves around flight time. The focus of this work is to develop a new method of energy storage and charging for autonomous UAV systems, for use during long-term deployments in a constrained environment. We developed a charging solution that allows pre-equipped UAV systems to land on designated charging pads and rapidly replenish their battery reserves through a contact charging point. The system is designed to work with all types of rechargeable batteries, with a focus on Lithium Polymer (LiPo) packs, and incorporates a battery management system for increased reliability. The project also explores optimization methods for fleets of UAV systems to increase charging efficiency and extend battery lifespans. Each component of this project was first designed and tested in computer simulation. Following positive feedback and results, prototypes for each part of the system were developed and rigorously tested. Results show that the contact charging method is able to charge LiPo batteries at a 1-C rate, the industry-standard rate, while maintaining the same safety and efficiency standards as modern direct-connection chargers. Control software for these base stations was also created, to be integrated with a fleet management system; it optimizes UAV charge levels and distribution to extend LiPo battery lifetimes while still meeting expected mission demand. Each component of this project (hardware and software) was designed for manufacturing and implementation using industry-standard tools, making it ideal for large-scale implementations.
This system has been successfully tested with a fleet of UAV systems at Arizona State University, and is currently being integrated into an Arizona smart city environment for deployment.
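A 1-C charge rate means a current numerically equal to the pack's capacity in amp-hours (a 5.0 Ah pack charges at 5.0 A). The sketch below shows a minimal constant-current/constant-voltage decision for a LiPo pack; the 4.20 V per-cell limit is the common LiPo maximum, while the 0.5 C taper cap is an illustrative placeholder, not a value from this work.

```python
def cc_cv_setpoint(capacity_ah, cell_voltages, max_cell_v=4.20, c_rate=1.0):
    """Return (mode, current_limit_a) for a simple CC-CV LiPo charge step.

    The charger holds constant current at the requested C rate until any
    cell reaches max_cell_v, then switches to constant-voltage mode and
    caps the current while it tapers toward cutoff.
    """
    one_c = capacity_ah * c_rate
    if max(cell_voltages) < max_cell_v:
        return ("CC", one_c)          # constant-current phase at 1 C
    return ("CV", one_c * 0.5)        # taper phase (illustrative 0.5 C cap)
```

A battery management system would additionally balance cells and terminate the charge once the taper current falls below a small fraction of 1 C.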
Date Created
2018

Fresh15

Description
Fresh15 is an iOS application geared toward helping college students eat healthier, based on a user's preferences for price range, food restrictions, and favorite ingredients. The application also considers that students may need to order their ingredients online, since they may not have access to transportation.
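Preference-based matching of this kind boils down to filtering recipes by budget and restrictions, then ranking by overlap with favorite ingredients. The sketch below assumes a hypothetical recipe schema (the app's actual data model is not described here) and is written in Python for illustration rather than Swift.

```python
def match_recipes(recipes, budget, restrictions, favorites):
    """Filter and rank recipes by a student's preferences.

    recipes      : list of dicts with 'name', 'price', 'tags', 'ingredients'
                   (hypothetical schema)
    budget       : maximum price the user will pay
    restrictions : set of dietary tags to exclude (e.g. {'gluten'})
    favorites    : set of preferred ingredients used for ranking
    """
    affordable = [r for r in recipes
                  if r["price"] <= budget
                  and not (restrictions & set(r["tags"]))]
    # Rank by how many favorite ingredients each recipe uses.
    return sorted(affordable,
                  key=lambda r: -len(favorites & set(r["ingredients"])))
```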
Date Created
2018-05

Development of a Wearable Haptic Feedback System for Use in Lower-Limb Prosthetics: Proof of Concept and Verification

Description
Skin and muscle receptors in the leg and foot provide able-bodied humans with force and position information that is crucial for balance and movement control. In lower-limb amputees, however, this vital information is either missing or incomplete. Amputees typically compensate for the loss of sensory information by relying on haptic feedback from the stump-socket interface. Unfortunately, this is not an adequate substitute. Areas of the stump that directly interface with the socket are also prone to painful irritation, which further degrades haptic feedback. The lack of somatosensory feedback from prosthetic legs causes several problems for lower-limb amputees. Previous studies have established that the lack of adequate sensory feedback from prosthetic limbs contributes to poor balance and abnormal gait kinematics. These improper gait kinematics can, in turn, lead to the development of musculoskeletal diseases. Finally, the absence of sensory information has been shown to lead to steeper learning curves and increased rehabilitation times, which hampers amputees' recovery from the trauma. In this study, a novel haptic feedback system for lower-limb amputees was developed, and studies were performed to verify that the information it presented was sufficiently accurate and precise in comparison to a Bertec 4060-NC force plate. The prototype device consisted of a sensorized insole, a belt-mounted microcontroller, and a linear array of four vibrotactile motors worn on the thigh. The prototype worked by calculating the center of pressure in the anteroposterior plane and applying a time-discrete vibrotactile stimulus based on the location of the center of pressure.
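The center of pressure along the heel-to-toe axis is the force-weighted mean of the sensor positions, CoP = Σ(F_i·x_i)/Σ(F_i), which can then be quantized to one of the four motors. The sketch below is a minimal illustration of that mapping; the sensor layout and normalization are assumptions, not the prototype's firmware.

```python
def anteroposterior_cop(forces, positions):
    """Center of pressure along the anteroposterior (heel-to-toe) axis.

    forces    : per-sensor vertical force readings (N) from the insole
    positions : each sensor's normalized position on the axis (0=heel, 1=toe)

    Returns the force-weighted mean position, or None when the foot is
    unloaded (no meaningful CoP exists).
    """
    total = sum(forces)
    if total <= 0:
        return None
    return sum(f * x for f, x in zip(forces, positions)) / total

def motor_for_cop(cop, num_motors=4):
    """Map the CoP location to one motor in a linear vibrotactile array."""
    return min(int(cop * num_motors), num_motors - 1)
```

Each gait cycle, the microcontroller would recompute the CoP and pulse the corresponding thigh-mounted motor.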
Date Created
2018-05

Noninvasive and Accurate Fine Motor Rehabilitation Through a Rhythm Based Game Using a Leap Motion Controller: Usability Evaluation of Leap Motion Game

Description
This paper presents a system to deliver automated, noninvasive, and effective fine motor rehabilitation through a rhythm-based game using a Leap Motion Controller. The system is a rhythm game in which hand gestures are used as input and must match the rhythm and gestures shown on screen, thus allowing a physical therapist to represent an exercise session involving the user's hand and finger joints as a series of patterns. Fine motor rehabilitation plays an important role in recovery from the effects of stroke, Parkinson's disease, multiple sclerosis, and other conditions. Individuals with these conditions possess a wide range of impairment in terms of fine motor movement. The serious game developed takes this into account and is designed to work with individuals at different levels of impairment. In a pilot study, in partnership with South West Advanced Neurological Rehabilitation (SWAN Rehab) in Phoenix, Arizona, we compared the performance of individuals with fine motor impairment to individuals without this impairment to determine whether a human-centered approach and adapting to a user's range of motion can allow an individual with fine motor impairment to perform at a similar level as a non-impaired user.
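Rhythm games typically judge each note by whether the player's input matches the prompted gesture and falls inside a timing window around the target beat. The sketch below shows that core judgment; the window sizes and gesture labels are placeholders, not the game's tuned values.

```python
def score_hit(target_time, target_gesture, input_time, input_gesture,
              perfect_window=0.05, good_window=0.15):
    """Judge one rhythm-game note (times in seconds; windows are
    illustrative and would be widened for more impaired users).

    Returns 'perfect', 'good', or 'miss' depending on whether the gesture
    matched and how far the input landed from the target beat.
    """
    if input_gesture != target_gesture:
        return "miss"                        # wrong gesture entirely
    timing_error = abs(input_time - target_time)
    if timing_error <= perfect_window:
        return "perfect"
    if timing_error <= good_window:
        return "good"
    return "miss"
```

Adapting to a user's range of motion would additionally rescale how strictly each Leap Motion gesture must be performed to count as a match.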
Date Created
2018-05

Fused Filament Fabrication of Prosthetic Components for Trans-Humeral Upper Limb Prosthetics

Description
Presented below is the design and fabrication of prosthetic components consisting of attachment, tactile sensing, and actuator systems, produced with the Fused Filament Fabrication (FFF) technique. The attachment system is a thermoplastic osseointegrated upper-limb prosthesis for an average adult trans-humeral amputation, with mechanical properties greater than those of upper-limb skeletal bone. The prosthetic design features a one-step surgical process, large cavities for bone tissue ingrowth, a material with an elastic modulus less than that of skeletal bone, and fabrication on a single system.

The FFF osseointegration screw is an improvement upon current two-part osseointegrated prosthetics, which are composed of a fixture and an abutment. The current prosthetic design requires two invasive surgeries for implantation and is made of titanium, which has an elastic modulus greater than that of bone. An elastic modulus greater than bone causes stress shielding and over time can cause loosening of the prosthetic.

The tactile sensor is a thermoplastic piezo-resistive sensor for daily activities, intended for a prosthetic's feedback system. The tactile sensor is manufactured from a low-elastic-modulus composite composed of a compressible thermoplastic elastomer and conductive carbon. The carbon is in graphite form and added in high filler ratios. The printed sensors were compared to sensors fabricated in a gravity mold to highlight the difference between FFF sensors and molded sensors. The 3D-printed tactile sensor has a thickness and feel similar to human skin, has a simple fabrication technique, can detect forces needed for daily activities, and can be manufactured into user-specific geometries.
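A piezo-resistive sensor is typically read out through a voltage divider, with force recovered from the measured resistance via a calibration curve. The sketch below assumes hypothetical divider and calibration values; a real sensor would be calibrated against a reference load cell.

```python
def divider_resistance(v_out, v_in=5.0, r_fixed=10_000.0):
    """Sensor resistance from a voltage-divider readout (placeholder values).

    The piezo-resistive sensor sits in series with a fixed resistor:
    v_out = v_in * r_fixed / (r_sensor + r_fixed), solved for r_sensor.
    """
    return r_fixed * (v_in - v_out) / v_out

def force_from_resistance(r_sensor, a=2.0e5, b=1.2):
    """Map resistance to force with a fitted power law F = (a / R)^b.

    Piezo-resistive composites drop in resistance as they are compressed,
    so lower resistance implies higher force. a and b are illustrative
    calibration constants, not values measured in this work.
    """
    return (a / r_sensor) ** b
```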

Lastly, a biomimicking skeletal muscle actuator for prosthetics was developed. The actuator is manufactured with Fused Filament Fabrication using a shape memory polymer composite that exhibits non-linear contractile and passive forces, contractile forces and strains comparable to mammalian skeletal muscle, a reaction time under one second, a low operating temperature, and low mass, volume, and material costs. The actuator improves upon current prosthetic actuators, which provide rigid, linear force with high weight, cost, and noise.
Date Created
2017

Convolutional Neural Networks for Facial Expression Recognition

Description
This paper presents work that was done to create a system capable of facial expression recognition (FER) using deep convolutional neural networks (CNNs) and to test multiple configurations and methods. CNNs are able to extract powerful information about an image using multiple layers of generic feature detectors. The extracted information can be used to understand the image better through recognizing different features present within the image. Deep CNNs, however, require training sets that can be larger than a million pictures in order to fine-tune their feature detectors. For facial expressions, no datasets of this size are available. Due to this limited availability of data required to train a new CNN, the idea of using naïve domain adaptation is explored. Instead of creating and using a new CNN trained specifically to extract features related to FER, a previously trained CNN, originally trained for another computer vision task, is used. Work for this research involved creating a system that can run a CNN, extract feature vectors from the CNN, and classify these extracted features. Once this system was built, different aspects of the system were tested and tuned. These aspects include the pre-trained CNN that was used, the layer from which features were extracted, the normalization applied to input images, and the training data for the classifier. Once properly tuned, the created system returned results more accurate than previous attempts at facial expression recognition. Based on these positive results, naïve domain adaptation is shown to successfully leverage the advantages of deep CNNs for facial expression recognition.
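The pipeline described (freeze a pre-trained CNN, read out an intermediate layer as a feature vector, train a light classifier on top) can be sketched as follows. The `cnn_forward` interface is a hypothetical stand-in for a real network's forward pass, and a nearest-centroid classifier stands in for whatever classifier the system actually used.

```python
import numpy as np

def extract_features(images, cnn_forward, layer="fc6"):
    """Collect fixed feature vectors from a frozen pre-trained CNN.

    cnn_forward(image, layer) is assumed to run the network up to the
    chosen layer and return that layer's activations as a 1-D vector;
    the network itself is never retrained (naive domain adaptation).
    """
    return np.stack([cnn_forward(img, layer) for img in images])

class NearestCentroid:
    """Minimal classifier over extracted features, used here purely to
    illustrate the second stage of the pipeline."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = np.stack(
            [X[np.array(y) == c].mean(axis=0) for c in self.labels])
        return self
    def predict(self, X):
        # Squared distance from every sample to every class centroid.
        d = ((X[:, None, :] - self.centroids[None]) ** 2).sum(axis=2)
        return [self.labels[i] for i in d.argmin(axis=1)]
```

Tuning then amounts to swapping the pre-trained network, the extraction layer, the input normalization, or the downstream classifier.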
Date Created
2016-05

EMG-Interfaced Device for the Detection and Alleviation of Freezing of Gait in Individuals with Parkinson's Disease

Description
Parkinson's disease is a neurodegenerative disorder of the central nervous system that affects a host of daily activities and involves a variety of symptoms; these include tremors, slurred speech, and rigid muscles. It is the second most common movement disorder globally. In Stage 3 of Parkinson's, afflicted individuals begin to develop an abnormal gait pattern known as freezing of gait (FoG), which is characterized by decreased step length, shuffling, and eventually a complete inability to move, often resulting in a fall. Surface electromyography (sEMG) is a diagnostic tool that measures electrical activity in the muscles to assess overall muscle function. Most conventional EMG systems, however, are bulky, tethered to a single location, expensive, and primarily used in a lab or clinical setting. This project explores an affordable, open-source, and portable platform called Open Brain-Computer Interface (OpenBCI). The purpose of the proposed device is to detect gait patterns by leveraging sEMG signals from the OpenBCI and to help a patient overcome an episode using haptic feedback mechanisms. Previously designed devices with similar intended purposes utilize accelerometry as a method of detection as well as audio and visual feedback mechanisms in their design.
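One simple way to flag a freezing episode from sEMG is to track a moving RMS envelope of the signal and trigger when activity stays below a threshold for a sustained stretch. The sketch below is an illustrative detector only; the threshold, window, and duration are placeholders that a real device would tune per patient and muscle.

```python
import numpy as np

def emg_rms_envelope(signal, window=50):
    """Moving RMS envelope of a raw sEMG channel (window in samples)."""
    squared = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

def detect_freeze(envelope, threshold=0.05, min_samples=100):
    """Flag a freezing-of-gait episode when muscle activity stays below
    the threshold for at least min_samples consecutive samples."""
    run = 0
    for below in envelope < threshold:
        run = run + 1 if below else 0
        if run >= min_samples:
            return True
    return False
```

On detection, the device would drive its haptic feedback mechanism to cue the patient through the episode.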
Date Created
2016-05

The Dyadic Interaction Assistant for Individuals with Visual Impairments

Description
This paper presents an overview of The Dyadic Interaction Assistant for Individuals with Visual Impairments, with a focus on the software component. The system is designed to communicate facial information (facial Action Units, facial expressions, and facial features) to an individual with visual impairments in a dyadic interaction between two people sitting across from each other. Composed of (1) a webcam, (2) software, and (3) a haptic device, the system can also be described as a series of input, processing, and output stages, respectively. The processing stage of the system builds on the open-source FaceTracker software and the application Computer Expression Recognition Toolbox (CERT). While these two sources provide the facial data, a program developed through the IDE Qt Creator and several AppleScripts are used to adapt the information to a Graphical User Interface (GUI) and output the data to a comma-separated values (CSV) file. It is the first software to convey all three types of facial information at once in real time. Future work includes testing and evaluating the quality of the software with human subjects (both sighted and blind/low vision), integrating the haptic device to complete the system, and evaluating the entire system with human subjects (sighted and blind/low vision).
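Logging per-frame facial data to CSV is the simplest part of such a pipeline to illustrate. The sketch below assumes a hypothetical column layout (timestamp, expression label, a few Action Unit intensities); the system's actual CSV schema is not specified here.

```python
import csv
import io

# Illustrative subset of columns; CERT reports many more Action Units.
FACIAL_COLUMNS = ["time_s", "expression", "AU1", "AU2", "AU4"]

def write_facial_frames(frames, out):
    """Write per-frame facial data to a CSV stream.

    frames : iterable of (timestamp_s, expression_label, au_intensities)
    out    : any writable text stream (file, StringIO, ...)
    """
    writer = csv.writer(out)
    writer.writerow(FACIAL_COLUMNS)
    for t, expression, aus in frames:
        writer.writerow([f"{t:.3f}", expression] +
                        [f"{v:.4f}" for v in aus])

buf = io.StringIO()
write_facial_frames([(0.033, "smile", [0.1, 0.0, 0.7])], buf)
```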
Date Created
2013-05

Exploring the Design of Vibrotactile Cues for Visio-Haptic Sensory Substitution

Description
This paper presents the design and evaluation of a haptic interface for augmenting human-human interpersonal interactions by delivering facial expressions of an interaction partner to an individual who is blind using a visual-to-tactile mapping of facial action units and emotions. Pancake shaftless vibration motors are mounted on the back of a chair to provide vibrotactile stimulation in the context of a dyadic (one-on-one) interaction across a table. This work explores the design of spatiotemporal vibration patterns that can be used to convey the basic building blocks of facial movements according to the Facial Action Unit Coding System. A behavioral study was conducted to explore the factors that influence the naturalness of conveying affect using vibrotactile cues.
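A spatiotemporal vibration pattern for a motor array can be represented as a sequence of timed pulses per motor, one pattern per facial Action Unit. The sketch below uses a hypothetical mapping for two Action Units purely to illustrate the representation; the study's actual patterns were designed and evaluated behaviorally.

```python
def au_to_pattern(action_unit):
    """Map a facial Action Unit to a spatiotemporal vibration pattern on a
    linear segment of the back-mounted motor array.

    A pattern is a list of (motor_index, start_ms, duration_ms) pulses.
    The mappings below are illustrative: AU12 (lip corner puller, as in a
    smile) sweeps upward, AU4 (brow lowerer) sweeps downward.
    """
    patterns = {
        12: [(0, 0, 100), (1, 80, 100), (2, 160, 100)],   # upward sweep
        4:  [(2, 0, 100), (1, 80, 100), (0, 160, 100)],   # downward sweep
    }
    return patterns.get(action_unit, [])
```

Overlapping pulse onsets like these produce the apparent-motion effect that makes a sweep feel continuous rather than like discrete taps.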
Date Created
2014-05