Effects of Virtual Reality on Memory Rehabilitation

Description
In this project, I investigated the impact of virtual reality on memory retention. To do so, I applied the memorization technique known as the memory palace within a virtual reality environment. Due to Covid-19, I was the sole subject of the experiment. To collect meaningful data, I tested myself in randomly generated environments, each containing a unique set of objects, both outside of a virtual reality environment and within one. First, I conducted a set of 10 tests by going through a virtual environment on my laptop and recalling as many objects as I could from that environment, recording both the accuracy of my recollection and how long each trial took. Next, I conducted a set of 10 tests in the same virtual environment, this time with an immersive virtual reality (VR) headset and a completely new set of objects. At the start of the project, I hypothesized that immersive virtual reality would yield a higher memory retention rate than traversing the environment on a conventional screen. In the end, the results, albeit from a small number of trials, leaned toward supporting the hypothesis.
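As a rough illustration of the comparison described above, here is a minimal Python sketch (with placeholder numbers, not the project's actual data) of summarizing recall accuracy for the two 10-trial conditions:

from statistics import mean

# Placeholder recall scores (fraction of objects correctly recalled);
# these values are illustrative, not the study's recorded data.
laptop_recall = [0.50, 0.55, 0.60, 0.58, 0.62, 0.57, 0.65, 0.60, 0.63, 0.61]
vr_recall = [0.60, 0.66, 0.70, 0.68, 0.72, 0.65, 0.74, 0.69, 0.71, 0.73]

def summarize(label, scores):
    # Report mean recall accuracy for one condition.
    print(f"{label}: mean recall = {mean(scores):.2f} over {len(scores)} trials")

summarize("Laptop (non-immersive)", laptop_recall)
summarize("VR headset (immersive)", vr_recall)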
Date Created
2020-05
Agent

Facial Expression Recognition Using Machine Learning

Description
In recent years, the development of new Machine Learning models has allowed new technological advancements to be introduced for practical use across the world. Multiple studies and experiments have been conducted to create new variations of Machine Learning models with different algorithms to determine whether potential systems would prove successful. Even today, many research initiatives continue to develop new models in the hope of discovering solutions to problems such as autonomous driving or determining the emotional value of a single sentence. One popular current research topic in Machine Learning is the development of Facial Expression Recognition systems. These models focus on classifying images of human faces expressing different emotions through facial expressions. To build effective models for Facial Expression Recognition, researchers have turned to Deep Learning models, a more advanced class of Machine Learning models known as Neural Networks. More specifically, Convolutional Neural Networks have proven to be the most effective models for achieving highly accurate results at classifying images of various facial expressions. Convolutional Neural Networks are Deep Learning models capable of processing visual data, such as images and videos, and can be used to identify various facial expressions. For this project, I focused on learning the important concepts of Machine Learning, Deep Learning, and Convolutional Neural Networks in order to implement a Convolutional Neural Network previously developed in a recommended research paper.
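As a hedged sketch of the kind of network involved, the following is a small Convolutional Neural Network in Keras for emotion classification; the 48x48 grayscale input and 7 emotion classes follow the common FER2013 setup and are assumptions here, not the specific architecture from the referenced paper:

import tensorflow as tf
from tensorflow.keras import layers, models

# Stacked convolution + pooling layers extract facial features;
# the dense softmax head classifies the expression.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(7, activation="softmax"),  # one output per emotion class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()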
Date Created
2020-05
Agent

Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

Description
Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavioral analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training.

Evaluation of the parents’ fidelity to implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process, and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments. The naturalistic context of the video probes along with the dependence on ubiquitous recording devices creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos.

The relationship between the parent and the clinician is important. The clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can aid in addressing the uncertainty in the classification models by providing additional labeled samples. This will allow the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
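As a loose sketch of the human-in-the-loop idea described above (not the dissertation's actual system; the scikit-learn-style classifier and the ask_clinician callback are hypothetical), an uncertainty-driven labeling round might look like this:

import numpy as np

def human_in_the_loop_round(model, X_labeled, y_labeled, X_pool, ask_clinician, k=5):
    # Fit on the samples the clinician has labeled so far.
    model.fit(X_labeled, y_labeled)
    # Use predictive entropy as an uncertainty score for the unlabeled pool.
    probs = model.predict_proba(X_pool)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # Route the k most uncertain samples to the clinician for labels.
    query = np.argsort(entropy)[-k:]
    new_labels = ask_clinician(X_pool[query])
    # Fold the new labels back in and shrink the pool.
    X_labeled = np.vstack([X_labeled, X_pool[query]])
    y_labeled = np.concatenate([y_labeled, new_labels])
    X_pool = np.delete(X_pool, query, axis=0)
    return model, X_labeled, y_labeled, X_pool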
Date Created
2019
Agent

Developing a Node Graph Tool: Pattern Recognition Through Sound

Description
Although many data visualization diagrams can be made accessible for individuals who are blind or visually impaired, they often do not present the information in a way that intuitively allows readers to easily discern patterns in the data. In particular, accessible node graphs tend to use speech to describe the transitions between nodes. While the speech is easy to understand, readers can be overwhelmed by too much speech and may not be able to discern any structural patterns which occur in the graphs. Considering these limitations, this research seeks to find ways to better present transitions in node graphs.

This study aims to gain knowledge on how sequence patterns in node graphs can be perceived through speech and nonspeech audio. Users listened to short audio clips describing a sequence of transitions occurring in a node graph. User study results were evaluated based on accuracy and user feedback. Five common techniques were identified through the study, and the results will be used to help design a node graph tool that improves the accessibility of node graph creation and exploration for individuals who are blind or visually impaired.
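As a rough sketch of the nonspeech approach the study examines (the pitch mapping and transition encoding here are assumptions, not the study's actual design), each transition could be rendered as a short tone whose pitch encodes the destination node:

import math
import struct
import wave

RATE = 44100

def tone(freq_hz, dur_s=0.25, amp=0.4):
    # Render one sine tone as 16-bit PCM samples.
    n = int(RATE * dur_s)
    return [int(amp * 32767 * math.sin(2 * math.pi * freq_hz * i / RATE))
            for i in range(n)]

def node_pitch(node):
    # Hypothetical mapping: node index -> semitones above A4 (440 Hz).
    return 440.0 * 2 ** (node / 12)

transitions = [0, 2, 4, 2, 0]  # a short walk through a node graph
samples = []
for node in transitions:
    samples += tone(node_pitch(node))

with wave.open("transitions.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))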
Date Created
2019-12
Agent

HapBack - Providing Spatial Awareness at a Distance Using Haptic Stimulation

Description
This paper presents a study on communicating an object’s relative 3-dimensional position to individuals who are blind or visually impaired. The HapBack, a continuation of the HaptWrap V1.0 (Duarte et al., 2018), focused on the perception of objects and their distances in 3-dimensional space using haptic communication. The HapBack is a device consisting of two elastic bands secured horizontally around the user’s torso and two backpack straps secured along the user’s back. The backpack straps are embedded with 10 vibrotactile motors evenly positioned along the spine. The device is designed to provide a wearable interface for blind and visually impaired individuals in order to understand how the positions of objects in 3-dimensional space are perceived through haptic communication. We analyzed the accuracy of the HapBack device along three dimensions: (1) two different modes of vibration, absolute and relative; (2) the location of the vibrotactile motors in absolute mode; and (3) the location of the vibrotactile motors in relative mode. The results supported the conclusion that the HapBack produced vibrotactile patterns that were intuitively mapped to the distances represented in the study. By analyzing the intuitiveness of the vibrotactile patterns and the accuracy of the users’ responses, we gained a better understanding of how distance can be perceived through haptic communication by individuals who are blind.
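A minimal sketch of how a distance reading might be mapped onto the 10 spine-mounted motors (the sensing range and both encodings below are assumptions for illustration, not the HapBack's actual firmware):

NUM_MOTORS = 10
MAX_DISTANCE_M = 5.0  # assumed sensing range

def absolute_motor(distance_m):
    # Absolute mode: each motor corresponds to a fixed distance band,
    # from motor 0 (nearest) to motor 9 (farthest).
    band = min(distance_m, MAX_DISTANCE_M) / MAX_DISTANCE_M
    return min(int(band * NUM_MOTORS), NUM_MOTORS - 1)

def relative_motor(prev_distance_m, distance_m):
    # Relative mode: the vibration location encodes whether the object
    # moved closer or farther since the last reading.
    mid = NUM_MOTORS // 2
    if distance_m < prev_distance_m:
        return mid - 1  # object approaching
    if distance_m > prev_distance_m:
        return mid + 1  # object receding
    return mid          # unchanged

for d in (0.5, 2.0, 4.5):
    print(f"{d} m -> motor {absolute_motor(d)}")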
Date Created
2019-12
Agent

Deep domain fusion for adaptive image classification

Description
Endowing machines with the ability to understand digital images is a critical task for a host of high-impact applications, including pathology detection in radiographic imaging, autonomous vehicles, and assistive technology for the visually impaired. Computer vision systems rely on large corpora of annotated data in order to train task-specific visual recognition models. Despite significant advances made over the past decade, the fact remains that collecting and annotating the data needed to successfully train a model is a prohibitively expensive endeavor. Moreover, these models are prone to rapid performance degradation when applied to data sampled from a different domain. Recent work on deep adaptation networks seeks to overcome these challenges by facilitating transfer learning between source and target domains. In parallel, the unification of dominant semi-supervised learning techniques has illustrated unprecedented potential for utilizing unlabeled data to train classification models in defiance of discouragingly meager sets of annotated data.

In this thesis, a novel domain adaptation algorithm -- Domain Adaptive Fusion (DAF) -- is proposed, which encourages a domain-invariant linear relationship between the pixel-space of different domains and the prediction-space while being trained under a domain adversarial signal. The thoughtful combination of key components in unsupervised domain adaptation and semi-supervised learning enable DAF to effectively bridge the gap between source and target domains. Experiments performed on computer vision benchmark datasets for domain adaptation endorse the efficacy of this hybrid approach, outperforming all of the baseline architectures on most of the transfer tasks.
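A hedged sketch of the two ingredients the abstract names: a mixup-style linear consistency between mixed pixels and mixed predictions across domains, plus a domain-adversarial signal via gradient reversal. The network heads (feat, clf, dom), loss weighting, and mixup parameter are assumptions, not the thesis's exact DAF architecture:

import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips gradients on the backward pass,
    # so the feature extractor learns domain-invariant features.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def daf_losses(feat, clf, dom, x_src, y_src, x_tgt, alpha=0.2):
    # Supervised loss on labeled source images.
    f_src = feat(x_src)
    cls_loss = F.cross_entropy(clf(f_src), y_src)

    # Cross-domain mixup: the prediction on mixed pixels should match the
    # same linear mix of the per-domain predictions.
    # (Assumes x_src and x_tgt are batches of the same shape.)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1 - lam) * x_tgt
    p_mix = F.log_softmax(clf(feat(x_mix)), dim=1)
    with torch.no_grad():
        p_lin = (lam * F.softmax(clf(f_src), dim=1)
                 + (1 - lam) * F.softmax(clf(feat(x_tgt)), dim=1))
    mix_loss = F.kl_div(p_mix, p_lin, reduction="batchmean")

    # Domain-adversarial loss: the domain head separates source from target
    # while reversed gradients push the features to be indistinguishable.
    d_src = dom(GradReverse.apply(f_src))
    d_tgt = dom(GradReverse.apply(feat(x_tgt)))
    dom_loss = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
                + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))

    return cls_loss + mix_loss + dom_loss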
Date Created
2019
Agent

Resonant microbeam high resolution vibrotactile haptic display

Description
One type of assistive device for the blind attempts to convert visual information into information that can be perceived through another sense, such as touch or hearing. A vibrotactile haptic display assistive device consists of an array of vibrating elements placed against the skin, allowing the blind individual to receive visual information through touch. However, these approaches face two significant technical challenges: large vibration element size and the number of microcontroller pins required for vibration control, both causing excessively low resolution of the device. Here, I propose and investigate a type of high-resolution vibrotactile haptic display that overcomes these challenges by utilizing a ‘microbeam’ as the vibrating element. These microbeams can be actuated using only one microcontroller pin connected to a speaker or surface transducer. This approach could solve the low-resolution problem currently present in all haptic displays. In this paper, the results of an investigation into the manufacturability of such a device, simulation of its vibrational characteristics, and prototyping and experimental validation of the device concept are presented. Possible reasons for the frequency shift between the forced or free response of the beams and the frequency calculated from a lumped-mass approximation are investigated. It is found that one important reason for the frequency shift is the size effect, the dependency of the elastic modulus on the size and type of material. This size effect in A2 tool steel is investigated for the micro/meso-scale cantilever beams of the proposed system.
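For reference, the standard textbook comparison behind that lumped-mass discussion (general Euler-Bernoulli beam formulas, not results from this thesis): the first natural frequency of a uniform cantilever versus a lumped-mass estimate built from the cantilever's tip stiffness,

\[
f_{\text{beam}} = \frac{(1.8751)^2}{2\pi}\sqrt{\frac{EI}{\rho A L^{4}}},
\qquad
f_{\text{lumped}} = \frac{1}{2\pi}\sqrt{\frac{k}{m_{\text{eff}}}},
\quad
k = \frac{3EI}{L^{3}},\;
m_{\text{eff}} \approx 0.2357\, m_{\text{beam}},
\]

where E is the elastic modulus, I the area moment of inertia, ρ the density, A the cross-sectional area, and L the beam length. A size-dependent E, as in the size effect noted above, shifts both estimates.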
Date Created
2019
Agent

Improving Patient Medical Awareness through an Android-based Assistant

Description
As modern advancements in medical technology continue to increase overall life expectancy, hospitals and healthcare systems are finding new and more efficient ways of storing extensive amounts of patient healthcare information. This progression finds people increasingly dependent on hospitals as the primary providers of medical data, ranging from immunization records to surgical history. However, the benefits of carrying a copy of one’s personal health information are becoming increasingly evident. This project aims to create a simple, secure, and cohesive application that stores and retrieves user health information backed by Google’s Firebase cloud infrastructure. Data was collected both to explore the current need for such an application and to test the usability of the product. The former was done using a multiple-choice survey distributed through social media to understand the necessity for a patient-held health file (PHF). Subsequently, user testing was performed with the intent to track the success of our application in meeting those needs. According to the data, there was a trend suggesting a significant need for a healthcare information storage device. This application, allowing for efficient and simple medical information storage and retrieval, was created for a target audience of those seeking to improve their medical information awareness, with a primary focus on the elderly population. Specific correlations between the frequency of physician visits and app usage were identified to target the potential use cases of our app. The outcome of this project succeeded in meeting the significant need for increased patient medical awareness in the healthcare community.
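A minimal sketch of the storage-and-retrieval pattern described above, using Firebase's Python admin SDK (the collection layout and field names are illustrative, not the app's actual schema):

import firebase_admin
from firebase_admin import credentials, firestore

# Assumes a service-account key file; the path is a placeholder.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

def save_record(user_id, record):
    # Store one health record (e.g., an immunization entry) under the user.
    db.collection("users").document(user_id) \
      .collection("health_records").add(record)

def load_records(user_id):
    # Retrieve all of a user's stored health records.
    docs = (db.collection("users").document(user_id)
              .collection("health_records").stream())
    return [d.to_dict() for d in docs]

save_record("demo-user", {"type": "immunization", "vaccine": "MMR", "date": "2019-01-15"})
print(load_records("demo-user"))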
Date Created
2019-05
Agent

MisophoniAPP: A Website for Treating Misophonia

Description
This paper introduces MisophoniAPP, a new website for managing misophonia. It will briefly discuss the nature of this chronic syndrome, which is the experience of reacting strongly to certain everyday sounds, or “triggers”. Various forms of Cognitive Behavioral Therapy and the Neural Repatterning Technique are currently used to treat misophonia, but they are not guaranteed to work for every patient. Few apps exist to help patients with their therapy, so this paper describes the design and creation of a new website that combines exposure therapy, relaxation, and gamification to help patients alleviate their misophonic reflexes.
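As a toy illustration of the exposure-plus-relaxation pacing such a site might use (the scheduling rule is an assumption for illustration, not MisophoniAPP's actual design):

def build_session(levels=5, reps_per_level=3):
    # Alternate short trigger exposures at gradually increasing volume
    # with relaxation breaks, a common graded-exposure pattern.
    steps = []
    for level in range(1, levels + 1):
        volume = round(level / levels, 2)  # e.g. 0.2, 0.4, ..., 1.0
        steps.extend([("trigger", volume)] * reps_per_level)
        steps.append(("relaxation", None))
    return steps

for step in build_session(levels=3, reps_per_level=2):
    print(step)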
Date Created
2019-05
Agent

Web Application for Sorority Involvement Tracking

Description
Most collegiate organizations aim to unite students with common interests and engage them in a like-minded community of peers. A significant sub-group of these organizations are classified under sororities and fraternities, commonly known as Greek Life. Member involvement is a crucial element for Greek Life, as participation in philanthropic events, chapter meetings, rituals, recruitment events, etc. often reflects the state of the organization. The purpose of this project is to create a web application that allows members of an Arizona State University sorority to view their involvement activity as outlined by the chapter. Maintaining the balance between academics, sleep, a social life, and extra-curricular activities/organizations can be difficult for college students. With the use of this website, members can view their attendances, absences, and study/volunteer hours to know their progress towards the involvement requirements set by the chapter. This knowledge makes it easier to plan schedules and alleviates some of the stress associated with the time-management of sorority events, assignments/homework, and studying. It is also designed for the sorority leadership to analyze and track the participation of the membership. Members can submit their participation in events, eliminating the need for manual counting and calculations. The website administrator(s) can view and approve data from any and all members. The website was developed using HTML, CSS, and JavaScript in conjunction with Firebase for the back-end database. Human-Computer Interaction (HCI) tools and techniques were used throughout the development process to aid in prototyping, visual design, and evaluation. The front-end appearance of the website was designed to mimic the data manipulation used in the current involvement tracking system while presenting it in a more personalized and aesthetically pleasing manner.
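A small sketch of the involvement-progress aggregation the site performs (shown here in Python for illustration; the live site does this in JavaScript against Firebase, and the requirement values and record format below are hypothetical):

# Hypothetical chapter requirements per category.
REQUIREMENTS = {"events": 8, "study_hours": 10, "volunteer_hours": 5}

def progress(records):
    # Sum only administrator-approved submissions by category and
    # report each total against its requirement.
    totals = {key: 0 for key in REQUIREMENTS}
    for rec in records:
        if rec.get("approved") and rec["category"] in totals:
            totals[rec["category"]] += rec["amount"]
    return {key: (totals[key], REQUIREMENTS[key]) for key in REQUIREMENTS}

records = [
    {"category": "events", "amount": 1, "approved": True},
    {"category": "study_hours", "amount": 2.5, "approved": True},
    {"category": "volunteer_hours", "amount": 3, "approved": False},  # pending
]
print(progress(records))  # {'events': (1, 8), 'study_hours': (2.5, 10), ...}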
Date Created
2018-12
Agent