Augmented Reality (AR) has progressively demonstrated its helpfulness for novices learning highly complex and abstract concepts by visualizing details in an immersive environment. However, some studies show that similar results can also be obtained in environments that do not involve AR. To explore the potential of AR in advancing transformative engagement in education, I propose modeling facial expressions as implicit feedback while one is immersed in the environment. I developed a Unity application to record and log users' application operations and facial images. A neural network-based model, Visual Geometry Group 19 (VGG19; Simonyan and Zisserman, 2014), is adopted to recognize emotions from the captured facial images. A within-subject user study was designed and conducted to assess the differences in sentiment and user engagement between AR and non-AR tasks. To analyze the collected data, Dynamic Time Warping (DTW) was applied to identify the emotional similarities between AR and non-AR environments. The results indicate that users exhibited more varied emotion patterns and performed more application operations in the AR tasks than in the non-AR tasks. The emotion patterns observed in the analysis show that non-AR tasks provide less implicit feedback than AR tasks. The DTW analysis reveals that users' emotion change patterns appear to be more distant from neutral emotions in AR tasks than in non-AR tasks. Succinctly put, users in the AR task used the application more actively and exhibited a wider range of emotions while operating it.
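As a rough illustration of the DTW analysis described above, the Python sketch below computes a dynamic-programming DTW distance between two emotion time series. The per-frame scores, the "distance from neutral" framing, and the function name are illustrative assumptions, not the thesis's actual code or data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical per-frame "distance from neutral" scores produced by an
# emotion classifier such as VGG19 (values are illustrative only).
ar_task     = np.array([0.1, 0.4, 0.7, 0.6, 0.8, 0.5])
non_ar_task = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.1])

print(dtw_distance(ar_task, non_ar_task))
```

A larger DTW distance between a task's emotion series and a flat neutral baseline would correspond to the "more distant from neutral" pattern the abstract reports for AR tasks.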
In this Barrett Honors Thesis, I developed a model to quantify the complexity of Sankey diagrams, a visualization technique that shows flow between groups. To do this, I created a carefully controlled dataset of synthetic Sankey diagrams of varying sizes as study stimuli. Then, a pair of online crowdsourced user studies were conducted and analyzed. User performance on Sankey diagrams of varying size and features (number of groups, number of timesteps, and number of flow crossings) was algorithmically modeled as a formula that quantifies the complexity of these diagrams. Model accuracy was measured against the performance of users in the second crowdsourced study. The results of my experiment demonstrate that the algorithmic complexity formula I created closely models the visual complexity of the Sankey diagrams in the dataset.
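The thesis's actual formula is not reproduced here, but a minimal sketch of the general approach, fitting user performance against the three features named above, might look like the following Python snippet. The feature values, performance scores, and the choice of a linear model are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical stimulus features per diagram:
# (number of groups, number of timesteps, number of flow crossings)
X = np.array([
    [4,  3,  2],
    [6,  4,  8],
    [8,  5, 15],
    [10, 6, 30],
])
# Hypothetical user-performance score per diagram (e.g., task accuracy).
y = np.array([0.95, 0.85, 0.70, 0.55])

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)   # weight of each feature
print("intercept:  ", model.intercept_)
```

The fitted weights would indicate how strongly each structural feature (groups, timesteps, crossings) drives perceived complexity in such a model.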
While significant qualitative, user study-focused research has been done on augmented reality, relatively few studies have been conducted on multiple, co-located, synchronously collaborating users in augmented reality. Recognizing the need for more collaborative user studies in augmented reality and the value such studies present, a user study of collaborative decision-making in augmented reality was conducted to investigate the following research question: "Does presenting data visualizations in augmented reality influence the collaborative decision-making behaviors of a team?" This user study evaluates how viewing data visualizations with augmented reality headsets impacts collaboration in small teams, compared with viewing them together on a single 2D desktop monitor as a baseline. Teams of two participants performed closed and open-ended evaluation tasks to collaboratively analyze data visualized both in augmented reality and on a desktop monitor. Multiple means of collecting and analyzing data were employed to develop a well-rounded context for results and conclusions, including software logging of participant interactions, qualitative analysis of video recordings of participant sessions, and pre- and post-study participant questionnaires. The results indicate that augmented reality does not significantly change the quantity of team member communication but does impact the means and strategies participants use to collaborate.
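As a minimal sketch of what the software logging analysis might involve, the Python snippet below tallies interaction events per condition from a per-session CSV log. The file layout, column names, and condition labels are assumptions for illustration, not the study's actual instrumentation.

```python
import csv
from collections import Counter

def count_events(log_path):
    """Count interaction events per (condition, event) pair from a
    hypothetical session log with columns: timestamp, condition, event."""
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["condition"], row["event"])] += 1
    return counts

# Example usage with a hypothetical team session log:
# for key, n in sorted(count_events("team01_log.csv").items()):
#     print(key, n)
```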
This thesis serves as a baseline for the potential of prediction through machine learning (ML) in baseball. Hopefully, it will also serve as motivation for future work to expand on and realize the potential of sabermetrics, advanced Statcast data, and machine learning. The problem this thesis attempts to solve is predicting the outcome of a pitch: given proper pitch data and situational data, is it possible to predict the result of a pitch? Here, the result refers to the specific outcome of a pitch beyond ball or strike; for example, if the hitter puts the ball in play for a double, the model attempts to predict that type of outcome. Before diving into my methods, I take a deep look into sabermetrics, advanced statistics, and the history of the two in Major League Baseball. After this, I describe my machine learning experiment. First, I found a dataset suitable for training a pitch prediction model; I then analyzed the features and used feature engineering to select a set of 16 features; and finally, I trained and tested a pair of ML models on the data. I used a decision tree classifier and a random forest classifier, each of which performed at around 60% accuracy. I also experimented with a neural network approach using a long short-term memory (LSTM) model to improve my score, but this approach requires more feature engineering to beat the simpler classifiers. In this thesis, I show examples of five hitters on which I tested the models and report the accuracy for each hitter. This work shows promise that advanced classification models (likely requiring more feature engineering) can provide even better prediction outcomes, perhaps with 70% accuracy or higher. There is much potential for future work to improve on this thesis, mainly through the proper construction of a neural network, more in-depth feature analysis/selection/extraction, and data visualization.
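A minimal sketch of the classifier setup described above, using scikit-learn's decision tree and random forest on stand-in data, is shown below. The synthetic features, outcome classes, and train/test split are illustrative assumptions, not the thesis's dataset or exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for pitch and situational features (e.g., velocity, spin, count);
# 16 features per pitch, matching the feature count described above.
rng = np.random.default_rng(0)
X = rng.random((1000, 16))
y = rng.integers(0, 5, size=1000)  # hypothetical pitch-outcome classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(random_state=0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, "accuracy:", clf.score(X_test, y_test))
```

With real pitch data in place of the random arrays, this is the general shape of the comparison between the two tree-based classifiers the abstract reports at around 60% accuracy.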