Understanding the Effect of Animation and its Speed on User Enjoyment

Description

Providing a good user experience is complex and involves multiple factors, one of which is animation. Animation can be tricky to get right and needs to be understood by designers: animations that are too fast might not accomplish anything, while animations that are too slow could hold the user up and cause frustration.

This study explores the subject of animation and its speed by trying to answer the following questions: 1) Do people notice whether an animation is present? 2) Does animation affect the enjoyment of a transition? 3) If animation does affect enjoyment, what is the effect of different animation speeds?

The study was conducted using three prototypes of an application for ordering bottled water, in which the transitions between different brands of bottled water were animated at 0 ms, 300 ms, and 650 ms. A survey was conducted to see whether participants could spot any difference between the prototypes and, if so, which one they preferred.

It was found that most people did not notice any difference between the prototypes, and even those who did showed no preference for a particular speed.
Date Created
2019

Understanding Humans to Better Understand Robots in a Joint-Task Environment: The Study of Surprise and Trust in Human-Machine Physical Coordination

Description

Human-robot interaction has expanded immensely within dynamic environments. The goals of human-robot interaction are to increase productivity, efficiency, and safety. For the integration of human-robot interaction to be seamless and effective, humans must be willing to trust the capabilities of assistive robots. A major priority for human-robot interaction research should be to understand how human dyads have historically been effective within joint-task settings; this will help ensure that the same goals can be met in human-robot settings. The aim of the present study was to examine human dyads and the effects of an unexpected interruption. Interpersonal and individual levels of trust were studied in order to draw appropriate conclusions. Seventeen undergraduate- and graduate-level dyads were recruited from Arizona State University. Participants were assigned to either a surprise condition or a baseline condition, and each individually took two surveys to obtain an accurate measure of dispositional and interpersonal trust. The findings showed that participants' levels of interpersonal trust were average. Surprisingly, participants in the surprise condition subsequently showed moderate to high levels of dyad trust, suggesting that participants became more reliant on their partners when interrupted by a surprising event. Future studies will apply this knowledge to human-robot interaction in order to mimic the seamless team interaction shown in historically effective human dyads.
Date Created
2019

The Effects of Confirmation Bias and Susceptibility to Deception on an Individual’s Choice to Share Information

Description

As deception in cyberspace becomes more dynamic, research in this area should also take a dynamic approach to battling deception and false information. Research has previously shown that people are no better than chance at detecting deception, and deceptive information in cyberspace, specifically on social media, is not exempt from this pitfall. Current practices in social media rely on users to detect false information and use appropriate discretion when deciding to share information online. This is ineffective and will predictably end with users unable to discern true from false information at all, as deceptive information becomes more difficult to distinguish from true information. To proactively combat inaccurate and deceptive information on social media, research must be conducted to understand not only the interaction effects of false content and user characteristics, but also the user behavior that stems from this interaction. This study investigated the effects of confirmation bias and susceptibility to deception on an individual's choice to share information, specifically to understand how these factors relate to the sharing of false controversial information.
Date Created
2019

Communicating intent in autonomous vehicles

Description

The prospects of commercially available autonomous vehicles are surely tantalizing; however, the implementation of these vehicles and their strain on the social dynamics between motorists and pedestrians remain unknown. Questions concerning how autonomous vehicles will communicate safety and intent to pedestrians remain largely unanswered. This study examines the efficacy of various proposed technologies for bridging the communication gap between self-driving cars and pedestrians. Displays utilizing words like "safe" and "danger" seem to be effective in communicating with pedestrians and other road users. Future research should attempt to study different external notification interfaces in real-life settings to more accurately gauge pedestrian responses.
Date Created
2019

Decision Support for Crew Scheduling using Automated Planning

Description

Allocating tasks for a day's or week's schedule is known to be a challenging problem, and it intensifies many-fold in multi-agent settings. A planner, or group of planners, deciding such a task-assignment schedule must have a comprehensive perspective on (1) the entire array of tasks to be scheduled, (2) the constraints, such as the importance and ordering of tasks, and (3) the individual abilities of the operators. One example of such scheduling is the crew scheduling done for astronauts who will spend time aboard the International Space Station (ISS). The schedule for the ISS crew is decided before the mission starts: human planners take part in the decision-making process to determine the timing of activities over multiple days for multiple crew members. Given the unpredictability of individual assignments and the limitations of the various operators, deciding upon a satisfactory timetable is a challenging task. The objective of the current work is to develop an automated decision assistant that helps human planners come up with an acceptable task schedule for the crew, while ensuring that the human planners remain in the driver's seat throughout the decision-making process.

The decision assistant makes use of automated planning technology to assist human planners. The guidelines of Naturalistic Decision Making (NDM) and human-in-the-loop decision making were followed to make sure that the human is always in the driver's seat. The use cases considered are standard situations that come up during decision making in crew scheduling. The effectiveness of the automated decision assistance was evaluated by setting it up for domain experts on a comparable domain: scheduling courses for master's students. The results of the user study evaluating the effectiveness of the automated decision support were subsequently published.
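The core scheduling problem described above, assigning ordered tasks to operators with differing abilities, can be illustrated with a minimal greedy sketch. This is not the thesis's actual planner; the task names, the `deps` precedence map, and the `abilities` table are all hypothetical:

```python
from collections import deque

def schedule(tasks, deps, abilities, operators):
    """Greedy sketch: order tasks by precedence (Kahn's algorithm),
    then assign each task to the first operator able to perform it."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    ready = deque(t for t in tasks if indeg[t] == 0)
    plan = {op: [] for op in operators}
    while ready:
        t = ready.popleft()
        for op in operators:            # first capable operator wins
            if t in abilities[op]:
                plan[op].append(t)
                break
        for u in tasks:                 # release tasks whose prerequisites are done
            if t in deps.get(u, ()):
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
    return plan

# hypothetical example: task "b" must follow "a"; operator "y" can only do "c"
plan = schedule(["a", "b", "c"], {"b": ["a"]},
                {"x": {"a", "b"}, "y": {"c"}}, ["x", "y"])
```

A real planner would also weigh task importance and timing, which is exactly where the human planner stays in the loop.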
Date Created
2019

Effects of cell phone notification levels on driver performance

Description

Previous literature was reviewed in an effort to further investigate the link between cell phone notification levels and their effects on driver distraction. Mind-wandering has been suggested as an explanation for distraction and has previously been operationalized with oculomotor movement. Mind-wandering's definition is debated, but in this research it was defined as off-task thoughts that occur because the task does not require full cognitive capacity. Drivers were asked to operate a driving simulator and follow audio turn-by-turn directions while experiencing each of three cell phone notification levels: Control (no texts), Airplane (texts with no notifications), and Ringer (audio notifications). Measures of Brake Reaction Time, Headway Variability, and Average Speed were used to operationalize driver distraction. Drivers showed higher Brake Reaction Time and Headway Variability, along with lower Average Speed, in both experimental conditions compared to the Control condition, which is consistent with previous research implying a distracted state. Oculomotor movement was measured as the percent of time the participant was looking at the road; there was no significant difference between the conditions on this measure. The results of this research indicate that, even while the driver is not interacting with a cell phone, no audio notification is required to induce a state of distraction. This phenomenon could not be linked to mind-wandering.
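As a concrete illustration, the two main distraction measures named above can be computed directly from simulator logs. The sketch below assumes hypothetical units and data (headways in seconds, brake reaction times in milliseconds), not the study's actual recordings or analysis code:

```python
from statistics import mean, pstdev

def headway_variability(headways_s):
    # operationalized here as the (population) standard deviation
    # of time headway to the lead vehicle, in seconds
    return pstdev(headways_s)

def mean_brake_reaction_time(reaction_times_ms):
    # average latency between a lead-vehicle brake event and the
    # participant's brake press, in milliseconds
    return mean(reaction_times_ms)
```

Under this operationalization, a perfectly steady follower has zero headway variability, and distraction shows up as a larger spread of headways and slower brake reactions.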
Date Created
2019

Attribution biases and trust development in physical human-machine coordination: blaming yourself, your partner or an unexpected event

Description

Reading partners' actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners' actions and falsely assign blame after an unexpected event. These biases thus further influence people's trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans. However, these improvements may interfere with people's ability to accurately calibrate trust in machines and their capabilities, which requires an understanding of the effect of attribution biases on human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and people's assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and report the appropriate level of information about external conditions.
Date Created
2019

Training in a modern age

Description

This study was undertaken to ascertain to what degree, if any, virtual reality training was superior to monitor-based training. By analyzing the results in a 2x3 ANOVA, it was found that little difference in training resulted from using virtual reality versus monitor interaction to facilitate training. The data did suggest that training involving richly textured environments might be more beneficial under virtual reality conditions; however, nothing significant was found in the analysis. It might be possible to obtain significance by comparing a higher-fidelity virtual reality setup with a monitor trial.
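For reference, the F-test logic underlying such an ANOVA can be sketched for the simpler one-way case. The data here are hypothetical, and this collapses the study's actual 2x3 factorial design to a single factor purely for illustration:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square
    for a one-way ANOVA over a list of sample lists."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F indicates that group means differ by more than within-group noise would predict; a factorial 2x3 analysis additionally partitions the between-group variance into two main effects and an interaction.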
Date Created
2019

AI in Radiology: How the Adoption of an Accountability Framework can Impact Technology Integration in the Expert-Decision-Making Job Space

Description

Rapid advancements in Artificial Intelligence (AI), Machine Learning, and Deep Learning technologies are widening the playing field for automated decision assistants in healthcare. The field of radiology offers a unique platform for this technology due to its repetitive work structure, its ability to leverage large data sets, and its high potential for clinical and social impact. Several technologies in cancer screening, such as Computer Aided Detection (CAD), have broken the barrier from research into reality through successful outcomes with patient data (Morton, Whaley, Brandt, & Amrami, 2006; Patel et al., 2018). Technologies such as the IBM Medical Sieve are generating excitement with the potential for increased impact through the addition of medical record information ("Medical Sieve Radiology Grand Challenge", 2018). As the capabilities of automation increase and become part of expert decision-making jobs, however, the careful consideration of its integration into human systems is often overlooked. This paper aims to identify how healthcare professionals and system engineers implementing and interacting with automated decision-making aids in radiology should take bureaucratic, legal, professional, and political accountability concerns into consideration. This Accountability Framework is modeled after Romzek and Dubnick's (1987) public administration framework and expanded upon through an analysis of literature on accountability definitions and examples in the military, healthcare, and research sectors. A cohesive understanding of this framework and the human concerns it raises helps drive the questions that, if fully addressed, create the potential for a successful integration and adoption of AI in radiology and ultimately the care environment.
Date Created
2019-05

Performance Expectations of Branded Autonomous Vehicles: Measuring Brand Trust Using Pathfinder Associative Networks

Description

Future autonomous vehicle systems will be diverse in design and functionality since they will be produced by different brands. In the automotive industry, the trustworthiness of a vehicle is closely tied to its perceived safety. Trust involves dependence on another agent in an uncertain situation. Perceptions of system safety, trustworthiness, and performance are important because they guide people's behavior towards automation; specifically, these perceptions shape how reliant people believe they can be on the system to do a certain task. Over- or under-reliance can be a safety concern because it leads the person to allocate tasks between themselves and the system in inappropriate ways. If a person trusts a brand, they may also believe the brand's technology will keep them safe. The present study measured brand trust associations and performance expectations for safety across twelve different automobile brands using an online survey.

The literature and the results of the present study suggest that perceived trustworthiness for safety of both the automation and its brand could together impact trust. Results revealed that brands closely related to the trust-based attributes Confidence, Secure, Integrity, and Trustworthiness were expected to produce autonomous vehicle technology that performs more safely, while brands more closely related to the trust-based attributes Harmful, Deceptive, Underhanded, Suspicious, Beware, and Familiar were expected to produce autonomous vehicle technology that performs less safely.

These findings contribute to both the fields of Human-Automation Interaction and Consumer Psychology. Typically, brands and automation are discussed separately; however, this work suggests an important relationship may exist. A deeper understanding of brand trust as it relates to autonomous vehicles can help producers understand the potential for over- or under-reliance and create safer systems that help users calibrate trust appropriately. Considering the impact on safety, more research should be conducted to explore brand trust and expectations for performance across various brands.
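The Pathfinder network scaling named in the title prunes a dense similarity graph down to its most salient links. A minimal sketch of the common PFNET(r=∞, q=n−1) variant appears below; the dissimilarity matrix is hypothetical, not the study's brand data:

```python
def pathfinder_pfnet(w):
    """Keep an edge (i, j) only if no alternative path has a smaller
    maximum edge weight (minimax distance) than the direct link --
    the PFNET(r=inf, q=n-1) criterion. `w` is a symmetric
    dissimilarity matrix with zeros on the diagonal."""
    n = len(w)
    d = [row[:] for row in w]           # minimax path distances
    for k in range(n):                  # Floyd-Warshall variant:
        for i in range(n):              # path cost = worst edge on path
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    # an edge survives iff the direct weight equals the minimax distance
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if w[i][j] <= d[i][j]}

# three hypothetical items: the 0-2 link (weight 3) is dominated by
# the 0-1-2 path, whose worst edge weighs only 1, so it is pruned
edges = pathfinder_pfnet([[0, 1, 3], [1, 0, 1], [3, 1, 0]])
```

Applied to brand-attribute dissimilarities, the surviving links form the associative network whose structure the study interprets.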
Date Created
2018