The increasing use of Internet of Things (IoT) devices across physical systems has provided a platform for continuous data collection, real-time monitoring, and the extraction of useful insights. Limited computing power and constrained resources on IoT devices have driven physical systems to rely on external resources such as cloud computing for compute-intensive and data-intensive processing. Recently, physical environments have begun to explore the use of edge devices for handling complex processing. However, these environments may face many challenges, such as uncertainty of device availability, uncertainty of data relevance, and a large set of geographically dispersed devices. This research proposes the design of a reliable distributed management system that focuses on the following objectives: 1. improving the success rate of task completion in uncertain environments, 2. enhancing the reliability of applications, and 3. supporting latency-sensitive applications. The main modules of the proposed system are:
1. A novel proactive user recruitment approach to improve the success rate of task completion.
2. Contextual data acquisition and integration of false data detection to enhance the reliability of applications.
3. Novel distributed management of compute resources to achieve real-time monitoring and support highly responsive applications.
User recruitment approaches select the devices onto which computation is offloaded. The proposed proactive user recruitment module selects an optimized set of devices that match the resource requirements of the application. The contextual data acquisition module relies on the application's contextual requirements to identify the data sources that are most useful to it. The proposed reliable distributed management system can be used as a framework for offloading latency-sensitive applications across volunteer edge devices.
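To make the recruitment idea concrete, here is a minimal Python sketch of one way such proactive device selection could look, assuming hypothetical device records with CPU, memory, and availability fields; the greedy strategy and field names below are illustrative assumptions, not the dissertation's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu: float          # available CPU (normalized cores)
    mem: float          # available memory (GB)
    p_available: float  # estimated probability the device stays reachable

def recruit(devices, cpu_needed, mem_needed, max_devices):
    """Greedy proactive recruitment: prefer devices most likely to remain
    available until the combined resources cover the task's requirements."""
    chosen, cpu_acc, mem_acc = [], 0.0, 0.0
    for d in sorted(devices, key=lambda d: d.p_available, reverse=True):
        if len(chosen) == max_devices:
            break
        chosen.append(d)
        cpu_acc += d.cpu
        mem_acc += d.mem
        if cpu_acc >= cpu_needed and mem_acc >= mem_needed:
            return chosen
    return None  # requirements cannot be met with the available devices

if __name__ == "__main__":
    pool = [Device("edge-a", 2.0, 4.0, 0.9),
            Device("edge-b", 1.0, 2.0, 0.6),
            Device("edge-c", 4.0, 8.0, 0.8)]
    print(recruit(pool, cpu_needed=5.0, mem_needed=8.0, max_devices=3))
```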
Arizona State course enrollment regularly reaches triple digits. Despite the large enrollment numbers, the level of communication among students remains relatively low. Students often create Discord servers to keep in touch with classmates, but this requires each individual student to track down the invite link. The purpose of this project is to create an inviting chat service for students with minimal barriers to entry. This website, https://gibbl.io, offers a chat room for every class at ASU, making it simple for students to maintain communication.
Artificial Intelligence is quickly growing to be an influential part of our daily lives. Because of this, we believe it is important to analyze how cultural perceptions influence how we interact with and develop technology. We decided to focus on India due to its economic stature, cultural influence, and impact on the technology industry.
One of the main challenges in testing artificial intelligence (AI) enabled cyber-physical systems (CPS), such as autonomous driving systems and internet-of-things (IoT) medical devices, is the presence of machine learning components, for which formal properties are difficult to establish. In addition, interactions among operational components, the inclusion of a human in the loop, and environmental changes result in a myriad of safety concerns, not all of which can be comprehensively tested before deployment or even detected during the design and testing phase. This dissertation identifies major challenges in the safety verification of AI-enabled safety-critical systems and addresses the safety problem by proposing an operational safety verification technique that relies on solving the following subproblems: 1. Given input/output operational traces collected from sensors and actuators, automatically learn a hybrid automaton (HA) representation of the AI-enabled CPS. 2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field. This dissertation presents novel approaches for learning hybrid automata models, for both linear and nonlinear CPS, from time-series traces collected during real-world operation of the AI-enabled CPS. The learned model allows operational safety to be stringently evaluated by comparing the learned HA model against a reference specification model of the system. The proposed techniques are evaluated on the artificial pancreas control system.
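As a rough illustration of the first subproblem, the sketch below segments a one-dimensional operational trace into regions with approximately constant linear dynamics, each of which could serve as a candidate mode of a hybrid automaton. The window size, slope threshold, and least-squares fitting are illustrative assumptions, not the dissertation's learning algorithm.

```python
import numpy as np

def segment_modes(y, window=5, slope_tol=0.5):
    """Split a 1-D trace into segments with roughly constant slope.
    Each segment is a candidate discrete mode of a hybrid automaton."""
    slopes = []
    for i in range(len(y) - window):
        t = np.arange(window)
        slope, _ = np.polyfit(t, y[i:i + window], 1)  # local linear fit
        slopes.append(slope)
    boundaries = [0]
    for i in range(1, len(slopes)):
        if abs(slopes[i] - slopes[boundaries[-1]]) > slope_tol:
            boundaries.append(i)
    segments = []
    for start, end in zip(boundaries, boundaries[1:] + [len(y)]):
        seg = y[start:end]
        if len(seg) < 2:
            continue
        t = np.arange(len(seg))
        slope, intercept = np.polyfit(t, seg, 1)  # per-mode flow estimate
        segments.append({"start": start, "flow": (slope, intercept)})
    return segments

if __name__ == "__main__":
    trace = np.concatenate([np.linspace(0, 25, 25),   # mode 1: rising
                            np.full(25, 25.0),        # mode 2: steady
                            np.linspace(25, 0, 25)])  # mode 3: falling
    for mode in segment_modes(trace):
        print(mode)
```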
Languages, especially gestural and sign languages, are best learned in immersive environments with rich feedback. Computer-Aided Language Learning (CALL) solutions for spoken languages have successfully incorporated some feedback mechanisms, but no such solution exists for signed languages. Computer-Aided Sign Language Learning (CASLL) is a recent and promising field of research made feasible by advances in Computer Vision and Sign Language Recognition (SLR). Leveraging existing SLR systems for feedback-based learning is not feasible because their decision processes are not human-interpretable and do not facilitate conceptual feedback to learners. Thus, fundamental research is needed toward designing systems that are modular and explainable. The explanations from these systems can then be used to produce feedback that aids the learning process.
In this work, I present novel approaches for recognizing the location, movement, and handshape components of American Sign Language (ASL) using both wrist-worn sensors and webcams. Finally, I present Learn2Sign (L2S), a chatbot-based AI tutor that can provide fine-grained conceptual feedback to learners of ASL using these modular recognition approaches. L2S is designed to provide feedback directly related to the fundamental concepts of ASL using explainable AI. I present system performance results in terms of precision, recall, and F1 scores, as well as validation results on the learning outcomes of users. Retention and execution tests for 26 participants across 14 different ASL words learned using Learn2Sign are presented. Finally, I also present the results of a post-usage usability survey for all participants. In this work, I found that learners who received live feedback on their executions improved both their execution and retention performance. The average increase in execution performance was 28 percentage points, and that for retention was 4 percentage points.
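As a toy illustration of how modular recognition can drive conceptual feedback, the sketch below maps hypothetical per-component confidences (location, movement, handshape) to feedback messages; the threshold and message templates are assumptions, not the actual L2S pipeline.

```python
# Each module returns a confidence that its component of the attempted sign
# matched the reference; feedback is generated only for components that fall short.
FEEDBACK_TEMPLATES = {
    "location":  "Try signing at the correct location relative to the body.",
    "movement":  "Check the movement: the hand should trace the reference path.",
    "handshape": "Adjust your handshape to match the reference sign.",
}

def conceptual_feedback(component_scores, threshold=0.7):
    """Map per-component recognition confidences to human-readable feedback."""
    messages = []
    for component, score in component_scores.items():
        if score < threshold:
            messages.append(FEEDBACK_TEMPLATES[component])
    return messages or ["Great job! All components of the sign look correct."]

if __name__ == "__main__":
    # Hypothetical confidences from the location/movement/handshape modules.
    print(conceptual_feedback({"location": 0.9, "movement": 0.55, "handshape": 0.8}))
```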
Due to the advent of easy-to-use, portable, and cost-effective brain signal sensing devices, pervasive Brain-Machine Interface (BMI) applications using electroencephalography (EEG) are growing rapidly. The main objectives of these applications are: 1) pervasive collection of brain data from multiple users, 2) processing the collected data to recognize the corresponding mental states, and 3) providing real-time feedback to end users, activating an actuator, or harvesting information for enterprises to provide further services. Developing BMI applications faces several challenges, such as a cumbersome setup procedure, low signal-to-noise ratio, insufficient signal samples for analysis, and long processing times. Internet-of-Things (IoT) technologies provide the opportunity to solve these challenges through large-scale data collection, fast data transmission, and computational offloading.
This research proposes an IoT-based framework, called BraiNet, that provides a standard design methodology for fulfilling the requirements of pervasive BMI applications, including accuracy, timeliness, energy efficiency, security, and dependability. BraiNet applies Machine Learning (ML) based solutions (e.g., classifiers and predictive models) to: 1) improve the accuracy of mental state detection on the go, 2) provide real-time feedback to users, and 3) save power on mobile platforms. However, BraiNet inherits the security vulnerabilities of IoT, due to its use of off-the-shelf software and hardware, high accessibility, and massive network size. ML algorithms, as the core technology for mental state recognition, are among the main targets for cyber attackers. Novel ML security solutions are proposed and added to BraiNet, providing analytical methodologies for tuning ML hyper-parameters to be secure against attacks.
To implement these solutions, two main optimization problems are solved: 1) maximizing accuracy while minimizing delays and power consumption, and 2) maximizing ML security while keeping accuracy high. Deep learning algorithms and delay and power models are developed to solve the former problem, while gradient-free optimization techniques, such as Bayesian optimization, are applied to the latter. To test the framework, several BMI applications are implemented, such as an EEG-based driver fatigue detector (SafeDrive), an EEG-based identification and authentication system (E-BIAS), and interactive movies that adapt to viewers' mental states (nMovie). The results from experiments on the implemented applications show the successful design of pervasive BMI applications based on the BraiNet framework.
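A minimal sketch of the second optimization problem, using random search as a simple gradient-free stand-in for Bayesian optimization: choose a hyper-parameter value that maximizes robustness against attacks while keeping accuracy above a floor. The evaluation function and trade-off below are placeholders, not BraiNet's actual models.

```python
import random

def tune_for_security(evaluate, accuracy_floor=0.85, trials=30, seed=0):
    """Gradient-free search (random search here, standing in for Bayesian
    optimization): pick the hyper-parameter value that maximizes robustness
    against attacks while keeping clean accuracy above a floor."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        reg = 10 ** rng.uniform(-4, 0)        # candidate regularization strength
        accuracy, robustness = evaluate(reg)   # user-supplied evaluation
        if accuracy < accuracy_floor:
            continue                           # violates the accuracy constraint
        if best is None or robustness > best[1]:
            best = (reg, robustness, accuracy)
    return best  # (hyper-parameter, robustness, accuracy) or None

if __name__ == "__main__":
    # Placeholder evaluation: stronger regularization helps robustness but
    # eventually hurts accuracy (a toy trade-off, not BraiNet's real models).
    def toy_evaluate(reg):
        accuracy = 0.95 - 0.3 * reg
        robustness = 0.5 + 0.4 * (1 - 10 ** (-reg))
        return accuracy, robustness

    print(tune_for_security(toy_evaluate))
```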
Humans have an excellent ability to analyze and process information from multiple domains. They also possess the ability to apply the same decision-making process when a situation is similar to their previous experience.
Inspired by humans' ability to remember past experiences and apply them when a similar situation occurs, the research community has attempted to augment neural networks with external memory to store previously learned information. Alongside this, the community has also developed mechanisms for domain-specific weight switching to handle multiple domains with a single model. Notably, these two lines of research have so far progressed independently, and the goal of this dissertation is to combine their capabilities.
This dissertation introduces a neural network module augmented with two external memories, one allowing the network to read and write information and another for performing domain-specific weight switching. Two learning tasks are proposed in this work to investigate the model's performance: solving sequences of mathematical operations and identifying actions based on color sequences. A wide range of experiments on these two tasks verifies the model's learning capabilities.
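A toy NumPy sketch of the two-memory idea: a dictionary of per-domain weights provides domain-specific weight switching, and a key-value store with content-based addressing provides the read/write memory. The layer sizes, addressing scheme, and class name are illustrative assumptions, not the dissertation's architecture.

```python
import numpy as np

class MemoryAugmentedNet:
    """Toy sketch: a linear layer whose weights are switched per domain,
    plus an external key-value memory that can be read from and written to."""

    def __init__(self, dim, domains, seed=0):
        rng = np.random.default_rng(seed)
        self.domain_weights = {d: rng.standard_normal((dim, dim)) for d in domains}
        self.keys, self.values = [], []          # external read/write memory

    def write(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def read(self, query):
        """Content-based addressing: softmax over cosine similarities."""
        keys = np.stack(self.keys)
        sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
        weights = np.exp(sims) / np.exp(sims).sum()
        return weights @ np.stack(self.values)

    def forward(self, x, domain):
        """Domain-specific weight switching followed by a memory read."""
        h = np.tanh(self.domain_weights[domain] @ x)
        self.write(h, x)                 # store the new experience
        return self.read(h)              # recall similar past experiences

if __name__ == "__main__":
    net = MemoryAugmentedNet(dim=4, domains=["math", "color"])
    x = np.ones(4)
    print(net.forward(x, domain="math"))
    print(net.forward(x, domain="color"))
```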
Mobile crowdsensing (MCS) applications leverage user data to derive useful information through data-driven evaluation of innovative user contexts and the gathering of information at a high data rate. Such access to context-rich data can potentially enable computationally intensive crowd-sourcing applications such as tracking a missing person or capturing a highlight video of an event. Using snippets and pictures captured from multiple mobile phone cameras with specific contexts can improve the data acquired in such applications. These MCS applications require efficient processing and analysis to generate results in real time. A human user, a mobile device, and their interactions cause changes in context on the mobile device, affecting the quality of the contextual data that is gathered. Using MCS data in real-time mobile applications is challenging due to the complex inter-relationship between: a) availability of context (context is available on the mobile phones, not in the cloud), b) the cost of data transfer to remote cloud servers, in terms of both communication time and energy, and c) availability of local computational resources on the mobile phone (computation may lead to rapid battery drain or increased response time). The resource-constrained mobile devices therefore need to offload some of their computation.
This thesis proposes ContextAiDe, an end-to-end architecture for data-driven distributed applications that is aware of human-mobile interactions and uses edge computing. Edge processing supports real-time applications by reducing communication costs. The goal is to optimize the quality and the cost of acquiring data using a) modeling and prediction of mobile user contexts, b) efficient strategies for scheduling application tasks on heterogeneous devices, including multi-core devices such as GPUs, and c) power-aware scheduling of virtual machine (VM) applications in cloud infrastructure, e.g., elastic VMs. The ContextAiDe middleware is integrated into the mobile application via an Android API. The evaluation consists of overhead and cost analysis in the scenario of a "perpetrator tracking" application running on the cloud, fog servers, and mobile devices. LifeMap data sets containing actual sensor data traces from mobile devices are used to simulate application runs for large-scale evaluation.
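As a schematic of the scheduling trade-off described above, the sketch below chooses among mobile, edge, and cloud execution by comparing estimated latency and energy costs; the cost model, numbers, and function names are illustrative assumptions, not ContextAiDe's actual middleware.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    compute_time_s: float   # estimated processing time on this target
    transfer_time_s: float  # time to ship the task's data to the target
    energy_j: float         # estimated energy cost on the mobile device

def pick_target(targets, latency_budget_s, energy_weight=0.1):
    """Pick the offloading target with the lowest combined latency/energy
    cost among those that meet the application's latency budget."""
    feasible = [t for t in targets
                if t.compute_time_s + t.transfer_time_s <= latency_budget_s]
    if not feasible:
        return None  # no target can meet the real-time requirement
    return min(feasible,
               key=lambda t: t.compute_time_s + t.transfer_time_s
                             + energy_weight * t.energy_j)

if __name__ == "__main__":
    options = [Target("mobile", 4.0, 0.0, 12.0),   # local: no transfer, high energy
               Target("edge",   1.5, 0.3, 2.0),    # nearby fog server
               Target("cloud",  0.8, 2.5, 1.5)]    # remote but powerful
    print(pick_target(options, latency_budget_s=3.0))
```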
Time series forecasting is the prediction of future data after analyzing past data for temporal trends. This work investigates two areas of time series forecasting: Stock Data Prediction and Opioid Incident Prediction. In this thesis, the Stock Data Prediction problem investigates methods to predict the trends in the NYSE and NASDAQ stock markets for ten different companies, nine of which are part of the Dow Jones Industrial Average (DJIA). A novel deep learning model that uses a Generative Adversarial Network (GAN) is used to predict future data, and the results are compared with existing regression techniques such as Linear, Huber, and Ridge regression, and neural network models such as Long Short-Term Memory (LSTM) models.
In this thesis, the Opioid Incident Prediction problem investigates methods to predict the locations of future opioid overdose incidents using data on past incidents. A similar deep learning model is used to predict the locations of future overdose incidents given two datasets of past incidents (the Connecticut and Cincinnati opioid incident datasets), and it is compared with existing neural network models such as Convolutional LSTMs, attention-based Convolutional LSTMs, and encoder-decoder frameworks. Experimental results on the above-mentioned datasets for both problems show the superiority of the proposed architectures over standard statistical models.
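For readers unfamiliar with adversarial forecasting, the following compact PyTorch sketch shows the general idea of training a generator to predict the next value of a series while a discriminator distinguishes real from generated continuations; the layer sizes, losses, synthetic data, and training schedule are illustrative assumptions and do not reproduce the thesis's architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a window of past values to a predicted next value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):                 # window: (batch, steps, 1)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])           # predicted next value: (batch, 1)

class Discriminator(nn.Module):
    """Scores a (window, next value) pair as real or generated."""
    def __init__(self, steps, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(steps + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, window, nxt):
        return self.net(torch.cat([window.squeeze(-1), nxt], dim=1))

def train_step(gen, disc, g_opt, d_opt, window, nxt, bce=nn.BCELoss()):
    # Discriminator: real continuations -> 1, generated continuations -> 0.
    fake = gen(window).detach()
    d_loss = bce(disc(window, nxt), torch.ones_like(nxt)) + \
             bce(disc(window, fake), torch.zeros_like(nxt))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: try to fool the discriminator.
    g_loss = bce(disc(window, gen(window)), torch.ones_like(nxt))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    steps, batch = 10, 16
    gen, disc = Generator(), Discriminator(steps)
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    series = torch.sin(torch.linspace(0, 20, 200)).unsqueeze(-1)  # toy "prices"
    for epoch in range(5):
        idx = torch.randint(0, len(series) - steps - 1, (batch,)).tolist()
        window = torch.stack([series[i:i + steps] for i in idx])   # (batch, steps, 1)
        nxt = torch.stack([series[i + steps] for i in idx])        # (batch, 1)
        print(train_step(gen, disc, g_opt, d_opt, window, nxt))
```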