Design of a Graph Neural Network Coupled with an Advantage Actor-Critic Reinforcement Learning Algorithm for Multi-Agent Navigation

Description
A Graph Neural Network (GNN) is a type of neural network architecture that operates on data consisting of objects and their relationships, which are represented by a graph. Within the graph, nodes represent objects and edges represent associations between those objects. The representation of relationships and correlations between data is unique to graph structures. GNNs exploit this feature of graphs by combining both forms of data, individual and relational, and have been designed to allow for communication and sharing of data within each neural network layer. These benefits allow each node to have an enriched perspective, or a better understanding, of its neighboring nodes and its connections to those nodes. The ability of GNNs to efficiently process high-dimensional node data and multi-faceted relationships among nodes gives them advantages over neural network architectures such as Convolutional Neural Networks (CNNs) that do not implicitly handle relational data. These quintessential characteristics of GNN models make them suitable for solving problems in which the correspondences among input data are needed to produce an accurate and precise representation of these data. GNN frameworks may significantly improve existing communication and control techniques for multi-agent tasks by implicitly representing not only information associated with the individual agents, such as agent position, velocity, and camera data, but also their relationships with one another, such as distances between the agents and their ability to communicate with one another. One such task is a multi-agent navigation problem in which the agents must coordinate with one another in a decentralized manner, using proximity sensors only, to navigate safely to their intended goal positions in the environment without collisions or deadlocks. The contribution of this thesis is the design of an end-to-end decentralized control scheme for multi-agent navigation that utilizes GNNs to prevent inter-agent collisions and deadlocks. The contributions consist of the development, simulation, and performance evaluation of an advantage actor-critic (A2C) reinforcement learning algorithm whose actor and critic networks simultaneously approximate the policy function and value function, respectively, during training. These networks are implemented using GNN frameworks for navigation by groups of 3, 5, 10, and 15 agents in simulated two-dimensional environments. It is observed that in 40% to 50% of the simulation trials, between 70% and 80% of the agents reach their goal positions without colliding with other agents or becoming trapped in deadlocks. The model is also compared to a random-run simulation, in which actions are chosen randomly for the agents, and it is observed that the model performs notably well for smaller groups of agents.
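For readers unfamiliar with the building blocks, the sketch below illustrates the general idea of coupling a graph message-passing layer with an A2C update. It is not the thesis implementation; the layer sizes, mean-neighbor aggregation, and loss weighting are assumptions for illustration only.

```python
# Minimal sketch (not the thesis implementation): a message-passing layer
# shared by the actor and critic, plus a one-step A2C advantage update.
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """Aggregate neighbor features using a 0/1 adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)
        self.neigh_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n_agents, in_dim), adj: (n_agents, n_agents)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = (adj @ x) / deg                      # mean over neighbors
        return torch.relu(self.self_lin(x) + self.neigh_lin(neigh))

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, hidden, n_actions):
        super().__init__()
        self.gnn = GraphLayer(obs_dim, hidden)
        self.policy_head = nn.Linear(hidden, n_actions)   # actor: pi(a|s)
        self.value_head = nn.Linear(hidden, 1)            # critic: V(s)

    def forward(self, obs, adj):
        h = self.gnn(obs, adj)
        return self.policy_head(h), self.value_head(h).squeeze(-1)

def a2c_loss(logits, values, actions, rewards, next_values, gamma=0.99):
    # One-step advantage: A = r + gamma * V(s') - V(s)
    dist = torch.distributions.Categorical(logits=logits)
    advantage = rewards + gamma * next_values.detach() - values
    actor_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    return actor_loss + 0.5 * critic_loss
```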
Date Created
2022

Exploring Positive Feedback in the Waggle Dance and its Effect on Collective Behavior of Honeybees

Description
Social insects collectively exploit food sources by recruiting nestmates, creating positive feedback that steers foraging effort to the best locations. The nature of this positive feedback varies among species, with implications for collective foraging. The mass recruitment trails of many ants are nonlinear, meaning that small increases in recruitment effort yield disproportionately large increases in recruitment success. The waggle dance of honeybees, in contrast, is believed to be linear, meaning that success increases proportionately to effort. However, the implications of this presumed linearity have never been tested. One such implication is the prediction that linear recruiters will equally exploit two identical food sources, in contrast to nonlinear recruiters, who randomly choose only one of them. I tested this prediction in colonies of honeybees that were isolated in flight cages and presented with two identical sucrose feeders. The results from 15 trials were consistent with linearity, with many cases of equal exploitation of the feeders. In addition, I tested the prediction that linear recruiters can reallocate their forager distribution when unequal feeders are swapped in position. Results from 15 trials were consistent with linearity, with many cases of a clear choice for the stronger food source, followed by a switch that reallocated foragers to the new location of the stronger food source. These findings show evidence of a linear pattern of nestmate recruitment, with implications for how colonies effectively distribute their foragers across available resources.
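As an illustration of the linear-versus-nonlinear distinction (not the experimental protocol), a toy recruitment simulation is sketched below; the choice function and all parameters are assumptions made only to show the qualitative contrast.

```python
# Illustrative sketch only: each new forager picks a feeder with probability
# proportional to (baseline + current foragers) ** exponent.
# exponent = 1 is linear feedback; exponent > 1 is nonlinear feedback.
import random

def simulate(exponent, steps=2000, baseline=5.0, seed=1):
    rng = random.Random(seed)
    counts = [0, 0]                      # foragers at two identical feeders
    for _ in range(steps):
        weights = [(baseline + c) ** exponent for c in counts]
        pick = 0 if rng.random() * sum(weights) < weights[0] else 1
        counts[pick] += 1
    return counts

print("linear feedback   :", simulate(1))  # both feeders remain exploited
print("nonlinear feedback:", simulate(2))  # foragers concentrate on one feeder
```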
Date Created
2022

On the Numerical Computation of Second Order Control Barrier Functions

Description
In recent years, the development of Control Barrier Functions (CBF) has allowed safety guarantees to be placed on nonlinear control-affine systems. While powerful as a mathematical tool, CBF implementations on systems with high-relative-degree constraints can become too computationally intensive for real-time control. Such deployments typically rely on the analysis of a system's symbolic equations of motion, leading to large, platform-specific control programs that do not generalize well. To address this, a more generalized framework is needed. This thesis provides a formulation for second-order CBFs for rigid open kinematic chains. An algorithm for numerically computing the safe control input of a CBF is then introduced based on this formulation. It is shown that this algorithm can be used on a broad category of systems, with specific examples shown for convoy platooning, drone obstacle avoidance, and robotic arms with many degrees of freedom. These examples show up to threefold improvements in computation time as well as a two to three order-of-magnitude reduction in program size.
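For context, a standard second-order (exponential) CBF condition and the associated quadratic program take the form below; the thesis's exact formulation for rigid open kinematic chains may differ, and the gains are design parameters.

```latex
% Standard second-order CBF condition for a control-affine system
% \dot{x} = f(x) + g(x)u with a relative-degree-two constraint h(x) >= 0;
% k_0, k_1 > 0 are design gains.
\[
  L_f^2 h(x) + L_g L_f h(x)\,u + k_1 L_f h(x) + k_0 h(x) \;\ge\; 0,
\]
\[
  u^{*}(x) \;=\; \arg\min_{u}\; \lVert u - u_{\mathrm{nom}}(x) \rVert^2
  \quad \text{subject to the inequality above.}
\]
```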
Date Created
2022

A Computational Model of Adaptive Capacity to Climate Change

Description
Adaptive capacity to climate change is the ability of a system to mitigate or take advantage of climate change effects. Research on adaptive capacity to climate change suffers from fragmentation. This is partly because there is no clear consensus around precise definitions of adaptive capacity. The aim of this thesis is to place definitions of adaptive capacity into a formal framework. I formalize adaptive capacity as a computational model written in the Idris 2 programming language. The model uses types to constrain how the elements of the model fit together. To achieve this, I analyze nine existing definitions of adaptive capacity, focusing on the key factors that shape these definitions and on the elements they share. The model is able to describe an adaptive capacity study and guide a user toward concepts lacking clarity in the study. This shows that the model is useful as a tool to think about adaptive capacity. In the future, one could refine the model by forming an ontology for adaptive capacity. One could also review the literature more systematically. Finally, one might consider turning to qualitative research methods for reviewing the literature.
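As a loose illustration of the typed-model idea, the sketch below gives a Python analogue; the thesis's actual model is written in Idris 2, whose dependent types enforce much stronger constraints, and all names here are hypothetical.

```python
# Python analogue only: types constrain how elements of an adaptive-capacity
# model fit together. The real model is in Idris 2; names are illustrative.
from dataclasses import dataclass
from enum import Enum, auto

class Driver(Enum):
    DROUGHT = auto()
    FLOODING = auto()
    HEAT = auto()

@dataclass(frozen=True)
class SystemOfInterest:
    name: str                          # e.g. a household, city, or ecosystem

@dataclass(frozen=True)
class Action:
    description: str
    mitigates: Driver                  # each action is tied to a climate driver

@dataclass(frozen=True)
class AdaptiveCapacity:
    system: SystemOfInterest
    available_actions: tuple[Action, ...]   # capacity = latent ability to act

    def responds_to(self, driver: Driver) -> bool:
        """Capacity with respect to a driver exists only if some available
        action addresses that driver's effects."""
        return any(a.mitigates == driver for a in self.available_actions)
```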
Date Created
2022-05

Robotic Swarm Control using Deep Reinforcement Learning Strategies based on Mean-Field Models

Description
As technological advancements in silicon, sensors, and actuation continue, the development of robotic swarms is shifting from the domain of science fiction to reality. Many swarm applications, such as environmental monitoring, precision agriculture, disaster response, and lunar prospecting, will require controlling numerous robots with limited capabilities and information to redistribute among multiple states, such as spatial locations or tasks. A scalable control approach is to program the robots with stochastic control policies such that the robot population in each state evolves according to a mean-field model, which is independent of the number and identities of the robots. Using this model, the control policies can be designed to stabilize the swarm to the target distribution. To avoid the need to reprogram the robots for different target distributions, the robot control policies can be defined to depend only on the presence of a “leader” agent, whose control policy is designed to guide the swarm to a particular distribution. This dissertation presents a novel deep reinforcement learning (deep RL) approach to designing control policies that redistribute a swarm as quickly as possible over a strongly connected graph, according to a mean-field model in the form of the discrete-time Kolmogorov forward equation. In the leader-based strategies, the leader determines its next action based on its observations of robot populations and shepherds the swarm over the graph by probabilistically repelling nearby robots. The scalability of this approach with the swarm size is demonstrated with leader control policies that are designed using two tabular Temporal-Difference learning algorithms, trained on a discretization of the swarm distribution. To improve the scalability of the approach with robot population and graph size, control policies for both leader-based and leaderless strategies are designed using an actor-critic deep RL method that is trained on the swarm distribution predicted by the mean-field model. In the leaderless strategy, the robots’ control policies depend only on their local measurements of nearby robot populations. The control approaches are validated for different graph and swarm sizes in numerical simulations, 3D robot simulations, and experiments on a multi-robot testbed.
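As a point of reference, the mean-field update named above is the discrete-time Kolmogorov forward equation for a Markov chain; a minimal numerical sketch follows, with the transition matrix and graph size chosen only for illustration.

```python
# Minimal sketch (not the dissertation's code): the discrete-time Kolmogorov
# forward equation driving the mean-field model. P is the robots' row-stochastic
# state-transition matrix on the graph; x is the expected swarm fraction per state.
import numpy as np

def forward_step(x, P):
    """x' = P^T x for a row-stochastic transition matrix P."""
    assert np.allclose(P.sum(axis=1), 1.0), "P must be row-stochastic"
    return P.T @ x

# Example: 3 graph vertices under a fixed stochastic control policy.
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
x = np.array([1.0, 0.0, 0.0])          # whole swarm starts at vertex 0
for _ in range(50):
    x = forward_step(x, P)
print(x)                               # approaches the stationary distribution of P
```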
Date Created
2021

A Multi-Scale, Component-Based, Composable Cellular Automata Modeling and Simulation Framework

Description
Multi-scale, heterogeneous modeling is well known to be central to capturing the complexities of natural and built systems. Therefore, whole models whose parts operate at different spatiotemporal scales are preferred to tightly integrated models specified with a monolithic modeling approach. To build simulation frameworks that are expressive and flexible, model composability is crucial: a whole model's structure and behavioral traits must be concisely specified in terms of those of its parts and their interactions. To study spatiotemporal model composability, a breast cancer cell chemotaxis exemplar is used. In breast cancer biology, CXCR4+ and CXCR7+ receptor cells and CXCL12-secreting cells are implicated in the spread of normal and malignant cells. As discrete entities, these can be modeled using Agent-Based Modeling (ABM). Receptor–ligand binding, together with chemokine diffusion, regulates the gradient that drives the cells' movement. These continuous processes can be modeled with Ordinary Differential Equations (ODE) and Partial Differential Equations (PDE). A customized, text-based BrSimulator exists to model and simulate this kind of breast cancer phenomenon. To build a multi-scale, spatiotemporal simulation framework supporting model composability, this research proposes using composable cellular automata (CCA) modeling. Toward this goal, the Cellular Automata DEVS (CA-DEVS) model is used, and the novel Composable Cellular Automata DEVS (CCA-DEVS) modeling is proposed. The DEVS-Suite simulator is extended to support CA and CCA Parallel DEVS models. This simulator introduces new capabilities for controlled and modular run-time animation and superdense time trajectory visualization. Furthermore, this research proposes using the Knowledge Interchange Broker (KIB) approach to model and simulate the interactions between separate geo-referenced CCA models developed using the DEVS and Modelica modeling languages. To demonstrate the proposed model composability approach and its use in the extended DEVS-Suite simulator, the breast cancer cell chemotaxis exemplar, among others, is studied. The BrSimulator is used as a proxy for evaluating the proposed model composability approach using an integrated DEVS-Suite and OpenModelica simulator. Simulation experiments are developed that show the composition of spatiotemporal ABM, ODE, and PDE models reproduces the behaviors of the same model developed in the BrSimulator.
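As a loose, language-agnostic illustration of the composability idea (the actual framework extends DEVS-Suite and CA-DEVS, with Modelica models coupled through a KIB), the sketch below couples a discrete agent rule with a continuous chemokine update through a narrow cell interface; everything here is illustrative.

```python
# Illustrative sketch only: each cell composes a discrete ABM-like state with
# a continuous ODE/PDE-like chemokine field, updated by separate components.
from dataclasses import dataclass

@dataclass
class Cell:
    agents: int        # discrete state (e.g., number of receptor-bearing cells)
    chemokine: float   # continuous state (local ligand concentration)

def continuous_step(cells, diffusion, decay, dt):
    """Explicit Euler step for 1-D chemokine diffusion and decay."""
    c = [cell.chemokine for cell in cells]
    for i, cell in enumerate(cells):
        left = c[i - 1] if i > 0 else c[i]
        right = c[i + 1] if i < len(cells) - 1 else c[i]
        laplacian = left - 2 * c[i] + right
        cell.chemokine += dt * (diffusion * laplacian - decay * c[i])

def discrete_step(cells):
    """One agent per cell moves toward a higher chemokine concentration."""
    moves = [i for i in range(len(cells) - 1)
             if cells[i].agents > 0 and cells[i + 1].chemokine > cells[i].chemokine]
    for i in moves:
        cells[i].agents -= 1
        cells[i + 1].agents += 1
```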
Date Created
2021

Optimization Models and Algorithms for Wildlife Corridor and Reserve Design in Conservation Planning

Description
Biodiversity has been declining over the last decades due to habitat loss, landscape deterioration, environmental change, and human-related activities. In addition to its economic and cultural value, biodiversity plays an important role in keeping an environment’s ecosystem in balance. Disrupting such processes can reduce the provision of natural resources such as food and water, which in turn poses a direct threat to human health. Protecting and restoring natural areas is fundamental to preserving biodiversity and mitigating the effects of ongoing environmental change. Unfortunately, it is impossible to protect every critical area due to resource limitations, requiring the use of advanced decision tools for the design of conservation plans. This dissertation studies three problems on the design of wildlife corridors and reserves that include patch-specific conservation decisions under spatial, operational, ecological, and biological requirements. In addition to the ecological impact of each problem’s solution, this dissertation contributes a set of formulations, valid inequalities, and pre-processing and solution algorithms for optimization problems with spatial requirements. The first problem is a utility-based corridor design problem to connect fragmented habitats, where each patch has a utility value reflecting its quality. The corridor must satisfy geometry requirements such as connectivity and a minimum width. We propose a mixed-integer programming (MIP) model to maximize the total utility of the corridor under the given geometry requirements as well as a budget constraint to reflect the acquisition (or restoration) cost of the selected patches. To overcome the computational difficulty when solving large-scale instances, we develop multiple acceleration techniques, including a branch-and-cut algorithm enhanced with problem-specific valid inequalities and a bound-improving heuristic triggered at each integer node in the branch-and-bound exploration. We test the proposed model and solution algorithm using large-scale fabricated instances and a real case study for the design of an ecological corridor for the Florida Panther. Our modeling framework is able to solve instances of up to 1500 patches within 2 hours, either to optimality or with a small optimality gap. The second problem introduces species movement across the fragmented landscape into the corridor design problem. The premise is that dispersal dynamics, if available, must inform the design to account for the corridor’s usage by the species. To this end, we propose a spatial discrete-time absorbing Markov chain (DTMC) approach to represent species dispersal and develop short- and long-term landscape usage metrics. We explore two different types of design problems: open and closed corridors. An open corridor is a sequence of landscape patches used by the species to disperse out of a habitat. For this case, we devise a dynamic programming algorithm that implicitly enumerates possible corridors and finds the one of maximum probability. The second variant is to find a closed corridor of maximum probability that connects two fragmented habitats. To solve this variant, we extend the framework from the utility-based corridor design problem by blending the recursive Markov chain equations with a nonlinear network flow formulation. The third problem leverages the DTMC approach to explore a reserve design problem with spatial requirements such as connectivity and compactness. We approximate compactness using the concept of a maximum reserve diameter, i.e., the largest distance allowed between two patches in the reserve. To solve this problem, we devise a two-stage approach that balances the trade-off between reserve usage probability and compactness. The first-stage problem is to detect a subset of patches of maximum usage probability, while the second-stage problem imposes the geometry requirements on the optimal solution obtained from the first stage. To overcome the computational difficulty of large-scale landscapes, we develop tailored solution algorithms, including a warm-up heuristic to initialize the branch-and-bound exploration, problem-specific valid inequalities, and a decomposition strategy that sequentially solves smaller problems on landscape partitions.
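To illustrate the DTMC machinery referenced above, the sketch below computes the standard absorbing-chain quantities (fundamental matrix and absorption probabilities) for a toy landscape; the matrices are illustrative and not taken from the dissertation.

```python
# Minimal sketch: long-term usage metrics for an absorbing discrete-time Markov
# chain. Q holds transition probabilities among transient (non-habitat) patches
# and R the probabilities of absorption into habitat patches; each row of
# [Q | R] sums to one. Values are illustrative only.
import numpy as np

Q = np.array([[0.2, 0.5, 0.1],
              [0.3, 0.2, 0.3],
              [0.1, 0.4, 0.2]])
R = np.array([[0.2, 0.0],
              [0.0, 0.2],
              [0.1, 0.2]])

N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix
B = N @ R                                   # absorption probabilities

print(N)   # N[i, j]: expected visits to transient patch j starting from patch i
print(B)   # B[i, k]: probability of reaching habitat k starting from patch i
```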
Date Created
2021

Tragedy Plus Time: Capturing Human Unintended Activities from Weakly-Labeled Videos

Description
In videos that contain actions performed unintentionally, agents do not achieve their desired goals. In such videos, it is challenging for computer vision systems to understand high-level concepts such as goal-directed behavior. On the other hand, from a very early age, humans are able to understand the relation between an agent and their ultimate goal even if the action gets disrupted or unintentional effects occur. Inculcating this ability in artificially intelligent agents would make them better social learners by not just learning from their own mistakes, i.e., reinforcement learning, but also learning from others' mistakes. For example, this could greatly reduce the search space for artificially intelligent agents when finding the correct action sequence for a new goal, since they would be able to learn from others what not to do as well as how and when actions result in undesired outcomes. To evaluate the ability of deep learning models to perform this task, the Weakly Augmented Oops (W-Oops) dataset is proposed, built upon the Oops dataset. W-Oops consists of 2,100 unintentional human action videos, with 44 goal-directed and 33 unintentional video-level activity labels collected through human annotations. Inspired by previous methods on tasks such as weakly supervised action localization, which show promise for achieving good localization results without ground-truth segment annotations, this paper proposes a weakly supervised algorithm for localizing both the goal-directed and the unintentional temporal regions of a video using only video-level labels. In particular, an attention-based strategy is employed that predicts the temporal regions that contribute the most to a classification task, leveraging solely video-level labels. Meanwhile, the proposed overlap regularization allows the model to focus on distinct portions of the video when inferring the goal-directed and unintentional activities, while guaranteeing their temporal ordering. Extensive quantitative experiments verify the validity of the localization method.
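As a rough illustration of the attention-pooling idea (not the paper's architecture), the sketch below shows how per-segment attention weights can both drive video-level classification and penalize overlap between the goal-directed and unintentional heads; all dimensions and names are assumptions.

```python
# Minimal sketch: attention-weighted temporal pooling for weakly supervised
# localization, with an overlap penalty between the two attention heads.
import torch
import torch.nn as nn

class WeakTemporalLocalizer(nn.Module):
    def __init__(self, feat_dim, n_goal_classes, n_unint_classes):
        super().__init__()
        self.att_goal = nn.Linear(feat_dim, 1)     # attention over segments
        self.att_unint = nn.Linear(feat_dim, 1)
        self.cls_goal = nn.Linear(feat_dim, n_goal_classes)
        self.cls_unint = nn.Linear(feat_dim, n_unint_classes)

    def forward(self, feats):
        # feats: (batch, n_segments, feat_dim) per-segment video features
        a_g = torch.softmax(self.att_goal(feats), dim=1)    # (B, T, 1)
        a_u = torch.softmax(self.att_unint(feats), dim=1)
        pooled_g = (a_g * feats).sum(dim=1)                 # attention pooling
        pooled_u = (a_u * feats).sum(dim=1)
        # Overlap term: high when both heads attend to the same segments.
        overlap = (a_g.squeeze(-1) * a_u.squeeze(-1)).sum(dim=1).mean()
        return self.cls_goal(pooled_g), self.cls_unint(pooled_u), overlap

# Training would use only video-level labels: cross-entropy on the two logits
# plus a weighted overlap penalty; the attention weights then localize each activity.
```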
Date Created
2021

Developing a Machine Learning Framework for Student Persistence Prediction

Description
Student retention is a critical metric for many universities whose intention is to support student success. The goal of this thesis is to create retention models utilizing machine learning (ML) techniques. The factors explored in this research include only those known during the admissions process. These models have two goals: first, to correctly predict as many non-returning students as possible while minimizing the number of students who are falsely predicted as non-returning; and second, to identify important features in student retention and provide a practical explanation for a student's decision to no longer persist. The models are then used to provide outreach to students who need more support. The findings of this research indicate that the top-performing model is AdaBoost, which predicts non-returning students with an accuracy of 54 percent.
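A minimal sketch of such a pipeline, assuming a hypothetical admissions dataset and illustrative column names (not the thesis's actual features or code), might look like this:

```python
# Sketch only: an AdaBoost retention classifier on admissions-time features.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("admissions.csv")        # hypothetical admissions-time data
X = df.drop(columns=["returned"])         # only features known at admission
y = df["returned"]                        # 1 = returned, 0 = did not return

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = AdaBoostClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
# Feature importances help explain which admission-time factors drive retention.
print(sorted(zip(model.feature_importances_, X.columns), reverse=True)[:10])
```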
Date Created
2021

Collaborating in Motion: Distributed and Stochastic Algorithms for Emergent Behavior in Programmable Matter

Description
The world is filled with systems of entities that collaborate in motion, both natural and engineered. These cooperative distributed systems are capable of sophisticated emergent behavior arising from the comparatively simple interactions of their members. A model system for emergent collective behavior is programmable matter, a physical substance capable of autonomously changing its properties in response to user input or environmental stimuli. This dissertation studies distributed and stochastic algorithms that control the local behaviors of individual modules of programmable matter to induce complex collective behavior at the macroscale. It consists of four parts. In the first, the canonical amoebot model of programmable matter is proposed. A key goal of this model is to bring algorithmic theory closer to the physical realities of programmable matter hardware, especially with respect to concurrency and energy distribution. Two protocols are presented that together extend sequential, energy-agnostic algorithms to the more realistic concurrent, energy-constrained setting without sacrificing correctness, assuming the original algorithms satisfy certain conventions. In the second part, stateful distributed algorithms using amoebot memory and communication are presented for leader election, object coating, convex hull formation, and hexagon formation. The first three algorithms are proven to have linear runtimes when assuming a simplified sequential setting. The final algorithm for hexagon formation is instead proven to be correct under unfair asynchronous adversarial activation, the most general of all adversarial activation models. In the third part, distributed algorithms are combined with ideas from statistical physics and Markov chain design to replace algorithm reliance on memory and communication with biased random decisions, gaining inherent self-stabilizing and fault-tolerant properties. Using this stochastic approach, algorithms for compression, shortcut bridging, and separation are designed and analyzed. Finally, a two-pronged approach to "programming" physical ensembles is presented. This approach leverages the physics of local interactions to pair theoretical abstractions of self-organizing particle systems with experimental robot systems of active granular matter that intentionally lack digital computation and communication. By physically embodying the salient features of an algorithm in robot design, the algorithm's theoretical analysis can predict the robot ensemble's behavior. This approach is applied to phototaxing, aggregation, dispersion, and object transport.
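As a flavor of the stochastic approach (simplified from the actual compression Markov chain), a biased move-acceptance rule might look like the sketch below; the bias parameter and neighbor counting are illustrative, not the dissertation's exact chain.

```python
# Illustrative sketch: a Metropolis-style biased random decision. A module
# accepts a proposed move with probability lam ** (gain in neighboring modules),
# so a bias lam > 1 favors configurations with more local contacts (compression).
import random

def maybe_move(neighbors_here, neighbors_there, lam, rng=random):
    """Return True if the proposed move to an adjacent node is accepted."""
    gain = neighbors_there - neighbors_here
    accept_prob = min(1.0, lam ** gain)
    return rng.random() < accept_prob

# With lam = 4, a move that loses two neighbors is accepted with probability 1/16,
# while any move that gains neighbors is always accepted.
print(maybe_move(neighbors_here=3, neighbors_there=1, lam=4.0))
```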
Date Created
2021