Expansion Algorithms in Self-Organizing Particle Systems

Description

A primary goal in computer science is to develop autonomous systems. Usually, we provide computers with tasks and rules for completing those tasks, but what if we could extend this type of system to physical technology as well? In the field of programmable matter, researchers are tasked with developing synthetic materials that can change their physical properties, such as color, density, and even shape, based on predefined rules or continuous, autonomous collection of input. In this research, we are most interested in particles that can perform computations, bond with other particles, and move. In this paper, we provide a theoretical particle model that can be used to simulate the performance of such physical particle systems, as well as an algorithm to perform expansion, wherein these particles can be used to enclose spaces or even objects.
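
As a rough sketch of the kind of local reasoning such a particle model allows (the coordinates, names, and rule below are illustrative and not taken from the paper), consider particles on a triangular lattice that can each inspect only their six adjacent positions, for example to decide whether they lie on the boundary of the system:

    from typing import List, Set, Tuple

    # Axial coordinates on a triangular lattice: every position has six neighbors.
    NEIGHBOR_DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

    def neighbors(pos: Tuple[int, int]) -> List[Tuple[int, int]]:
        """Return the six lattice positions adjacent to pos."""
        return [(pos[0] + dx, pos[1] + dy) for dx, dy in NEIGHBOR_DIRS]

    def on_boundary(pos: Tuple[int, int], occupied: Set[Tuple[int, int]]) -> bool:
        """A particle is on the system boundary if any adjacent position is empty.

        The check uses only the particle's immediate neighborhood, matching the
        local-information assumption of the model."""
        return any(n not in occupied for n in neighbors(pos))

    occupied = {(0, 0), (1, 0), (0, 1), (1, -1)}
    print(sorted(p for p in occupied if on_boundary(p, occupied)))
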
Date Created
2015-05

Compression in Self-Organizing Particle Systems

Description

Many programmable matter systems have been proposed and realized recently, each often tailored toward a particular task or physical setting. In our work on self-organizing particle systems, we abstract away from specific settings and instead describe programmable matter as a collection of simple computational elements (to be referred to as particles) with limited computational power that each perform fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In this thesis, we focus on the compression problem, in which the particle system gathers as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. While there are many ways to formalize what it means for a particle system to be compressed, we address three different notions of compression: (1) local compression, in which each individual particle utilizes local rules to create an overall convex structure containing no holes, (2) hole elimination, in which the particle system seeks to detect and eliminate any holes it contains, and (3) alpha-compression, in which the particle system seeks to shrink its perimeter to be within a constant factor of the minimum possible value. We analyze the behavior of each of these algorithms, examining correctness and convergence where appropriate. In the case of the Markov Chain Algorithm for Compression, we provide improvements to the original bounds for the bias parameter lambda which influences the system to either compress or expand. Lastly, we briefly discuss contributions to the problem of leader election--in which a particle system elects a single leader--since it acts as an important prerequisite for compression algorithms that use a predetermined seed particle.
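
To give a feel for how the bias parameter lambda drives such a Markov chain, the sketch below shows a Metropolis-style acceptance step in which candidate moves are weighted by lambda raised to the change in the number of neighbor edges; this is a simplified illustration under assumptions of this note, not the full algorithm, which also enforces local connectivity conditions:

    import random

    def accept_move(edges_before: int, edges_after: int, lam: float) -> bool:
        """Metropolis-style acceptance for a candidate particle move.

        lam > 1 biases the chain toward configurations with more neighbor edges
        (compression), while lam < 1 biases it toward fewer edges (expansion)."""
        ratio = lam ** (edges_after - edges_before)
        return random.random() < min(1.0, ratio)

    # With lam = 4, a move gaining edges is always accepted, while a move losing
    # two edges is accepted with probability 4 ** -2 = 1/16.
    print(min(1.0, 4.0 ** (2 - 0)), min(1.0, 4.0 ** (1 - 3)))
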
Date Created
2016-05

Algorithmic Foundations of Self-Organizing Programmable Matter

Description

Imagine that we have a piece of matter that can change its physical properties like its shape, density, conductivity, or color in a programmable fashion based on either user input or autonomous sensing. This is the vision behind what is commonly known as programmable matter. Envisioning systems of nano-sensor devices, programmable matter consists of systems of simple computational elements, called particles, that can establish and release bonds, compute, and can actively move in a self-organized way. In this dissertation, the feasibility of solving fundamental problems relevant for programmable matter is investigated. As a model for such self-organizing particle systems (SOPS), the geometric amoebot model is introduced. In this model, particles only have local information and have modest computational power. They achieve locomotion by expanding and contracting, which resembles the behavior of an amoeba. Under this model, efficient local-control algorithms for the leader election problem in SOPS are presented. As a central problem for programmable matter, shape formation problems are then studied. The limitations of solving the leader election problem and the shape formation problem on a more general version of the amoebot model are also discussed. The "smart paint" problem is also studied, which aims at having the particles self-organize in order to uniformly coat the surface of an object of arbitrary shape and size, forming multiple coating layers if necessary. A Universal Coating algorithm is presented and shown to be asymptotically worst-case optimal both in terms of time (with high probability) and work. In particular, the algorithm always terminates within a linear number of rounds with high probability. A linear lower bound on the competitive gap between fully local coating algorithms and coating algorithms that rely on global information is presented, which implies that the proposed algorithm is also optimal in a competitive sense. Simulation results show that the competitive ratio of the proposed algorithm may be better than linear in practice. The developed algorithms utilize only local control, require only constant-size memory per particle, and are asymptotically optimal in terms of the total number of particle movements needed to reach the desired shape configuration.
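
A toy sketch of the expand/contract locomotion primitive is shown below (hypothetical class and method names; the actual geometric amoebot model also involves bonds, ports, and handover operations that are omitted here):

    from typing import Optional, Set, Tuple

    class Amoebot:
        """Toy amoebot: contracted on one node, or expanded over two adjacent nodes."""

        def __init__(self, node: Tuple[int, int]):
            self.head = node
            self.tail: Optional[Tuple[int, int]] = None  # None while contracted

        def expand(self, empty_neighbor: Tuple[int, int], occupied: Set) -> bool:
            """Expand the head into an adjacent, unoccupied node."""
            if self.tail is not None or empty_neighbor in occupied:
                return False
            self.tail, self.head = self.head, empty_neighbor
            occupied.add(empty_neighbor)
            return True

        def contract(self, occupied: Set) -> bool:
            """Release the tail node, completing one step of movement."""
            if self.tail is None:
                return False
            occupied.discard(self.tail)
            self.tail = None
            return True

    occupied = {(0, 0)}
    p = Amoebot((0, 0))
    p.expand((1, 0), occupied)
    p.contract(occupied)
    print(p.head, occupied)  # (1, 0) {(1, 0)}
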
Date Created
2017

Optimal resource allocation in social and critical infrastructure networks

Description

We live in a networked world with a multitude of networks, such as communication networks, electric power grid, transportation networks and water distribution networks, all around us. In addition to such physical (infrastructure) networks, recent years have seen tremendous proliferation of social networks, such as Facebook, Twitter, LinkedIn, Instagram, Google+ and others. These powerful social networks are not only used for harnessing revenue from the infrastructure networks, but are also increasingly being used as “non-conventional sensors” for monitoring the infrastructure networks. Accordingly, nowadays, analyses of social and infrastructure networks go hand-in-hand. This dissertation studies resource allocation problems encountered in this set of diverse, heterogeneous, and interdependent networks. Three problems studied in this dissertation are encountered in the physical network domain while the three other problems studied are encountered in the social network domain.

The first problem from the infrastructure network domain relates to a distributed file storage scheme whose goal is to enhance the robustness of data storage by making it tolerant to large-scale, geographically correlated failures. The second problem relates to the placement of relay nodes in a deployment area with multiple sensor nodes, with the goal of augmenting the connectivity of the resulting network while staying within a budget specifying the maximum number of relay nodes that can be deployed. The third problem studied in this dissertation relates to the complex interdependencies that exist between infrastructure networks, such as the power grid and the communication network. The progressive recovery problem in an interdependent network is studied, whose goal is to maximize system utility over time as the recovery of failed entities takes place sequentially.

The three problems studied from the social network domain relate to influence propagation in an adversarial environment and to political sentiment assessment in the various states of a country, with the goal of creating a "political heat map" of the country. In the first problem of the influence propagation domain, the goal of the second player is to restrict the influence of the first player, while in the second problem the goal of the second player is to gain a larger market share with the least amount of initial investment.
Date Created
2016

Stochastic simulation framework for a data mule network in the Amazon delta

Description

This report investigates the improvement in transmission throughput when fountain codes are used in opportunistic data routing for a proposed delay-tolerant network that connects remote and isolated communities in the Amazon region of Brazil to the main city of the area. To extend healthcare services to the remote and isolated communities on the banks of the Amazon river in Brazil, the network [7] utilizes regularly scheduled boats as data mules to carry data from one location to another.

Frequent thunderstorms and rainstorms, the state of the infrastructure, and the harsh geographical terrain all increase the chances of messages not being delivered to their intended destination. These regions have access to medical facilities only through sporadic visits from a medical team from the main city in the region, Belem. With the proposed network, records of routine clinical examinations, such as ultrasounds of pregnant women, could be sent to the doctors in Belem for evaluation.

However, the lack of modern communication infrastructure in these communities, unpredictable boat schedules due to delays and breakdowns, and high transmission failure rates caused by the harsh environment of the region all mandate the design of robust delay-tolerant routing algorithms. The work presented here incorporates the unpredictability of the Amazon riverine scenario into the simulation model, accounting for mechanical failures that delay or strand boats, possible decreases in transmission speed due to rain, and individual packet losses.
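
As a toy illustration of why rateless (fountain) coding helps on such a lossy link, the Monte Carlo sketch below compares the chance of recovering a message with and without redundancy, under the idealized assumption (not the report's model) that any k of the transmitted coded packets suffice to decode a k-packet message:

    import random

    def delivery_prob(k_packets: int, loss_rate: float, overhead: float,
                      trials: int = 10_000) -> float:
        """Estimate the probability that a k-packet message is recoverable.

        The sender transmits k * (1 + overhead) coded packets; with an idealized
        fountain code, the receiver recovers the message from any k of them.
        overhead = 0.0 models sending only the k originals, which then must all
        survive the trip."""
        sent = round(k_packets * (1 + overhead))
        ok = 0
        for _ in range(trials):
            received = sum(random.random() > loss_rate for _ in range(sent))
            ok += received >= k_packets
        return ok / trials

    # Toy numbers: a 50-packet message and 10% per-packet loss on a rainy trip.
    print("no coding:      ", delivery_prob(50, 0.10, 0.0))
    print("20% redundancy: ", delivery_prob(50, 0.10, 0.2))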



Extensive simulation results are presented, to evaluate the proposed approach and to verify that the proposed solution [7] could be used as a viable mode of communication, given the lack of available options in the region. While the simulation results are focused on remote healthcare applications in the Brazilian Amazon, we envision that our approach may also be used for other remote applications, such as distance education, and other similar scenarios.
Date Created
2015

Generating mixed-level covering arrays of lambda = 2 and test prioritization

Description

In software testing, components are tested individually to make sure each performs as expected. The next step is to confirm that two or more components are able to work together. This stage of testing is often difficult because there can be numerous configurations between just two components.

Covering arrays are one way to ensure a set of tests will cover every possible configuration at least once. However, on systems with many settings, it is computationally intensive to run every possible test. Test prioritization methods can identify tests of greater importance. This concept of test prioritization can help determine which tests can be removed with minimal impact to the overall testing of the system.

This thesis presents three algorithms that generate covering arrays that test the interaction of every two components at least twice. These algorithms extend the functionality of an established greedy test prioritization method to ensure important components are selected in earlier tests. The algorithms are tested on various inputs, and the results reveal that, on average, the resulting covering arrays are roughly 40% to 50% smaller than a covering array generated through brute force.
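
A minimal sketch in this spirit (not one of the thesis's algorithms, and without the prioritization weighting of important components) is a greedy loop that keeps adding the candidate test covering the most pairs still short of twofold coverage:

    import random
    from itertools import combinations

    def greedy_covering_array(levels, coverage=2, candidates=50, seed=0):
        """Greedily build tests until every pair of factor settings appears at
        least `coverage` times (lambda = 2). levels[i] is the number of settings
        of factor i."""
        rng = random.Random(seed)
        pairs = list(combinations(range(len(levels)), 2))
        need = {(i, a, j, b): coverage
                for i, j in pairs
                for a in range(levels[i]) for b in range(levels[j])}
        tests = []
        while any(c > 0 for c in need.values()):
            best, best_gain = None, -1
            for _ in range(candidates):
                t = [rng.randrange(n) for n in levels]
                gain = sum(need[(i, t[i], j, t[j])] > 0 for i, j in pairs)
                if gain > best_gain:
                    best, best_gain = t, gain
            if best_gain == 0:
                # Force progress: build a test around some still-needed pair.
                i, a, j, b = next(k for k, c in need.items() if c > 0)
                best = [rng.randrange(n) for n in levels]
                best[i], best[j] = a, b
            for i, j in pairs:
                key = (i, best[i], j, best[j])
                need[key] = max(0, need[key] - 1)
            tests.append(best)
        return tests

    # Four factors with 3, 3, 2, and 2 settings; brute force would need 36 tests.
    print(len(greedy_covering_array([3, 3, 2, 2])))
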
Date Created
2015

Analyzing the effects of Bollinger bands on the probability of stock options using support vector machines

Description


The purpose of this research is to efficiently analyze certain data provided and to see if a useful trend can be observed as a result. This trend can be used to analyze certain probabilities. There are three main pieces of data being analyzed in this research: the value of δ for the call and put options, the %B value of the stock, and the amount of time until expiration of the stock option. The %B value is the most important. The purpose of analyzing the data is to see the relationship between the variables and, given certain values, the probability that the trade makes money. This result will be used in finding the probability that certain trades make money over a period of time.

Since options are so dependent on probability, this research specifically analyzes stock options rather than stocks themselves. Stock options have value like stocks except options are leveraged. The most common model used to calculate the value of an option is the Black-Scholes Model [1]. There are five main variables the Black-Scholes Model uses to calculate the overall value of an option. These variables are θ, δ, γ, v, and ρ. The variable, θ is the rate of change in price of the option due to time decay, δ is the rate of change of the option’s price due to the stock’s changing value, γ is the rate of change of δ, v represents the rate of change of the value of the option in relation to the stock’s volatility, and ρ represents the rate of change in value of the option in relation to the interest rate [2]. In this research, the %B value of the stock is analyzed along with the time until expiration of the option. All options have the same δ. This is due to the fact that all the options analyzed in this experiment are less than two months from expiration and the value of δ reveals how far in or out of the money an option is.
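
For reference, the textbook (no-dividend) Black-Scholes delta can be computed as in the short sketch below; the sample numbers are made up and nothing here is specific to the thesis:

    from math import log, sqrt
    from statistics import NormalDist

    def bs_delta(spot, strike, rate, vol, t_years, call=True):
        """Black-Scholes delta: sensitivity of the option price to the stock price.

        d1 = (ln(S/K) + (r + sigma^2/2) * T) / (sigma * sqrt(T));
        call delta = N(d1), put delta = N(d1) - 1 (no dividends assumed)."""
        d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
        nd1 = NormalDist().cdf(d1)
        return nd1 if call else nd1 - 1.0

    # An at-the-money call about a month from expiration has delta near 0.5.
    print(round(bs_delta(spot=100, strike=100, rate=0.01, vol=0.25, t_years=30 / 365), 3))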

The machine learning technique used to analyze the data and the probability is support vector machines. Support vector machines analyze data that can be classified into one of two or more groups and attempt to find a pattern in the data in order to develop a model which reliably classifies similar, future data into the correct group. This is used to analyze the outcome of stock options.
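
The two inputs discussed above, %B and time until expiration, could be fed to a support vector machine roughly as in the sketch below; the data and labeling rule are synthetic stand-ins (the actual study uses historical option outcomes), and scikit-learn is assumed to be available:

    import numpy as np
    from sklearn.svm import SVC

    def percent_b(closes: np.ndarray, window: int = 20, k: float = 2.0) -> float:
        """%B of the latest close relative to its Bollinger bands:
        (close - lower) / (upper - lower), with bands = SMA +/- k * std."""
        w = closes[-window:]
        mid, std = w.mean(), w.std()
        upper, lower = mid + k * std, mid - k * std
        return (closes[-1] - lower) / (upper - lower)

    # Synthetic training set: rows are (%B, days until expiration); labels are
    # 1 if the trade made money and 0 otherwise (stand-in rule, illustration only).
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(0, 1, 200), rng.integers(1, 60, 200)])
    y = (X[:, 0] < 0.5).astype(int)

    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    print(clf.predict_proba([[0.3, 21]]))  # [P(loss), P(profit)] for one setup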

Date Created
2015

Economical aspects of resource allocation under discounts

Description

Resource allocation is one of the most challenging issues policy decision makers must address. The objective of this thesis is to explore resource allocation from an economical perspective, i.e., how to purchase resources in order to satisfy customers' requests. In this thesis, we attempt to answer the question: when and how should resources be bought to fulfill customers' demands at minimum cost?

The first topic studied in this thesis is resource allocation in cloud networks. Cloud computing heralded an era where resources (such as computation and storage) can be scaled up and down elastically and on demand. This flexibility is attractive for its cost effectiveness: the cloud resource price depends on the actual utilization over time. This thesis studies two critical problems in cloud networks, focusing on the economical aspects of resource allocation in cloud/virtual networks, and proposes six algorithms to address the resource allocation problems for different discount models. The first problem addresses a scenario where the virtual network provider offers different contracts to the service provider. Four algorithms for resource contract migration are proposed under two pricing models: Pay-as-You-Come and Pay-as-You-Go. The second problem explores a scenario where a cloud provider offers k contracts, each with its own duration and rate, and a customer buys these contracts in order to satisfy its resource demand. This work shows that this problem can be seen as a 2-dimensional generalization of the classic online parking permit problem, and presents a k-competitive online algorithm and an optimal online algorithm.

The second topic studied in this thesis explores how resource allocation and purchasing strategies work in our daily life. For example, is it worth buying a yoga pass that costs USD 100 for ten entries, even though it will expire at the end of this year? Decisions like these are part of our daily life, yet not much is known today about good online strategies for buying discount vouchers with expiration dates. This work hence introduces the Discount Voucher Purchase Problem (DVPP). It aims to optimize strategies for buying discount vouchers, i.e., coupons, vouchers, and groupons that are valid only during a certain time period. The DVPP comes in three flavors: (1) Once Expire Lose Everything (OELE): vouchers lose their entire value after expiration. (2) Once Expire Lose Discount (OELD): vouchers lose their discount value after expiration. (3) Limited Purchasing Window (LPW): vouchers have the property of OELE and can only be bought during a certain time window.

This work explores online algorithms with a provable competitive ratio against a clairvoyant offline algorithm, even in the worst case. In particular, this work makes the following contributions: we present a 4-competitive algorithm for OELE, an 8-competitive algorithm for OELD, and a lower bound for LPW. We also present an optimal offline algorithm for OELE and LPW, and show it is a 2-approximation solution for OELD.
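
The flavor of this kind of online reasoning can be conveyed with a ski-rental-style break-even rule; the sketch below is a simplified illustration for an OELE-like setting (single voucher type, full value lost at expiration) and is not the 4-competitive algorithm analyzed in the thesis:

    def oele_break_even(requests, voucher_price, voucher_units, unit_price, validity):
        """Toy break-even rule for OELE-like vouchers.

        requests: list of (time, units_needed), in time order. Pay per use until
        the money spent inside the current validity window would have paid for a
        voucher, then buy one; expired vouchers lose all remaining value (OELE)."""
        cost, window_start, window_spend, voucher_left = 0.0, None, 0.0, 0
        for t, units in requests:
            if window_start is None or t - window_start >= validity:
                window_start, window_spend, voucher_left = t, 0.0, 0
            if voucher_left >= units:
                voucher_left -= units
                continue
            pay = units * unit_price
            if voucher_left == 0 and window_spend + pay >= voucher_price:
                cost += voucher_price  # break-even reached: buy the voucher
                voucher_left = voucher_units - units
            else:
                cost += pay
                window_spend += pay
        return cost

    # The third request triggers a voucher purchase; the late one falls outside
    # the validity window and is paid per use again.
    print(oele_break_even([(0, 1), (2, 1), (5, 1), (40, 1)], 10.0, 10, 4.0, 30))
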
Date Created
2015

Construction of GCCFG for inter-procedural optimizations in Software Managed Manycore (SMM)

Description

Software Managed Manycore (SMM) architectures, in which each core has only a scratch pad memory (instead of caches), are a promising solution for scaling the memory hierarchy to hundreds of cores. However, in these architectures, the code and data of the tasks mapped to the cores must be explicitly managed in software by the compiler. State-of-the-art compiler techniques for SMM architectures require inter-procedural information and analysis. A call graph of the program does not have enough information, while a Global CFG, i.e., the combination of all the control flow graphs of the program, has too much information and becomes too big. As a result, most new techniques have informally defined and used the GCCFG (Global Call Control Flow Graph) - a whole-program representation which captures the control flow as well as function call information in a succinct way - to perform inter-procedural analysis. However, how to construct it has not been shown yet. We find that for several simple call and control flow graphs, constructing a GCCFG is relatively straightforward, but there are several cases in common applications where unique graph transformations are needed in order to formally and correctly construct the GCCFG. This paper fills this gap and develops graph transformations to allow the construction of the GCCFG in (almost) all cases. Our experiments show that by using the succinct representation (GCCFG) rather than the elaborate representation (GlobalCFG), the compilation time of a state-of-the-art code management technique [4] can be improved by an average of 5X, and that of stack management [20] can be improved by an average of 4X.
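
As a rough, hypothetical illustration of the kind of whole-program structure involved (far simpler than the formal GCCFG construction, which must handle loops, recursion, and conditional calls), the sketch below splices each callee's entry and exit nodes into its caller's control flow:

    from collections import defaultdict

    # Toy per-function CFGs: an entry node, an exit node, intra-procedural edges,
    # and a map from call-site nodes to the functions they call.
    cfgs = {
        "main": {"entry": "m0", "exit": "m2",
                 "edges": [("m0", "m1"), ("m1", "m2")],
                 "calls": {"m1": "foo"}},          # node m1 calls foo
        "foo":  {"entry": "f0", "exit": "f1",
                 "edges": [("f0", "f1")],
                 "calls": {}},
    }

    def build_global_graph(cfgs):
        """Replace each intra-procedural edge leaving a call site with a call edge
        into the callee and a return edge from the callee's exit."""
        g = defaultdict(set)
        for cfg in cfgs.values():
            for u, v in cfg["edges"]:
                if u in cfg["calls"]:
                    callee = cfgs[cfg["calls"][u]]
                    g[u].add(callee["entry"])      # call edge
                    g[callee["exit"]].add(v)       # return edge to the successor
                else:
                    g[u].add(v)
        return {n: sorted(s) for n, s in g.items()}

    print(build_global_graph(cfgs))
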
Date Created
2014

Resource allocation in communication and social networks

Description

As networks are playing an increasingly prominent role in different aspects of our lives, there is a growing awareness that improving their performance is of significant importance. In order to enhance the performance of networks, it is essential that scarce networking resources be allocated smartly to match the continuously changing network environment. This dissertation focuses on two different kinds of networks - communication and social - and studies resource allocation problems in these networks. The study on communication networks is further divided into different networking technologies - wired and wireless, optical and mobile, airborne and terrestrial. Since nodes in an airborne network (AN) are heterogeneous and mobile, the design of a reliable and robust AN is highly complex. The dissertation studies connectivity and fault-tolerance issues in ANs and proposes algorithms to compute the critical transmission range in fault-free, faulty, and delay-tolerant scenarios. Just as in the case of ANs, power optimization and fault tolerance are important issues in wireless sensor networks (WSNs). In a WSN, a tree structure is often used to deliver sensor data to a sink node. In a tree, failure of a node may disconnect the tree. The dissertation investigates the problem of enhancing the fault tolerance capability of data gathering trees in WSNs. The advent of OFDM technology provides an opportunity for efficient resource utilization in optical networks and also introduces a set of novel problems, such as the routing and spectrum allocation (RSA) problem. This dissertation proves that the RSA problem is NP-complete even when the network topology is a chain, and proposes approximation algorithms. In the domain of social networks, the focus of this dissertation is the study of influence propagation in the presence of active adversaries. In a social network, multiple vendors may attempt to influence the nodes in a competitive fashion. This dissertation investigates the scenario where the first vendor has already chosen a set of nodes and the second vendor, with knowledge of the choice of the first, attempts to identify a smallest set of nodes so that after the influence propagation, the second vendor's market share is larger than the first's.
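
For the fault-free case, one standard characterization is that the critical transmission range equals the longest edge of the Euclidean minimum spanning tree over the node positions; the sketch below computes it that way (an illustration of this observation only, not the dissertation's algorithms for the faulty or delay-tolerant scenarios):

    from math import dist

    def critical_transmission_range(points):
        """Smallest common radius that keeps the network connected: the longest
        edge of the Euclidean minimum spanning tree (Prim's algorithm)."""
        best = {i: dist(points[0], points[i]) for i in range(1, len(points))}
        longest = 0.0
        while best:
            j = min(best, key=best.get)          # next node to attach to the tree
            longest = max(longest, best.pop(j))
            for i in best:
                best[i] = min(best[i], dist(points[j], points[i]))
        return longest

    # Toy deployment of five nodes (coordinates are made up).
    print(critical_transmission_range([(0, 0), (1, 0), (2, 1), (5, 1), (5, 4)]))
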
Date Created
2014