On feature selection stability: a data perspective

Description
The rapid growth of high-throughput technologies over the last few decades has made manual processing of the generated data impracticable. Even worse, machine learning and data mining techniques can appear paralyzed when faced with these massive datasets. High-dimensionality is one of the most common challenges for machine learning and data mining tasks. Feature selection aims to reduce dimensionality by selecting a small subset of the features that performs at least as well as the full feature set. Generally, learning performance, e.g., classification accuracy, and algorithm complexity are used to measure the quality of a feature selection algorithm. Recently, the stability of feature selection algorithms has gained increasing attention as an additional indicator, reflecting the need to select similar subsets of features each time the algorithm is run on the same dataset, even in the presence of a small amount of perturbation. To address the selection stability issue, we must first understand the causes of instability. In this dissertation, we investigate the causes of instability in high-dimensional datasets using well-known feature selection algorithms. We find that stability is mostly data-dependent. Based on these findings, we propose a framework that improves selection stability by addressing these main causes. In particular, we find that data noise greatly impacts both stability and learning performance, so we propose to reduce it in order to improve both. However, current noise reduction approaches cannot distinguish between data noise and legitimate variation among samples from different classes. We overcome this limitation with Supervised noise reduction via Low Rank Matrix Approximation (SLRMA). The proposed framework proves successful on different types of high-dimensional datasets, such as microarray and image datasets. However, this framework cannot handle unlabeled data; hence, we propose Local SVD to overcome this limitation.
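
To make the low-rank denoising step concrete, here is a minimal Python sketch in the spirit of SLRMA, under the assumption that applying a truncated SVD within each class is what preserves between-class variation; the per-class scheme and the `rank` parameter are illustrative, not the dissertation's exact formulation.

```python
import numpy as np

def lowrank_denoise(X, rank):
    """Best rank-k approximation of X via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def supervised_lowrank_denoise(X, y, rank=5):
    """Denoise each class separately so between-class variation is preserved."""
    X_hat = X.astype(float).copy()
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        k = min(rank, len(idx), X.shape[1])  # rank cannot exceed matrix dims
        X_hat[idx] = lowrank_denoise(X[idx], k)
    return X_hat
```
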
Date Created
2013

Design, analysis and resource allocations in networks in presence of region-based faults

Description
Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail. Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real-life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In the presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially-correlated faults are no longer applicable. To this end, the main thrust of this research is the design and analysis of robust networks in the presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected by a region-based fault, this knowledge can be effectively utilized for resource-efficient network design. It is shown in this dissertation that, in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks is studied, and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), are proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks.
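
The region-based metrics admit a compact illustration. The sketch below (a toy, using networkx, with the simplifying assumption that candidate circular fault regions are centered only at node locations) computes the worst-case largest surviving component over all such regions, i.e., the RBLCS flavor of the analysis.

```python
import networkx as nx

def rblcs(G, pos, radius):
    """Worst-case largest surviving component over circular fault regions.

    G: undirected graph; pos: dict mapping node -> (x, y); radius: fault radius.
    Only regions centered at nodes are checked here, a simplifying assumption.
    """
    worst = G.number_of_nodes()
    for center in G.nodes():
        cx, cy = pos[center]
        region = [v for v in G.nodes()
                  if (pos[v][0] - cx) ** 2 + (pos[v][1] - cy) ** 2 <= radius ** 2]
        H = G.copy()
        H.remove_nodes_from(region)
        largest = max((len(c) for c in nx.connected_components(H)), default=0)
        worst = min(worst, largest)
    return worst
```
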
Date Created
2013

Optimization for resource-constrained wireless networks

Description
Nowadays, wireless communications and networks are widely used in our daily lives. One of the most important topics in networking research is using optimization tools to improve the utilization of network resources. In this dissertation, we concentrate on optimization for resource-constrained wireless networks and study two fundamental resource-allocation problems: 1) distributed routing optimization and 2) anypath routing optimization. The study of the distributed routing optimization problem is composed of two main thrusts, targeted at understanding distributed routing and resource optimization for multihop wireless networks. The first thrust is dedicated to understanding the impact of full-duplex transmission on wireless network resource optimization. We propose two provably good distributed algorithms to optimize the resources in a full-duplex wireless network. We prove their optimality and also provide network status analysis using dual-space information. The second thrust is dedicated to understanding the influence of network-entity load constraints on network resource allocation and routing computation. We propose a provably good distributed algorithm to allocate wireless resources. In addition, we propose a new subgradient optimization framework, which provides fine-grained convergence, optimality, and dual-space information at each iteration. This framework can serve as a useful theoretical foundation for many network optimization problems. The study of the anypath routing optimization problem is composed of two main thrusts. The first thrust is dedicated to understanding the computational complexity of multi-constrained anypath routing and designing approximate solutions. We prove that this problem is NP-hard when the number of constraints is larger than one. We present two polynomial-time K-approximation algorithms: one centralized and one distributed. For the second thrust, we study directional anypath routing and present a cross-layer design of MAC and routing. For the MAC layer, we present a directional anycast MAC. For the routing layer, we propose two polynomial-time routing algorithms to compute directional anypaths based on two antenna models, and prove their optimality under the packet delivery ratio metric.
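
The dual-subgradient flavor of such distributed algorithms can be sketched on a toy utility-maximization routing problem (a generic textbook-style illustration, not the dissertation's algorithm): each flow picks its rate from the current link prices, and each link updates its price from the capacity violation, which is a subgradient of the dual.

```python
import numpy as np

def dual_subgradient(routes, capacity, iters=500, step=0.01):
    """routes: 0/1 matrix (flows x links); maximize sum(log rate) s.t. capacity."""
    prices = np.zeros(routes.shape[1])
    for _ in range(iters):
        # Flow subproblem: max log(x) - x * path_price  =>  x = 1 / path_price.
        path_price = routes @ prices
        rates = 1.0 / np.maximum(path_price, 1e-6)
        # Price (dual) update: subgradient is load minus capacity, projected >= 0.
        load = routes.T @ rates
        prices = np.maximum(prices + step * (load - capacity), 0.0)
    return rates, prices

# Example: flow 0 uses link 0; flow 1 uses links 0 and 1, both of capacity 1.
rates, prices = dual_subgradient(np.array([[1, 0], [1, 1]]), np.ones(2))
```
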
Date Created
2013

Stochastic optimization and real-time scheduling in cyber-physical systems

Description
A principal goal of this dissertation is to study stochastic optimization and real-time scheduling in cyber-physical systems (CPSs), ranging from real-time wireless systems to energy systems to distributed control systems. Under this common theme, this dissertation can be broadly organized into three parts based on the system environments. The first part investigates stochastic optimization in real-time wireless systems, with a focus on deadline-aware scheduling for real-time traffic. The optimal solution to such scheduling problems requires explicitly taking into account the coupling in the deadline-aware transmissions and the stochastic characteristics of the traffic, which involves a dynamic program that is traditionally known to be intractable or computationally expensive to implement. First, real-time scheduling with adaptive network coding over memoryless channels is studied, and a polynomial-time algorithm is developed to characterize the optimal real-time scheduling. Then, real-time scheduling over Markovian channels is investigated, where channel conditions are time-varying and online channel learning is necessary, and the optimal scheduling policies in different traffic regimes are studied. The second part focuses on the stochastic optimization and real-time scheduling involved in energy systems. First, risk-aware scheduling and dispatch for plug-in electric vehicles (EVs) are studied, aiming to jointly optimize the EV charging cost and the risk of load mismatch between the forecasted and actual EV loads due to the random driving activities of EVs. Then, the integration of wind generation at high penetration levels into bulk power grids is considered, and joint optimization of economic dispatch and interruptible load management is investigated using short-term wind-farm generation forecasts. The third part studies stochastic optimization in distributed control systems under different network environments. First, distributed spectrum access in cognitive radio networks is investigated using a pricing approach, where primary users (PUs) sell temporarily unused spectrum and secondary users compete via random access for such spectrum opportunities. The optimal pricing strategy for PUs and the corresponding distributed implementation of spectrum access control are developed to maximize the PUs' revenue. Then, a systematic study of the nonconvex utility-based power control problem is presented under the physical interference model in ad-hoc networks. Distributed power control schemes are devised to maximize the system utility by leveraging extended duality theory and simulated annealing.
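
As a toy illustration of the deadline-aware dynamic programs this part deals with, the sketch below runs backward induction for two flows over memoryless channels with a common deadline; the model and the numbers are illustrative assumptions, not the dissertation's network-coding formulation.

```python
from functools import lru_cache

P = (0.8, 0.5)   # per-slot success probability of each flow's channel (toy)
HORIZON = 6      # slots remaining until the common deadline

@lru_cache(maxsize=None)
def value(t, n1, n2):
    """Max expected deliveries with t slots left and (n1, n2) packets queued."""
    if t == 0 or (n1 == 0 and n2 == 0):
        return 0.0
    best = 0.0
    if n1 > 0:   # schedule flow 1 in this slot
        best = max(best, P[0] * (1 + value(t - 1, n1 - 1, n2))
                         + (1 - P[0]) * value(t - 1, n1, n2))
    if n2 > 0:   # schedule flow 2 in this slot
        best = max(best, P[1] * (1 + value(t - 1, n1, n2 - 1))
                         + (1 - P[1]) * value(t - 1, n1, n2))
    return best

print(value(HORIZON, 3, 3))   # expected deliveries under the optimal policy
```
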
Date Created
2012

Cognitive communications in white space: opportunistic scheduling, spectrum shaping and delay analysis

Description
A unique feature, yet a challenge, in cognitive radio (CR) networks is the user hierarchy: secondary users (SUs) wishing to transmit data must defer in the presence of active primary users (PUs), whose priority in channel access is strictly higher. Under a common thread of characterizing and improving Quality of Service (QoS) for the SUs, this dissertation is progressively organized under two main thrusts: the first thrust focuses on SU throughput, exploiting the underlying properties of the PU spectrum to design effective scheduling algorithms; the second thrust targets another important QoS metric for the SUs, namely delay, subject to the impact of the PUs' activities, and proposes enhancement and control mechanisms. More specifically, in the first thrust, opportunistic spectrum scheduling for SUs is first considered by jointly exploiting the memory in PU occupancy and channel fading. In particular, the underexplored scenario where PU occupancy exhibits a long temporal memory is taken into consideration. By casting the problem as a partially observable Markov decision process, a set of multi-tier tradeoffs is quantified and illustrated. Next, a spectrum shaping framework is proposed that leverages network coding as a spectrum shaper on the PU's traffic. The shaping effect makes the primary spectrum more predictable, which the SUs exploit to carry out adaptive channel sensing by prioritizing the channel access order, significantly improving their throughput. On the other hand, such predictability can make wireless channels more susceptible to jamming attacks; as a result, caution must be taken in designing wireless systems to balance throughput against jamming resistance. The second thrust turns attention to an equally important performance metric, i.e., delay. Specifically, queueing delay analysis is conducted for SUs employing random access over the PU channels. A fluid approximation is taken, and Poisson-driven stochastic differential equations are applied to characterize the moments of the SUs' steady-state queueing delay. Then, dynamic packet generation control mechanisms are developed to meet given delay requirements for the SUs.
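
A small sketch of the belief tracking that underlies such a POMDP formulation, assuming a two-state Markov model of PU occupancy with toy transition probabilities:

```python
def propagate_idle_belief(b_idle, p_ii=0.9, p_bi=0.3):
    """P(channel idle this slot) given last slot's belief.

    p_ii = P(idle -> idle), p_bi = P(busy -> idle); a long temporal memory in
    PU occupancy corresponds to p_ii and (1 - p_bi) both being close to 1.
    """
    return b_idle * p_ii + (1.0 - b_idle) * p_bi

b = 1.0  # the channel was just sensed idle
for slot in range(1, 6):
    b = propagate_idle_belief(b)
    print(f"{slot} slots after sensing: P(idle) = {b:.3f}")
```
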
Date Created
2012

Robust and efficient medium access despite jamming

Description
Interference constitutes a major challenge for communication networks operating over a shared medium where availability is imperative. This dissertation studies the problem of designing and analyzing efficient medium access protocols that are robust against strong adversarial jamming. More specifically, four medium access (MAC) protocols (JADE, ANTIJAM, COMAC, and SINRMAC), which aim to achieve high throughput despite jamming activities under a variety of network and adversary models, are presented. We also propose a self-stabilizing leader election protocol, SELECT, that can effectively elect a leader in the network despite the existence of a strong adversary. Our protocols can not only deal with internal interference without exact knowledge of the number of participants in the network, but are also robust to unintentional or intentional external interference, e.g., due to co-existing networks or jammers. We model the external interference by a powerful adaptive and/or reactive adversary which can jam a (1 − ε)-portion of the time steps, where 0 < ε ≤ 1 is an arbitrary constant. We allow the adversary to be adaptive and to have complete knowledge of the entire protocol history. Moreover, in case the adversary is also reactive, it uses carrier sensing to make informed decisions about when to disrupt communications. Among the proposed protocols, JADE, ANTIJAM, and COMAC are able to achieve Θ(1)-competitive throughput in the presence of the strong adversary, while SINRMAC is a first attempt to apply the SINR (Signal to Interference plus Noise Ratio) model to the design of robust medium access protocols. The derived principles are also useful for building applications on top of the MAC layer; to this end we present SELECT, an exemplary study of leader election, one of the most fundamental tasks in distributed computing.
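
The flavor of these protocols can be conveyed by a caricature of multiplicative access-probability adaptation (the rules and constants below are simplified assumptions, not the verbatim JADE/ANTIJAM updates): nodes speed up after idle slots and back off after successes, so aggregate contention stays bounded even though the adversary controls most slots.

```python
GAMMA = 0.05   # multiplicative step (illustrative)
P_MAX = 0.25   # cap on an individual node's access probability

def update_access_prob(p, observed):
    """Adapt one node's access probability from the observed channel state."""
    if observed == "idle":
        p = min(p * (1 + GAMMA), P_MAX)   # too little contention: speed up
    elif observed == "success":
        p = p / (1 + GAMMA)               # enough contention: back off
    # Jammed or collision slots carry no reliable feedback: leave p unchanged.
    return p
```
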
Date Created
2012

Adapting sensing and transmission times to improve secondary user throughput in cognitive radio ad hoc networks

Description
Cognitive Radios (CRs) are designed to dynamically reconfigure their transmission and/or reception parameters to utilize the bandwidth efficiently. With a rapidly fluctuating radio environment, spectrum management becomes crucial for cognitive radios. In a Cognitive Radio Ad Hoc Network (CRAHN) setting, the sensing and transmission times of the cognitive radio play an even more important role because of the decentralized nature of the network; they have a direct impact on the throughput. Due to the tradeoff between throughput and sensing time, finding optimal values for the sensing and transmission times is difficult. In this thesis, a method is proposed to improve the throughput of a CRAHN by dynamically changing the sensing and transmission times. To simulate the CRAHN setting, ns-2, the network simulator, is used with an extension for CRAHNs. The CRAHN extension module implements the required Primary User (PU), Secondary User (SU), and other CR functionalities to simulate a realistic CRAHN scenario. First, this work presents a detailed analysis of various CR parameters, their interactions, and their individual contributions to the throughput, in order to understand how they affect transmissions in the network. Based on the results of this analysis, changes to the system model in the CRAHN extension are proposed. Instantaneous throughput of the network is introduced in the new model, which helps determine how the parameters should adapt based on the current throughput. Along with instantaneous throughput, checks for interference with the PUs and their transmission power are performed before modifying these CR parameters. Simulation results demonstrate that the throughput of the CRAHN with adaptive sensing and transmission times is significantly higher than with non-adaptive parameters.
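
A hedged sketch of such an adaptation loop is below; the multiplicative rules, thresholds, and bounds are illustrative guesses, not the parameters used in the thesis's ns-2 module.

```python
def adapt_times(t_sense, t_tx, thr_now, thr_prev, pu_detected,
                step=0.1, t_sense_min=1e-3, t_tx_max=0.1):
    """One adaptation step for the sensing and transmission durations."""
    if pu_detected:
        # Risk of interfering with a PU: sense longer, transmit less.
        return t_sense * (1 + step), t_tx * (1 - step)
    if thr_now >= thr_prev:
        # Instantaneous throughput improving: shift time toward transmission.
        return (max(t_sense * (1 - step), t_sense_min),
                min(t_tx * (1 + step), t_tx_max))
    # Throughput dropping: spend more time sensing for better spectrum.
    return t_sense * (1 + step), t_tx * (1 - step)
```
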
Date Created
2012

System level power and thermal management on embedded processors

Description
Semiconductor scaling technology has led to a sharp growth in transistor counts. This has resulted in an exponential increase in both power dissipation and heat flux (or power density) in modern microprocessors. These microprocessors are integrated as the major components in many modern embedded devices, which offer richer features and attain higher performance than ever before. Therefore, power and thermal management have become significant design considerations for modern embedded devices. Dynamic voltage/frequency scaling (DVFS) and dynamic power management (DPM) are two well-known hardware capabilities offered by modern embedded processors. However, power- or thermal-aware performance optimization has not been fully explored for mainstream embedded processors with discrete DVFS and DPM capabilities. Many key questions have not yet been answered. What is the maximum performance that an embedded processor can achieve under a power or thermal constraint for a periodic application? Does there exist an efficient algorithm for the power or thermal management problems with a guaranteed quality bound? These questions are hard to answer because the discrete settings of DVFS and DPM increase the complexity of many power and thermal management problems, which are generally NP-hard. This dissertation presents a comprehensive study of these NP-hard power and thermal management problems for embedded processors with discrete DVFS and DPM capabilities. In the domain of power management, the dissertation addresses the power minimization problem for real-time schedules, the energy-constrained makespan minimization problem on homogeneous and heterogeneous chip multiprocessor (CMP) architectures, and the battery-aware energy management problem with a nonlinear battery discharging model. In the domain of thermal management, the work addresses several thermal-constrained performance maximization problems for periodic embedded applications. All the addressed problems are proved to be NP-hard or strongly NP-hard in this study. The work then focuses on the design of offline optimal or polynomial-time approximation algorithms as solutions in the problem design space. Several of the addressed NP-hard problems are tackled by dynamic programming, yielding optimal solutions with pseudo-polynomial run-time complexity. Because the optimal algorithms are not efficient in the worst case, fully polynomial-time approximation algorithms are provided as more efficient solutions. Some efficient heuristic algorithms are also presented as solutions to several of the addressed problems. This comprehensive study answers the key questions needed to fully explore the power and thermal management potential of embedded processors with discrete DVFS and DPM capabilities. The provided solutions enable theoretical analysis of the maximum performance of periodic embedded applications under power or thermal constraints.
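
The pseudo-polynomial dynamic-programming idea can be sketched on a toy single-processor instance: choose one discrete DVFS setting per task to minimize total execution time under an integer energy budget. The settings and numbers are illustrative, not drawn from the dissertation.

```python
import math

FREQS = [(1.0, 4), (0.6, 2), (0.3, 1)]   # (speed, energy per cycle) settings
TASKS = [3, 2, 4]                        # cycles required by each task
BUDGET = 20                              # integer energy budget

def min_time_under_energy(tasks, budget):
    INF = math.inf
    best = [INF] * (budget + 1)          # best[e] = min time using exactly e energy
    best[0] = 0.0
    for cycles in tasks:                 # one DVFS setting chosen per task
        nxt = [INF] * (budget + 1)
        for e, t in enumerate(best):
            if t == INF:
                continue
            for speed, epc in FREQS:
                ne = e + cycles * epc
                if ne <= budget:
                    nxt[ne] = min(nxt[ne], t + cycles / speed)
        best = nxt
    return min(best)                     # table size is pseudo-polynomial in the budget

print(min_time_under_energy(TASKS, BUDGET))
```
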
Date Created
2012

Sparse learning package with stability selection and application to Alzheimer's disease

Description
Sparse learning is a technique in machine learning for feature selection and dimensionality reduction that finds a sparse set of the most relevant features. In any machine learning problem there is a considerable amount of irrelevant information, and separating the relevant information from the irrelevant has been a topic of focus. In supervised learning such as regression, the data consist of many features, and only a subset of the features may be responsible for the result. The features may also carry structural requirements, which introduces additional complexity for feature selection. The sparse learning package provides a set of algorithms for learning a sparse set of the most relevant features for both regression and classification problems. Structural dependencies among features, which introduce additional requirements, are also supported by the package: features may be grouped together, hierarchies and overlapping groups may exist among them, and the most relevant groups may need to be selected. Despite yielding sparse solutions, these solutions are not guaranteed to be robust; for the selection to be robust, techniques are needed that provide theoretical justification of why certain features are selected. Stability selection is one such method: it allows existing sparse learning methods to be used to select the stable set of features for a given training sample. This is done by assigning a probability to each feature: the training data are sub-sampled, a specific sparse learning technique is used to learn the relevant features, this is repeated a large number of times, and the probability is estimated as the fraction of runs in which a feature is selected. Cross-validation, which evaluates a range of parameter values, is then used to select the parameter value that gives the maximum accuracy score. With such a combination of algorithms, good convergence guarantees, stable feature selection properties, and support for various structural dependencies among features, the sparse learning package is a powerful tool for machine learning research. Its modular structure, C implementation, and ATLAS integration for fast linear algebraic subroutines make it one of the best tools for large sparse settings. The varied collection of algorithms, support for group sparsity, and batch algorithms are a few of the notable functionalities of the SLEP package, and these features can be used in a variety of fields to infer relevant elements. Alzheimer's disease (AD) is a neurodegenerative disease that gradually leads to dementia. The SLEP package is used for feature selection to extract the most relevant biomarkers from the available AD dataset, and the results show that, indeed, only a subset of the features is required to gain valuable insights.
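
The stability selection loop itself is short; here is a compact sketch using scikit-learn's Lasso as a stand-in for an SLEP solver (the parameters and threshold are illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_runs=100, frac=0.5, seed=0):
    """Per-feature selection frequency over random half-subsamples."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])
        counts += (model.coef_ != 0)    # count features with nonzero weight
    return counts / n_runs

# Features whose frequency exceeds a threshold (e.g., 0.6) form the stable set.
```
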
Date Created
2011

Multi-task learning via structured regularization: formulations, algorithms, and applications

Description
Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks, thus facilitating individual task learning. Sharing domain knowledge among the tasks is particularly desirable when there are a number of related tasks but only limited training data is available for each task. Modeling the relationship of multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed to deal with different scenarios and settings; they are respectively formulated as mathematical optimization problems of minimizing the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solutions efficiently. I also conduct theoretical analysis for certain MTL approaches by deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) automated annotation of Drosophila gene expression pattern images and (2) categorization of Yahoo web pages. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithms.
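
As one concrete instance of such a formulation, the sketch below solves least-squares MTL with trace-norm regularization by proximal gradient descent, whose proximal step (singular value soft-thresholding) induces the shared low-dimensional feature subspace. This is a generic illustration of the family, not any single formulation from the dissertation.

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of tau * ||W||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_mtl(Xs, ys, lam=0.1, lr=0.01, iters=300):
    """Xs, ys: per-task data (shared feature dim); returns d x T weight matrix."""
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(iters):
        # Gradient of the summed per-task least-squares losses.
        grad = np.column_stack([Xs[t].T @ (Xs[t] @ W[:, t] - ys[t])
                                for t in range(T)])
        W = svt(W - lr * grad, lr * lam)  # proximal gradient step
    return W
```
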
Date Created
2011