Full metadata
Title
Meta-Learning in Edge Networks: Model-Based Reinforcement Learning and Distributed Edge Learning
Description
Pushing the artificial intelligence frontier to resource-constrained edge nodes for edge intelligence is nontrivial. This dissertation provides a comprehensive study of optimization-based meta-learning algorithms to build a theoretical foundation for edge intelligence, with a focus on two topics: 1) model-based reinforcement learning (RL); 2) distributed edge learning. Under this common theme, the study is organized into two parts.

The first part studies meta-learning algorithms for model-based RL. First, the fundamental limit of model learning is explored for linear time-varying systems, using a two-step meta-learning algorithm with an episodic block model. A comprehensive non-asymptotic analysis of the sample complexity is provided, in which a two-scale martingale small-ball approach is devised to address the challenges of sample correlation and small sample sizes. Next, policy learning for offline RL in general Markov decision processes is explored. To tackle the challenges therein, e.g., value overestimation and the possibly poor quality of offline datasets, a model-based offline meta-RL approach with regularized policy optimization is proposed, which learns a meta-model for task inference and a meta-policy for safe exploration of out-of-distribution state-action pairs.

The second part investigates meta-learning algorithms for distributed edge learning. First, general edge supervised learning is considered, where an edge node aims to quickly learn a good model with limited samples. A platform-aided collaborative learning framework is proposed that learns a model initialization via federated meta-learning across multiple source nodes and transfers it to target nodes for fine-tuning. A channel gating module is then introduced to select important channels of the backbone network for efficient local computation, and a novel federated meta-learning approach is developed to learn meta-initializations for both the backbone networks and the gating modules, from which a task-specific channel-gated network can be quickly adapted. Taking one step further, continual edge learning is investigated in the context of online meta-learning, where each node faces a sequence of online tasks. A multi-agent online meta-learning framework is developed to improve the task-average performance at a single node under limited communication among neighbors, viewed through the lens of distributed online convex optimization. Building on distributed online gradient descent with gradient tracking, the optimal task-average regret is achieved at a faster rate.
Date Created
2021
Contributors
- Lin, Sen (Author)
- Zhang, Junshan (Thesis advisor)
- Ying, Lei (Thesis advisor)
- Bertsekas, Dimitri (Committee member)
- Nedich, Angelia (Committee member)
- Wang, Weina (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
280 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.168294
Level of coding
minimal
Cataloging Standards
Note
Partial requirement for: Ph.D., Arizona State University, 2021
Field of study: Electrical Engineering
System Created
- 2022-08-22 01:51:53
System Modified
- 2022-08-22 01:52:18