Full metadata
Title
Learning Complex Behaviors from Simple Ones: An analysis of Behavior-based Modular Design for RL Agents
Description
Traditional Reinforcement Learning (RL) assumes that policies are learned from a reward signal available in the environment, but learning in a complex domain sometimes requires knowledge drawn from a wide range of experience. In behavior-based robotics, it is observed that a complex behavior can be described as a combination of simpler behaviors. It is tempting to apply a similar idea to RL, so that simpler behaviors can be combined in a meaningful way to tailor the resulting complex behavior. Such an approach would enable faster learning and modular design of behaviors. Combined behaviors can themselves be composed with other behaviors to create even more advanced behaviors, resulting in a rich set of possibilities. As in standard RL, the combined behavior can keep evolving by interacting with the environment. The requirement of this method is that a reasonable set of simple behaviors be specified. In this research, I present an algorithm that combines behaviors such that the resulting behavior exhibits characteristics of each individual behavior. The approach is inspired by behavior-based robotics, in particular the subsumption architecture and motor schema-based design. The combination algorithm outputs n weights that combine the behaviors linearly; the weights are state-dependent and change dynamically at every step of an episode. The idea is tested on discrete and continuous environments, namely OpenAI’s “Lunar Lander” and “Bipedal Walker”. Results are compared with related approaches such as multi-objective RL, hierarchical RL, transfer learning, and basic RL. It is observed that combining behaviors is a novel way of learning that helps the agent achieve the required characteristics. Because a combination is learned for each state, the agent learns faster and more efficiently than with other, similar approaches. The agent demonstrates characteristics of multiple behaviors, which helps it learn and adapt to the environment. Future directions are also suggested as possible extensions to this research.
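The state-dependent weighting described above can be illustrated with a minimal Python sketch. This is not code from the thesis: it assumes n pretrained behavior policies, each mapping a state to an action vector, and uses a softmax over a hypothetical linear weight model as a stand-in for the learned weight function that combines the behaviors linearly at each step.

import numpy as np

def softmax(z):
    # Numerically stable softmax so the n weights are positive and sum to 1.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def combined_action(state, behaviors, weight_params):
    # behaviors: list of n simple policies, each mapping a state to an action vector.
    # weight_params: hypothetical (n x state_dim) matrix standing in for the learned
    # weight model; in the thesis these weights are learned, here it is a placeholder.
    weights = softmax(weight_params @ state)              # n state-dependent weights
    actions = np.stack([pi(state) for pi in behaviors])   # shape (n, action_dim)
    return weights @ actions                              # linear combination of behaviors

# Toy usage: two hand-written behaviors on a 4-dimensional state, 2-dimensional action.
hover = lambda s: np.array([0.0, 1.0])
drift = lambda s: np.array([0.5, 0.0])
params = 0.1 * np.random.randn(2, 4)
action = combined_action(np.ones(4), [hover, drift], params)

In a continuous-control setting such as Bipedal Walker the combination can act directly on action vectors as above; in a discrete setting the same weights could instead combine the behaviors' action preferences before an action is selected.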
Date Created
2021
Contributors
- Vora, Kevin Jatin (Author)
- Zhang, Yu (Thesis advisor)
- Yang, Yezhou (Committee member)
- Praharaj, Sarbeswar (Committee member)
- Arizona State University (Publisher)
Extent
56 pages
Language
eng
Copyright Statement
In Copyright
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.161939
Level of coding
minimal
Note
Partial requirement for: M.S., Arizona State University, 2021
Field of study: Computer Science
System Created
- 2021-11-16 05:21:22
System Modified
- 2021-11-30 12:51:28