Description
The goal of reinforcement learning is to enable systems to autonomously solve tasks in the real world, even in the absence of prior data. To succeed in such situations, reinforcement learning algorithms collect new experience through interactions with the environment to further the learning process. The behaviour is optimized by maximizing a reward function, which assigns high numerical values to desired behaviours. Especially in robotics, such interactions with the environment are expensive in terms of the required execution time, human involvement, and mechanical degradation of the system itself. Therefore, this thesis aims to introduce sample-efficient reinforcement learning methods which are applicable to real-world settings and control tasks such as bimanual manipulation and locomotion. Sample efficiency is achieved through directed exploration, either by using dimensionality reduction or trajectory optimization methods. Finally, it is demonstrated how data-efficient reinforcement learning methods can be used to optimize the behaviour and morphology of robots at the same time.
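The core loop described above — collecting experience through interaction and maximizing a numerical reward — can be illustrated with a minimal sketch. This is not the thesis's method; it is a generic epsilon-greedy bandit example (all names and parameters here are invented for illustration) showing how an agent trades off exploration of untried actions against exploitation of the best reward estimate so far:

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Illustrative epsilon-greedy bandit (not from the thesis):
    estimates each arm's value from sampled rewards and mostly
    exploits the arm with the highest current estimate."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n        # times each arm was pulled
    estimates = [0.0] * n   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: random arm
        else:
            # exploit: arm with the best reward estimate
            arm = max(range(n), key=lambda a: estimates[a])
        # noisy reward signal drawn around the arm's true mean
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # incremental update of the running mean
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = run_bandit([0.2, 0.5, 1.0])
```

With enough interactions, the arm with the highest true mean reward is pulled most often. The sample-efficiency question the thesis addresses is precisely how to reach good behaviour with far fewer such interactions, since each one is costly on a physical robot.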
Details
Title
- Sample-Efficient Reinforcement Learning of Robot Control Policies in the Real World
Contributors
- Luck, Kevin Sebastian (Author)
- Ben Amor, Hani (Thesis advisor)
- Aukes, Daniel (Committee member)
- Fainekos, Georgios (Committee member)
- Scholz, Jonathan (Committee member)
- Yang, Yezhou (Committee member)
- Arizona State University (Publisher)
Date Created
2019
Note
- Doctoral Dissertation, Computer Science, 2019