Full metadata
Title
Estimating Object Kinematic State Machines Via Human Demonstration
Description
As robots become increasingly integrated into human environments, they need to learn how to interact with the objects around them. Many of these objects are articulated, with multiple degrees of freedom (DoF). Multi-DoF objects have complex joint structures that require specific manipulation orders, yet existing methods consider only objects with a single joint. To capture both the joint structure and the manipulation sequence of an object, I introduce "Object Kinematic State Machines" (OKSMs), a novel representation that models the kinematic constraints and manipulation sequences of multi-DoF objects. I also present Pokenet, a deep neural network architecture that estimates OKSMs from sequences of point cloud data of human demonstrations. I conduct experiments on both simulated and real-world datasets to validate my approach. First, I evaluate the modeling of multi-DoF objects on a simulated dataset, comparing against the current state-of-the-art method. I then assess Pokenet's real-world usability on a dataset collected in my lab, comprising 5,500 data points across 4 objects. Results show that my method successfully estimates the joint parameters of novel multi-DoF objects with over 25% higher accuracy on average than prior methods.
Date Created
2024
Contributors
- GUPTA, ANMOL (Author)
- Gopalan, Nakul (Thesis advisor)
- Zhang, Yu (Committee member)
- Wang, Yalin (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
46 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.193542
Level of coding
minimal
Cataloging Standards
Note
Partial requirement for: M.S., Arizona State University, 2024
Field of study: Computer Engineering
System Created
- 2024-05-02 02:02:02
System Modified
- 2024-05-02 02:02:09
Additional Formats