Description
Imitation learning is a promising methodology for teaching robots how to physically interact and collaborate with human partners. However, successful interaction requires complex coordination in time and space, i.e., knowing what to do as well as when to do it. This dissertation introduces Bayesian Interaction Primitives, a probabilistic imitation learning framework that establishes a conceptual and theoretical relationship between human-robot interaction (HRI) and simultaneous localization and mapping. In particular, it is established that HRI can be viewed through the lens of recursive filtering in time and space. In turn, this relationship allows one to leverage techniques from an existing, mature field and develop a powerful new formulation that enables multimodal spatiotemporal inference in collaborative settings involving two or more agents. Through the development of exact and approximate variations of this method, this work shows that it is possible to learn complex real-world interactions in a wide variety of settings, including tasks such as handshaking, cooperative manipulation, catching, hugging, and more.
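The "recursive filtering in time and space" framing can be made concrete with a small sketch. The snippet below is an illustrative toy, not the dissertation's Bayesian Interaction Primitives formulation: it jointly estimates a temporal phase ("when") and a spatial offset ("where") from noisy observations of a partner's 1-D trajectory using an extended-Kalman-style recursive update. The reference trajectory, state layout, and noise parameters are assumptions chosen for brevity; the actual framework involves a richer state and both exact and approximate filtering variants.

```python
# Illustrative sketch only: a toy recursive Bayesian filter that jointly
# estimates *when* (phase) and *where* (spatial offset) a partner is acting,
# from noisy observations of a 1-D trajectory. The sinusoidal reference,
# state [phase, offset], and noise values are assumptions for illustration.
import numpy as np

def reference(phase):
    """Assumed nominal partner trajectory as a function of phase in [0, 1]."""
    return np.sin(2.0 * np.pi * phase)

def filter_step(mean, cov, z, dt, q=1e-4, r=1e-2):
    """One predict/update cycle over the state [phase, offset]."""
    # Predict: phase advances at the nominal rate, offset stays constant.
    mean = mean + np.array([dt, 0.0])
    cov = cov + q * np.eye(2)

    # Update: linearize the observation model h(x) = reference(phase) + offset.
    phase, offset = mean
    h = reference(phase) + offset
    H = np.array([[2.0 * np.pi * np.cos(2.0 * np.pi * phase), 1.0]])
    S = H @ cov @ H.T + r
    K = cov @ H.T / S
    mean = mean + (K * (z - h)).ravel()
    cov = (np.eye(2) - K @ H) @ cov
    return mean, cov

# Simulate a partner moving slightly faster than nominal, with a spatial offset.
rng = np.random.default_rng(0)
true_phase, true_offset, dt = 0.0, 0.3, 0.02
mean, cov = np.array([0.0, 0.0]), np.eye(2) * 0.1
for _ in range(40):
    true_phase += 1.2 * dt
    z = reference(true_phase) + true_offset + rng.normal(0.0, 0.1)
    mean, cov = filter_step(mean, cov, z, dt)
print("estimated [phase, offset]:", mean, "truth:", [true_phase, true_offset])
```

In this toy, the phase estimate answers "when" and the offset answers "where"; a collaborative robot could, in principle, condition its own response on both quantities at every filtering step.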
Details
Title
- Probabilistic Imitation Learning for Spatiotemporal Human-Robot Interaction
Contributors
- Campbell, Joseph (Author)
- Ben Amor, Heni (Thesis advisor)
- Fainekos, Georgios (Thesis advisor)
- Yamane, Katsu (Committee member)
- Kambhampati, Subbarao (Committee member)
- Arizona State University (Publisher)
Date Created
- 2021
Note
- Partial requirement for: Ph.D., Arizona State University, 2021
- Field of study: Computer Science