Probabilistic Imitation Learning for Spatiotemporal Human-Robot Interaction

Description
Imitation learning is a promising methodology for teaching robots how to physically interact and collaborate with human partners. However, successful interaction requires complex coordination in time and space, i.e., knowing what to do as well as when to do it. This dissertation introduces Bayesian Interaction Primitives, a probabilistic imitation learning framework that establishes a conceptual and theoretical relationship between human-robot interaction (HRI) and simultaneous localization and mapping (SLAM). In particular, it is shown that HRI can be viewed through the lens of recursive filtering in time and space. This relationship makes it possible to leverage techniques from an existing, mature field and to develop a new formulation that enables multimodal spatiotemporal inference in collaborative settings involving two or more agents. Through exact and approximate variants of this method, the work shows that complex real-world interactions can be learned in a wide variety of settings, including handshaking, cooperative manipulation, catching, and hugging, among others.
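
To illustrate the filtering view described in the abstract, one can treat the interaction as recursive state estimation over a latent vector containing a temporal phase and basis-function weights for both partners: the state is predicted forward in time and corrected whenever the human partner is partially observed. The following is a minimal, hypothetical linear-Gaussian sketch of that predict/update loop in Python; the names, dimensions, and linear observation model are illustrative assumptions, not the dissertation's exact (nonlinear, exact and approximate) formulation.

# Hypothetical sketch: recursive filtering over an interaction state made of a
# temporal phase (element 0) and basis-function weights for both agents.
# The linear-Gaussian assumptions here are illustrative only.
import numpy as np

def predict(mu, Sigma, Q, phase_rate=0.02):
    """Time update: advance the estimated phase and inflate uncertainty."""
    mu = mu.copy()
    mu[0] = min(mu[0] + phase_rate, 1.0)   # phase constrained to [0, 1]
    return mu, Sigma + Q

def update(mu, Sigma, z, H, R):
    """Measurement update: correct the joint state with a partial observation z
    of the human partner (observation model H, measurement noise R)."""
    S = H @ Sigma @ H.T + R                # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)     # Kalman gain
    mu = mu + K @ (z - H @ mu)
    Sigma = (np.eye(len(mu)) - K @ H) @ Sigma
    return mu, Sigma

# Toy usage: a 5-D state (phase + 4 weights), observing 2 human DoFs per step.
rng = np.random.default_rng(0)
mu, Sigma = np.zeros(5), np.eye(5)
Q, R = 1e-3 * np.eye(5), 0.05 * np.eye(2)
H = rng.standard_normal((2, 5))            # placeholder observation model
for z in rng.standard_normal((10, 2)):     # stand-in for sensed human motion
    mu, Sigma = predict(mu, Sigma, Q)
    mu, Sigma = update(mu, Sigma, z, H, R)

In this view, the estimated phase answers the "when" and the estimated weights answer the "what," matching the abstract's framing of coordination in both time and space.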
Date Created
2021

Traffic light status detection using movement patterns of vehicles

Description
Traditional methods for detecting the status of traffic lights in autonomous vehicles may be susceptible to errors, which is troublesome in a safety-critical environment. In the case of vision-based recognition methods, failures may arise from disturbances in the environment such as occluded views or poor lighting conditions. Some methods also depend on high-precision metadata, which is not always available. This thesis proposes a complementary detection approach based on an entirely new source of information: the movement patterns of other nearby vehicles. This approach is robust to traditional sources of error and may serve as a viable supplemental detection method. Several classification models for inferring traffic light status from these patterns are presented, and their performance is evaluated on real-world and simulation data sets, achieving up to 97% accuracy on each.
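
As a rough, hypothetical sketch of how such a movement-pattern classifier could be set up (the features, labels, synthetic data, and model below are assumptions for illustration, not the thesis's actual pipeline): summarize the speed profiles of nearby vehicles over a short window and feed the summary to a standard classifier.

# Hypothetical example: infer traffic-light status (0 = red, 1 = green) from the
# motion of nearby vehicles. Features, labels, and data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

def movement_features(vehicle_tracks):
    """vehicle_tracks: list of per-vehicle speed time series (m/s) near the intersection."""
    mean_speed = np.mean([np.mean(s) for s in vehicle_tracks])
    frac_stopped = np.mean([np.mean(np.asarray(s) < 0.5) for s in vehicle_tracks])
    mean_accel = np.mean([np.mean(np.diff(s)) if len(s) > 1 else 0.0 for s in vehicle_tracks])
    return np.array([mean_speed, frac_stopped, mean_accel])

# Synthetic stand-in data: slow or stopped traffic labeled red, free-flowing labeled green.
rng = np.random.default_rng(0)
red_windows = [[rng.uniform(0.0, 2.0, 20) for _ in range(4)] for _ in range(50)]
green_windows = [[rng.uniform(8.0, 14.0, 20) for _ in range(4)] for _ in range(50)]
X = np.array([movement_features(w) for w in red_windows + green_windows])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)       # one of several possible model choices
print(clf.score(X, y))                     # training accuracy on the toy data

Other models (e.g., decision trees or sequence models over the raw patterns) would slot into the same pipeline; the thesis itself presents and compares several such classification models.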
Date Created
2016