Description
Learning long-horizon tasks is challenging with techniques such as reinforcement learning and behavior cloning. Previous approaches have split these long tasks into shorter tasks that are easier to learn by using statistical change-point detection methods. However, classical change-point detection methods work only with low-dimensional robot trajectory data, not with high-dimensional inputs such as vision. In this thesis, I split long-horizon tasks, represented as trajectories, into short-horizon sub-tasks with the supervision of language. These shorter-horizon sub-tasks can then be learned using conventional behavior cloning approaches. I draw a comparison between techniques from the video moment retrieval problem and change-point detection on high-dimensional robot trajectory data. The proposed moment-retrieval-based approach shows a more than 30% improvement in mean average precision (mAP) for identifying trajectory sub-tasks with language guidance compared to without. Several ablations are performed to understand the effects of domain randomization, sample complexity, camera views, and sim-to-real transfer of this method. The data ablation shows that with just 100 labeled trajectories, a 42.01 mAP can be achieved, demonstrating the sample efficiency of this approach. Further, behavior cloning models trained on the segmented trajectories outperform a single model trained on the whole trajectory by up to 20%.
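The mAP figures above score predicted sub-task segments against ground-truth segments by temporal overlap, as in video moment retrieval. A minimal sketch of that style of evaluation is below; the function names, the single-threshold AP formulation, and the greedy matching are illustrative assumptions, not the thesis's actual evaluation code.

```python
def temporal_iou(a, b):
    """Intersection-over-union of two (start, end) intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def average_precision(predictions, ground_truth, iou_thresh=0.5):
    """AP at one IoU threshold.

    predictions: list of (score, (start, end)) segment proposals.
    ground_truth: list of (start, end) sub-task segments.
    Greedily matches each prediction (highest score first) to the
    unmatched ground-truth segment with the largest overlap.
    """
    preds = sorted(predictions, key=lambda p: -p[0])
    matched = set()
    tp = fp = 0
    precisions = []
    for _, seg in preds:
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            iou = temporal_iou(seg, gt)
            if iou > best_iou:
                best_iou, best_gt = iou, i
        if best_gt is not None and best_iou >= iou_thresh:
            matched.add(best_gt)
            tp += 1
            precisions.append(tp / (tp + fp))  # precision at each recall step
        else:
            fp += 1
    return sum(precisions) / len(ground_truth) if ground_truth else 0.0
```

Averaging this quantity over language queries (and typically over several IoU thresholds) yields the mAP reported in the abstract.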
Details
Title
- Learning Temporally Composable Task Segmentations with Language
Contributors
- Raj, Divyanshu (Author)
- Gopalan, Nakul (Thesis advisor)
- Baral, Chitta (Committee member)
- Senanayake, Ransalu (Committee member)
- Arizona State University (Publisher)
Date Created
The date the item was originally created (prior to any relationship with the ASU Digital Repositories).
2024
Note
- Partial requirement for: M.S., Arizona State University, 2024
- Field of study: Computer Science