Generating Natural Language Descriptions from Multimodal Data Traces of Robot Behavior
Description
Natural language plays a crucial role in human-robot interaction because it is the common ground on which humans and robots can communicate and understand each other. However, most work at the intersection of natural language and robotics focuses on generating robot actions from natural language commands, a unidirectional form of communication. This work addresses the opposite direction: an approach that allows a robot to describe its own actions from images and joint sequences sampled during a task. The approach draws on multiple modalities, namely the start and end images of the task environment and the joint trajectories of the robot arms. Fusing these modalities is not merely a matter of concatenating data; it requires knowing what information to extract from each source so that the generated description reflects both the state of the manipulator and the environment in which it performs the task. Experimental results across several simulated robot environments demonstrate that using multiple modalities improves the accuracy of the natural language descriptions, and that fusing the modalities efficiently, so as to exploit the most from each data source, is crucial to generating them.
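For illustration only, the sketch below shows one plausible way such a multimodal describer could be wired together: start and end image features and a joint-trajectory sequence are encoded, fused by concatenation, and used to condition a recurrent language decoder. This is not the thesis's actual architecture; the module names, feature dimensions, and the simple concatenation fusion are assumptions made for the example.

```python
# Minimal sketch of multimodal fusion for describing a robot task in language.
# Assumed: pre-extracted image features, 7-DoF joint trajectories, GRU decoder.
import torch
import torch.nn as nn

class MultimodalDescriber(nn.Module):
    def __init__(self, img_feat_dim=512, joint_dim=7, hidden_dim=256, vocab_size=1000):
        super().__init__()
        # Project pre-extracted features of the start and end images
        self.img_proj = nn.Linear(img_feat_dim, hidden_dim)
        # Encode the joint-trajectory sequence (B, T, joint_dim) with a GRU
        self.traj_enc = nn.GRU(joint_dim, hidden_dim, batch_first=True)
        # Fuse the three modality vectors: start image, end image, trajectory
        self.fuse = nn.Linear(3 * hidden_dim, hidden_dim)
        # Autoregressive language decoder conditioned on the fused context
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.dec = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, start_img_feat, end_img_feat, joint_seq, tokens):
        # start_img_feat, end_img_feat: (B, img_feat_dim); joint_seq: (B, T, joint_dim)
        start = torch.tanh(self.img_proj(start_img_feat))
        end = torch.tanh(self.img_proj(end_img_feat))
        _, traj_h = self.traj_enc(joint_seq)            # traj_h: (1, B, hidden_dim)
        context = self.fuse(torch.cat([start, end, traj_h[-1]], dim=-1))
        # Decode word tokens with the fused context as the initial hidden state
        dec_out, _ = self.dec(self.embed(tokens), context.unsqueeze(0))
        return self.out(dec_out)                        # (B, L, vocab_size) logits
```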
Date Created
2021
Agent
- Author (aut): Kalirathinam, Kamalesh
- Thesis advisor (ths): Ben Amor, Heni
- Committee member: Phielipp, Mariano
- Committee member: Zhang, Yu
- Publisher (pbl): Arizona State University