Full metadata
Title
Learning Interpretable Action Models of Simulated Agents Through Agent Interrogation
Description
Understanding the limits and capabilities of an AI system is essential for the safe and effective use of modern AI systems. In the query-based AI assessment paradigm, a personalized assessment module queries a black-box AI system on behalf of a user and returns a user-interpretable model of the AI system's capabilities. This thesis develops this paradigm to learn interpretable action models of simulator-based agents. Two types of agents are considered: the first uses high-level actions, where the user's vocabulary captures the simulator state perfectly; the second operates on low-level actions, where the user's vocabulary captures only an abstraction of the simulator state. Methods are developed to interface the assessment module with each type of agent. Empirical results show that these methods can learn interpretable models of agents operating across a range of domains.
Date Created
2021
Contributors
- Marpally, Shashank Rao (Author)
- Srivastava, Siddharth (Thesis advisor)
- Zhang, Yu (Committee member)
- Fainekos, Georgios E (Committee member)
- Arizona State University (Publisher)
Extent
58 pages
Language
eng
Copyright Statement
In Copyright
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.161715
Level of coding
minimal
Note
Partial requirement for: M.S., Arizona State University, 2021
Field of study: Computer Science
System Created
- 2021-11-16 03:25:26
System Modified
- 2021-11-30 12:51:28