Synthesis of Interpretable and Obfuscatory Behaviors in Human-Aware AI Systems
Description
In settings where a human and an embodied artificial intelligence (AI) agent coexist, the AI agent must be able to reason about the human's preconceived notions of the environment as well as the human's perception limitations. In addition, it should be able to communicate its intentions and objectives effectively to the human-in-the-loop. While acting in the presence of human observers, the AI agent can synthesize interpretable behaviors, such as explicable, legible, and assistive behaviors, by accounting for the human's mental model (including her sensor model) in its reasoning process. This thesis will study behavior synthesis algorithms that focus on improving the interpretability of the agent's behavior in the presence of a human observer. Further, it will study how environment redesign strategies can be leveraged to improve the overall interpretability of the agent's behavior. At times, the agent's environment may also contain purely adversarial entities or mixed entities (i.e., both adversarial and cooperative entities) that try to infer information from the AI agent's behavior. In such settings, it is crucial for the agent to exhibit obfuscatory behavior that prevents sensitive information from falling into the hands of adversarial entities. This thesis will show that both interpretable and obfuscatory behaviors can be synthesized using a single underlying algorithmic framework.
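The sketch below is a heavily simplified, hypothetical illustration of this general idea and is not drawn from the thesis itself: it scores candidate plans by trading off the agent's own plan cost against (a) divergence from the plan a cooperative observer expects, as a proxy for interpretability, and (b) a toy ambiguity measure over several observer hypotheses, as a proxy for obfuscation. All function names, the divergence measure, the toy grid domain, and the weights are illustrative assumptions.

```python
# Illustrative sketch only: NOT the thesis's actual framework. It shows how a
# plan can be scored against an observer's model in two opposite ways.
from typing import Sequence


def plan_cost(plan: Sequence[str]) -> float:
    """Unit-cost plan length; a real domain would use per-action costs."""
    return float(len(plan))


def divergence(plan: Sequence[str], expected: Sequence[str]) -> float:
    """Toy distance between the agent's plan and the plan the observer
    expects: the number of positions where the two plans disagree."""
    longest = max(len(plan), len(expected))
    return float(sum(
        1 for i in range(longest)
        if i >= len(plan) or i >= len(expected) or plan[i] != expected[i]
    ))


def interpretability_score(plan, expected, alpha=1.0, beta=2.0):
    """Lower is better: penalize plans that stray from what the
    cooperative observer expects, on top of the agent's own cost."""
    return alpha * plan_cost(plan) + beta * divergence(plan, expected)


def obfuscation_score(plan, observer_hypotheses, alpha=1.0, beta=2.0):
    """Lower is better: reward plans that stay roughly equidistant from
    several hypotheses an adversarial observer might hold, keeping the
    observer uncertain about which one is true."""
    dists = [divergence(plan, h) for h in observer_hypotheses]
    spread = max(dists) - min(dists)
    return alpha * plan_cost(plan) + beta * spread


if __name__ == "__main__":
    # Two candidate plans for the same goal in a toy grid domain.
    plan_a = ["up", "up", "right", "right"]
    plan_b = ["right", "up", "right", "up"]
    expected = ["up", "up", "right", "right"]

    print(interpretability_score(plan_a, expected))  # matches expectation
    print(interpretability_score(plan_b, expected))  # penalized for divergence
    print(obfuscation_score(plan_b, [expected, ["right", "right", "up", "up"]]))
```

The design choice here, weighting cost against an observer-model term, is only one simple way to operationalize the trade-off; the point is that the interpretable and obfuscatory cases differ mainly in the observer-dependent term, not in the overall structure of the objective.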
Date Created
The date the item was originally created (prior to any relationship with the ASU Digital Repositories).
2021
Agent
- Author (aut): Kulkarni, Anagha
- Thesis advisor (ths): Kambhampati, Subbarao
- Committee member: Kamar, Ece
- Committee member: Smith, David E.
- Committee member: Srivastava, Siddharth
- Committee member: Zhang, Yu
- Publisher (pbl): Arizona State University