Description
Reinforcement Learning (RL) encompasses a diverse and expansive collection of approaches that enable systems to learn and adapt through interaction with their environments. However, the widespread deployment of RL in real-world applications is hindered by challenges related to sample efficiency and the interpretability of decision-making processes, both of which are pivotal for advancing RL in complex, real-world scenarios. This thesis addresses these challenges. It first presents a novel approach for learning dynamic abstract representations for continuous or parameterized state and action spaces. Empirical evaluations show that the proposed approach achieves higher sample efficiency and outperforms state-of-the-art deep RL methods. Second, it presents HOPL, a new approach for transfer reinforcement learning in Stochastic Shortest Path (SSP) problems over factored domains with unknown transition functions. HOPL continually learns transferable, generalizable knowledge in the form of symbolically represented options and integrates search techniques with RL, solving new problems by efficiently composing the learned options. Empirical results show that the approach achieves superior sample efficiency compared to state-of-the-art methods for transferring learned knowledge.
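The abstract does not include code, but the option-composition idea can be sketched concretely. Below is a minimal, hypothetical Python sketch, assuming options are represented by symbolic preconditions and add/delete effects and composed by breadth-first search over symbolic states; all names here (Option, successor, compose, the toy propositions) are illustrative assumptions, not HOPL's actual interface or algorithm.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    """A learned option described symbolically: applicable when its
    precondition propositions hold; executing it adds/deletes propositions."""
    name: str
    pre: frozenset
    add: frozenset
    delete: frozenset

def successor(state: frozenset, opt: Option) -> frozenset:
    """Symbolic state reached by executing the option."""
    return (state - opt.delete) | opt.add

def compose(start: frozenset, goal: frozenset, options: list) -> list | None:
    """Breadth-first search over symbolic states for a sequence of learned
    options whose composition reaches the goal (fewest options first)."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # every goal proposition holds
            return plan
        for opt in options:
            if opt.pre <= state:   # option applicable in this state
                nxt = successor(state, opt)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [opt.name]))
    return None                    # goal unreachable with current options

# Toy usage: two options learned on earlier tasks compose to solve a new one.
opts = [
    Option("fetch_key", frozenset({"at_start"}), frozenset({"has_key"}), frozenset()),
    Option("open_door", frozenset({"has_key"}), frozenset({"door_open"}), frozenset()),
]
print(compose(frozenset({"at_start"}), frozenset({"door_open"}), opts))
# -> ['fetch_key', 'open_door']
```

In this toy setting the symbolic search composes previously learned options into a solution for an unseen goal without further environment interaction, which is the intuition behind the sample-efficiency gains the abstract describes.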
Details
Title
- Applications of Conditional Abstractions for Sample Efficient And Scalable Reinforcement Learning
Contributors
- Verma, Shivanshu (Author)
- Srivastava, Siddharth (Thesis advisor)
- Gopalan, Nakul (Committee member)
- Choi, YooJung (Committee member)
- Arizona State University (Publisher)
Date Created
2024
Note
- Partial requirement for: M.S., Arizona State University, 2024
- Field of study: Computer Science