Applications of Conditional Abstractions for Sample Efficient And Scalable Reinforcement Learning

Description

Reinforcement Learning (RL) presents a diverse and expansive collection of approaches that enable systems to learn and adapt through interaction with their environments. However, the widespread deployment of RL in real-world applications is hindered by challenges related to sample efficiency and the interpretability of decision-making processes. This thesis addresses these critical challenges of sample efficiency and interpretability, which are pivotal for advancing RL applications in complex, real-world scenarios.

This work first presents a novel approach for learning dynamic abstract representations for continuous or parameterized state and action spaces. Empirical evaluations show that the proposed approach achieves higher sample efficiency and beats state-of-the-art deep RL methods. Second, it presents HOPL, a new approach for transfer RL in Stochastic Shortest Path (SSP) problems in factored domains with unknown transition functions. HOPL continually learns transferable, generalizable knowledge in the form of symbolically represented options and integrates search techniques with RL to solve new problems by efficiently composing the learned options. The empirical results show that the approach achieves superior sample efficiency compared to state-of-the-art methods for transferring learned knowledge.
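The idea of a dynamic abstraction can be sketched in a few lines: continuous states map to coarse abstract cells, and a cell is split into finer cells when its coarse treatment proves inadequate. The class and thresholds below are purely illustrative, not the algorithm from the thesis.

```python
# Hypothetical sketch of a dynamic state abstraction: continuous 1-D states
# map to coarse cells, and a cell can be refined (split in half) on demand.
class DynamicAbstraction:
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size   # width of a coarse abstract cell
        self.refined = set()         # coarse cells that have been split

    def abstract(self, x):
        """Map a continuous state x to (cell id, cell width)."""
        coarse = int(x // self.cell_size)
        if coarse in self.refined:
            fine = self.cell_size / 2
            return int(x // fine), fine
        return coarse, self.cell_size

    def refine(self, x):
        """Mark the coarse cell containing x for finer resolution."""
        self.refined.add(int(x // self.cell_size))

da = DynamicAbstraction(cell_size=1.0)
before = da.abstract(0.7)   # coarse: (0, 1.0)
da.refine(0.7)
after = da.abstract(0.7)    # finer: (1, 0.5)
```

The point of the sketch is only the interface: planning or RL operates over the abstract cells, and refinement happens where the coarse abstraction loses information that matters for the policy.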
Date Created
2024

Integrating Adversarial Training, Noise Injection, and Mixup into XAI: Pathways to Enhancing Data Efficiency and Generalizability

Description

Rapid advancements in artificial intelligence (AI) have revolutionized various domains, enabling the development of sophisticated models capable of solving complex problems. However, as AI systems increasingly participate in critical decision-making processes, concerns about their interpretability, robustness, and reliability have intensified. Interpretable AI models, such as the Concept-Centric Transformer (CCT), have emerged as promising solutions for enhancing transparency in AI models. Yet increasing model interpretability often requires enriching training data with concept explanations, escalating training costs. Therefore, intrinsically interpretable models like CCT must be designed to be data-efficient and generalizable, so that they can accommodate smaller training sets, and robust against noise and adversarial attacks. Despite progress in interpretable AI, ensuring the robustness of these models remains a challenge.

This thesis enhances the data efficiency and generalizability of the CCT model by integrating four techniques: Perturbation Random Masking (PRM), Attention Random Dropout (ARD), and both manifold mixup and input mixup for memory broadcast. Comprehensive experiments on benchmark datasets such as CIFAR-100, CUB-200-2011, and ImageNet show that the enhanced CCT model achieves modest performance improvements over the original model when using a full training set. Furthermore, this performance gap increases as the training data volume decreases, particularly in few-shot learning scenarios. The enhanced CCT maintains high accuracy with limited data (even without explicitly training on example concept-level explanations), demonstrating its potential for real-world applications where labeled data are scarce. These findings suggest that the enhancements enable more effective use of CCT in settings with data constraints. Ablation studies reveal that no single technique (PRM, ARD, or the mixups) dominates in enhancing performance and data efficiency. Each contributes nearly equally, and their combined application yields the best results, indicating a synergistic effect that bolsters the model's capabilities without any single method being predominant. The results of this research highlight the efficacy of the proposed enhancements in refining CCT models for greater performance, robustness, and data efficiency. By demonstrating improved performance and resilience, particularly in data-limited scenarios, this thesis underscores the practical applicability of advanced AI systems in critical decision-making roles.
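Of the techniques the abstract names, input mixup is the easiest to illustrate: two training examples and their labels are convexly combined with a Beta-distributed coefficient. The sketch below is generic mixup, not the CCT-specific memory-broadcast variant described in the thesis.

```python
# Minimal input mixup: blend two examples and their soft labels with a
# coefficient lam ~ Beta(alpha, alpha). Generic sketch, stdlib only.
import random

def input_mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convexly combine two inputs and their one-hot/soft labels."""
    rng = rng or random.Random(0)
    lam = rng.betavariate(alpha, alpha)               # mixing coefficient in (0, 1)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

x, y, lam = input_mixup([1.0, 1.0], [1.0, 0.0],       # example of class 0
                        [0.0, 0.0], [0.0, 1.0])       # example of class 1
# y is a soft label: lam for class 0 and (1 - lam) for class 1
```

Manifold mixup applies the same blend to hidden representations rather than raw inputs, which is why it can be combined with an attention-based model's internal broadcast step.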
Date Created
2024

Multi-Modal Tumor Survival Prediction via Graph-Guided Mixture of Experts

Description

Large Language Models (LLMs) have displayed impressive capabilities in handling tasks that require few demonstration examples, making them effective few-shot learners. Despite their potential, LLMs face challenges when it comes to addressing complex real-world tasks that involve multiple modalities or reasoning steps. For example, predicting cancer patients' survival period based on clinical data, cell slides, and genomics poses significant logistical complexities. Although several approaches have been proposed to tackle these challenges, they often fall short of achieving promising performance because they cannot consider all modalities simultaneously or account for missing modalities, variations across modalities, and the integration of multi-modal data, ultimately compromising their effectiveness.

This thesis proposes a novel approach for multi-modal tumor survival prediction to address these limitations. Taking inspiration from recent advancements in LLMs, particularly Mixture of Experts (MoE)-based models, a graph-guided MoE framework is introduced. This framework uses a graph structure to manage the predictions effectively and combines multiple models to enhance predictive power. Rather than training a single foundation model for end-to-end survival prediction, the approach leverages a MoE-guided ensemble to automatically manage model calls as tools. By leveraging the strengths of existing models and guiding them through the MoE framework, the aim is to achieve better performance and more accurate predictions in complex real-world tasks. Experiments and analysis on the TCGA-LUAD dataset show improved performance over individual-modality models and vanilla ensemble models.
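The core ensemble mechanism can be sketched independently of the graph guidance: each modality has an expert predictor, a gate assigns weights, and a missing modality is simply dropped with the remaining weights renormalized. Names and numbers below are invented for illustration; the thesis's graph-guided MoE is more involved.

```python
# Hypothetical gated mixture-of-experts combination over per-modality
# risk predictions, tolerant of missing modalities (weight renormalized).
def moe_predict(expert_preds, gate_weights):
    """expert_preds / gate_weights: dicts keyed by modality name.
    A modality with prediction None is treated as missing."""
    avail = [m for m in expert_preds if expert_preds[m] is not None]
    total = sum(gate_weights[m] for m in avail)
    return sum(gate_weights[m] / total * expert_preds[m] for m in avail)

preds = {"clinical": 0.7, "slides": 0.5, "genomics": None}  # genomics missing
gates = {"clinical": 0.5, "slides": 0.3, "genomics": 0.2}
risk = moe_predict(preds, gates)   # weighted over available modalities only
```

In the graph-guided setting, the gate weights would themselves come from a learned structure over the modalities rather than being fixed constants as here.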
Date Created
2024

Interpreting Answers to Yes-No Questions in Twitter

Description

Interpreting answers to yes-no questions in social media is difficult. Yes and no keywords are uncommon, and when answers do include them, their intended meaning is rarely what the keywords suggest. This work presents a new corpus of 4,442 yes-no question-answer pairs from Twitter (Twitter-YN). The corpus includes question-answer instances from different temporal settings, which allow investigating whether having older tweets helps in understanding more contemporary tweets. Common linguistic features of answers meaning yes or no, as well as of those whose interpretation remains unknown, are also discussed. Experimental results show that large language models are far from solving this problem, even after fine-tuning and blending other corpora for the same problem but outside social media (F1: 0.59). In addition to English, this work presents a Hindi corpus of 3,409 yes-no questions and answers from Twitter (Twitter-YN-hi). Cross-lingual experiments are conducted using a distant supervision approach. It is observed that the performance of multilingual large language models in interpreting indirect answers to yes-no questions in Hindi improves when Twitter-YN is blended with distantly supervised data.
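The difficulty the abstract describes is easy to see with a keyword baseline: look for a literal "yes" or "no" token and abstain otherwise. The example answers below are invented for illustration, not drawn from Twitter-YN.

```python
# Toy keyword baseline for yes-no answer interpretation: it can only act
# when an explicit keyword appears, which the abstract notes is uncommon.
def keyword_baseline(answer):
    tokens = answer.lower().replace(",", " ").replace(".", " ").split()
    if "yes" in tokens:
        return "yes"
    if "no" in tokens:
        return "no"
    return "unknown"

examples = [
    ("Are you going tonight?", "I have an early flight.", "no"),
    ("Did you like the show?", "Well, it could have been worse.", "yes"),
]
preds = [keyword_baseline(ans) for _, ans, _ in examples]
# The baseline abstains on both: indirect answers carry no yes/no keyword.
```

Both invented answers are interpretable to a human yet yield "unknown" here, which is the gap the corpus and the fine-tuned models in this work are meant to address.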
Date Created
2023