Multi Agent Bayesian Optimization
Description
Efficiently solving global optimization problems remains a pervasive challenge across diverse domains; such problems are characterized by complex, high-dimensional search spaces with non-convexity and noise. Much of the Bayesian optimization literature has highlighted the computational complexity involved in scaling to high dimensions. Non-myopic approximations over a finite horizon have been adopted in recent years by modeling the problem as a partially observable Markov Decision Process (POMDP). Another promising direction is partitioning the input domain into sub-regions, facilitating local modeling of the input space. This localized approach helps prioritize regions of interest, which is particularly crucial in high dimensions. However, little work exists that leverages agent-based modeling of the problem domain alongside the aforementioned methodologies. This work explores the synergistic integration of Bayesian Optimization and Reinforcement Learning by proposing a Multi-Agent Rollout formulation of the global optimization problem. Multi-Agent Bayesian Optimization (MABO) partitions the input domain among a finite set of agents, enabling distributed modeling of the input space. In addition to selecting candidate samples from their respective sub-regions, these agents also influence each other in partitioning the sub-regions. Consequently, each agent optimizes a portion of the function, prioritizing candidate samples that do not undermine exploration in favor of single-step greedy exploitation. This work highlights the efficacy of the algorithm on a range of complex synthetic test functions.
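The sketch below is only a minimal illustration of the partition-and-propose idea described in the abstract, not the thesis implementation: it omits the rollout lookahead and the agent-driven repartitioning, and uses a fixed, equal-width split of a one-dimensional domain in which each agent fits a local Gaussian process and proposes a candidate by expected improvement. Names such as `Agent`, `objective`, and `run_mabo_sketch` are illustrative placeholders.

```python
# Hedged sketch of domain partitioning with per-agent local surrogates.
# Assumes scikit-learn, SciPy, and NumPy; all names here are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # Synthetic 1-D test function (to be minimized); stands in for the
    # complex synthetic benchmarks mentioned in the abstract.
    return np.sin(3.0 * x) + 0.5 * x**2

class Agent:
    """One agent owning a fixed sub-region [low, high] of the domain."""
    def __init__(self, low, high, rng):
        self.low, self.high, self.rng = low, high, rng
        self.X, self.y = [], []
        self.gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                           normalize_y=True)

    def observe(self, x, y):
        # Record a sample and refit the local Gaussian process surrogate.
        self.X.append([x])
        self.y.append(y)
        self.gp.fit(np.array(self.X), np.array(self.y))

    def propose(self):
        # Maximize expected improvement over random candidates drawn
        # only from this agent's own sub-region.
        cand = self.rng.uniform(self.low, self.high, size=(256, 1))
        mu, sigma = self.gp.predict(cand, return_std=True)
        best = min(self.y)
        z = (best - mu) / np.maximum(sigma, 1e-9)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        return float(cand[np.argmax(ei), 0])

def run_mabo_sketch(n_agents=4, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Fixed, equal-width partition of [-2, 2] among the agents
    # (the thesis instead lets agents influence the partitioning).
    edges = np.linspace(-2.0, 2.0, n_agents + 1)
    agents = [Agent(lo, hi, rng) for lo, hi in zip(edges[:-1], edges[1:])]
    for a in agents:  # seed each agent with one random sample
        x0 = rng.uniform(a.low, a.high)
        a.observe(x0, objective(x0))
    for _ in range(n_iters):
        for a in agents:  # each agent samples within its own sub-region
            x = a.propose()
            a.observe(x, objective(x))
    return min(min(a.y) for a in agents)

if __name__ == "__main__":
    print("best value found:", run_mabo_sketch())
```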
Date Created
The date the item was originally created (prior to any relationship with the ASU Digital Repositories).
2024
Agent
- Author (aut): Nambiraja, Shyam Sundar
- Thesis advisor (ths): Pedrielli, Giulia
- Committee member: Bertsekas, Dimitri
- Committee member: Gopalan, Nakul
- Publisher (pbl): Arizona State University