Dynamic Potential Fields for Flexible Behavior-based Swarm Control via Reinforcement Learning
Description
In this thesis, a novel learning approach to controlling a quadcopter (drone) swarm is explored. To scale to large swarm sizes, swarm control is often achieved in a distributed fashion by combining different behaviors, each of which implements a desired swarm characteristic, such as avoiding obstacles or staying close to neighbors. One common approach to distributed swarm control uses potential fields. A limitation of this approach is that the potential fields often depend statically on a set of control parameters that are manually specified a priori. This thesis introduces Dynamic Potential Fields for flexible swarm control. These potential fields are modulated by a set of dynamic control parameters (DCPs) that can change under different environment situations. Because learning focuses only on these DCPs, the learning problem is simplified and becomes feasible for practical use. The approach uses soft actor-critic (SAC), where the actor only determines how to modify the DCPs in the current situation, resulting in more flexible swarm control. The results show that the DCP approach allows the drones to better traverse environments with obstacles compared to several state-of-the-art swarm control methods that use a fixed set of control parameters. The approach also obtains a higher score on a safety metric commonly used to assess swarm behavior. A comparison against a basic reinforcement learning approach demonstrates faster convergence. Finally, an ablation study is conducted to validate the design of the approach.
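As a rough illustration of the idea summarized above, the Python sketch below shows how a potential-field swarm behavior might be modulated by dynamic control parameters that a learned policy adjusts at run time, rather than fixing them a priori. All names here (e.g., `DynamicPotentialField`, `obstacle_gain`, `update_dcps`) are hypothetical and are not taken from the thesis; this is a minimal sketch of the general technique, not the author's implementation.

```python
import numpy as np

# Hypothetical sketch: a potential-field behavior whose gains act as
# dynamic control parameters (DCPs). A learned actor (e.g., from SAC)
# would output adjustments to these gains rather than raw velocities.

class DynamicPotentialField:
    def __init__(self, obstacle_gain=1.0, cohesion_gain=0.5, separation_gain=0.8):
        # DCPs: gains that can change under different environment situations.
        self.obstacle_gain = obstacle_gain
        self.cohesion_gain = cohesion_gain
        self.separation_gain = separation_gain

    def update_dcps(self, delta):
        # The policy determines how to modify the DCPs in the current situation.
        self.obstacle_gain = max(0.0, self.obstacle_gain + delta[0])
        self.cohesion_gain = max(0.0, self.cohesion_gain + delta[1])
        self.separation_gain = max(0.0, self.separation_gain + delta[2])

    def command(self, pos, neighbors, obstacles):
        # Combine behaviors: each gradient term is scaled by its current DCP.
        v = np.zeros(2)
        for n in neighbors:
            d = n - pos
            dist = np.linalg.norm(d) + 1e-6
            v += self.cohesion_gain * d               # stay close to neighbors
            v -= self.separation_gain * d / dist**2   # avoid crowding them
        for o in obstacles:
            d = pos - o
            dist = np.linalg.norm(d) + 1e-6
            v += self.obstacle_gain * d / dist**3     # avoid obstacles
        return v


if __name__ == "__main__":
    field = DynamicPotentialField()
    # A (hypothetical) policy output nudging the DCPs for the current situation.
    field.update_dcps(np.array([0.2, -0.1, 0.05]))
    vel = field.command(pos=np.zeros(2),
                        neighbors=[np.array([1.0, 0.5])],
                        obstacles=[np.array([-0.5, 0.0])])
    print(vel)
```

In this framing, the reinforcement learning problem is reduced to choosing the small vector of DCP adjustments, which is what the abstract credits with making learning feasible in practice.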
Date Created
2022
Agent
- Author (aut): Ferraro, Calvin Shores
- Thesis advisor (ths): Zhang, Yu
- Committee member: Ben Amor, Hani
- Committee member: Berman, Spring
- Publisher (pbl): Arizona State University