Description
Video games often feature computer-controlled agents that the human player must interact with and overcome.
Designing these agents to cover every case of human interaction is difficult and usually
imperfect, as human players are capable of learning to defeat them in unintended
ways. Artificial intelligence is a growing field that seeks to solve problems by simulating
learning within specific environments. The aim of this paper is to explore the applications that the
self-play branch of artificial intelligence may have for game development in the future,
and to attempt to implement a working self-play agent that learns to play a Pokemon
battle. The battle behavior originally designed for Pokemon's computer-controlled opponents is often suboptimal, getting stuck making
ineffective or incorrect choices, so training a self-play model to learn the strategy and structure of
Pokemon battles from a clean slate should produce an agent that outperforms the
original behavior of the computer-controlled opponents. Though my implementation was ultimately unsuccessful,
this paper serves as a record of my exploration of this field and a log of what worked and what
did not, for the benefit of anyone interested in the same topics.
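To make the self-play idea concrete, the sketch below shows a minimal, hypothetical training loop. It is not the thesis implementation: the toy battle rules, move list, HP buckets, and tabular update rule are all invented for illustration. The only point it demonstrates is the core of self-play, that a single learned policy controls both sides of the battle and improves from the outcomes of games played against itself.

```python
"""Minimal self-play sketch for a toy turn-based battle (illustrative only)."""
import random
from collections import defaultdict

# Hypothetical move set: (name, damage, hit probability).
MOVES = [("tackle", 20, 0.95), ("slash", 35, 0.75), ("hyper_beam", 60, 0.45)]

# One shared tabular value function drives BOTH players (self-play):
# state = (own HP bucket, opponent HP bucket), action = move index.
Q = defaultdict(float)
EPSILON, ALPHA, GAMMA = 0.1, 0.1, 0.95

def bucket(hp):
    return max(0, hp) // 20  # coarse HP buckets keep the table small

def choose(state):
    # Epsilon-greedy action selection over the shared table.
    if random.random() < EPSILON:
        return random.randrange(len(MOVES))
    return max(range(len(MOVES)), key=lambda a: Q[(state, a)])

def play_episode():
    hp = [100, 100]          # hit points for player 0 and player 1
    history = [[], []]       # (state, action) pairs recorded per player
    turn = 0
    while hp[0] > 0 and hp[1] > 0:
        me, opp = turn, 1 - turn
        state = (bucket(hp[me]), bucket(hp[opp]))
        action = choose(state)
        history[me].append((state, action))
        _, damage, accuracy = MOVES[action]
        if random.random() < accuracy:
            hp[opp] -= damage
        turn = opp
    winner = 0 if hp[0] > 0 else 1
    # +1 reward for the winner's moves, -1 for the loser's, discounted
    # backwards through each player's trajectory (Monte Carlo style).
    for player in (0, 1):
        target = 1.0 if player == winner else -1.0
        for state, action in reversed(history[player]):
            Q[(state, action)] += ALPHA * (target - Q[(state, action)])
            target *= GAMMA
    return winner

if __name__ == "__main__":
    wins = sum(play_episode() == 0 for _ in range(5000))
    print(f"player 0 win rate after self-play: {wins / 5000:.2f}")
```

In a real Pokemon battle the state and action spaces are far larger (species, types, stats, status effects, switching), so a practical agent would replace the lookup table with a function approximator such as a neural network, but the self-play structure of the loop stays the same.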


Download restricted.
Restrictions Statement

Barrett Honors College theses and creative projects are restricted to ASU community members.


Details

Title
  • Self Play Machine Learning and Pokemon
Contributors
Date Created
2020-12
Resource Type
  • Text