Pitting Game-Playing Agent Against Game-Designing Agent

Overview of the paper “Fully Differentiable Procedural Content Generation through Generative Playing Networks” by Bontrager et al.

In an earlier article, I shared a Procedural Content Generation (PCG) paper that showed how Reinforcement Learning can be used to train an agent to design game levels instead of playing them. Once that agent designed a level, the level's quality was evaluated with a hand-crafted function, which made the approach cumbersome to adapt to different games.

Learning Game-Level Generation with Reinforcement Learning. [source]

So today, I want to share a follow-up paper from the same research group, titled “Fully Differentiable Procedural Content Generation through Generative Playing Networks” by P. Bontrager and J. Togelius. The key difference is that it simultaneously trains a game-playing RL agent and a level-generating agent in a symbiotic relationship: the generator proposes levels for the agent to learn from, and the agent's value estimates provide a differentiable training signal for the generator, removing the need for a hand-crafted evaluation function.
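
To make that gradient flow concrete, here is a minimal PyTorch-style sketch of one generator update, assuming a generator that outputs soft tile maps and a critic (the playing agent's value head) that scores them. All class names, shapes, and the 0.5 difficulty target are illustrative placeholders, not the paper's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a soft tile map describing a level (illustrative)."""
    def __init__(self, latent_dim=32, level_shape=(8, 16, 16)):
        super().__init__()
        c, h, w = level_shape
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, c * h * w),
        )
        self.level_shape = level_shape

    def forward(self, z):
        logits = self.net(z).view(-1, *self.level_shape)
        # Softmax over tile channels keeps the level continuous and differentiable.
        return torch.softmax(logits, dim=1)

class AgentCritic(nn.Module):
    """Value head of the playing agent: estimates V(s0) for a generated level."""
    def __init__(self, level_shape=(8, 16, 16)):
        super().__init__()
        c, h, w = level_shape
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(c * h * w, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, level):
        return self.net(level)

generator, critic = Generator(), AgentCritic()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

# One generator update: push the critic's value estimate of generated levels
# toward a target "learnable difficulty" (0.5 here is a placeholder value).
z = torch.randn(16, 32)
levels = generator(z)
value = critic(levels)              # gradients flow critic -> level -> generator
g_loss = ((value - 0.5) ** 2).mean()
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
# (In the full loop, the playing agent is simultaneously trained with RL
# on these generated levels, closing the symbiotic cycle.)
```

Because the level is a continuous softmax output rather than a discrete sample, the critic's gradient can pass straight through it into the generator's weights, which is what makes the pipeline end-to-end differentiable.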