Building the Arena: A Return to Roots
My name is Ian Olmstead, and I want to build video games like I did as a kid.
That’s really what this is about. Sure, I’ve spent years working in data science, building machine learning models, and helping clients solve complex problems through Third Eye Consulting. But there’s something about game development that’s always pulled me back—that same spark I felt as a teenager writing my first lines of code, watching pixels move across a screen because I told them to.
So I’m building something. Not for profit, not for clients, just for the pure joy of creation. Battle Arena: Robots & Zombies & Ghosts is my free, open-source passion project, and I’m excited to share it with you.
What Is Battle Arena?
At its core, Battle Arena is an interactive 2D isometric environment where autonomous AI agents learn, fight, and evolve. Think of it as a decision-making laboratory disguised as a game. Robots, zombies, and ghosts spawn into the arena and begin navigating, strategizing, and engaging in combat—all while the system records every move, every decision, every outcome.
But this isn’t just about watching AI duke it out (though that’s admittedly pretty fun). The real magic happens in what comes next: the data these agents generate feeds back into training pipelines, creating progressively smarter behaviors. Each generation of agents learns from the previous one, developing more sophisticated strategies through the iterative cycle of play, data collection, and model refinement.
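To make that cycle concrete, here is a deliberately toy sketch of the loop in Python (the project itself is GDScript, and every name here, such as `run_episode` and `refine`, is hypothetical, not the project's actual code): play an episode while logging every decision, "train" a new policy from the log, then play again with the refined policy.

```python
import random

def random_policy(state):
    """Generation-zero behavior: pick an action at random."""
    return random.choice(["attack", "dodge", "retreat"])

def run_episode(policy, steps=20):
    """Play one episode, logging every (state, action, reward) record."""
    log = []
    state = 0
    for _ in range(steps):
        action = policy(state)
        # Toy reward: attacking tends to pay off more than retreating.
        reward = {"attack": 1.0, "dodge": 0.5, "retreat": 0.1}[action] * random.random()
        log.append({"state": state, "action": action, "reward": reward})
        state += 1
    return log

def refine(log):
    """Stand-in for model training: greedily pick the action with the
    best mean reward observed in the previous generation's data."""
    totals, counts = {}, {}
    for rec in log:
        totals[rec["action"]] = totals.get(rec["action"], 0.0) + rec["reward"]
        counts[rec["action"]] = counts.get(rec["action"], 0) + 1
    best = max(totals, key=lambda a: totals[a] / counts[a])
    return lambda state: best

# One turn of the cycle: play naively, refine, play again.
gen0_log = run_episode(random_policy)
gen1_policy = refine(gen0_log)
gen1_log = run_episode(gen1_policy)
```

In the real arena, `refine` would be a proper training pipeline and the log would hold full observations rather than a step counter, but the shape of the loop is the same.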
The arena supports multiple learning approaches. Reinforcement learning agents can train through self-play, discovering optimal policies through trial and error. Human players can also take direct control, generating high-quality supervised learning datasets through their own strategic decisions. This hybrid approach creates a rich environment for experimentation and discovery.
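One way to picture the hybrid approach is a shared record schema tagged by source, so self-play rollouts and human demonstrations land in the same training set. This Python sketch is purely illustrative (the `Decision` type, field names, and oversampling weight are all assumptions, not the project's schema):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    state: list   # observation snapshot (positions, health, etc.)
    action: str   # the action that was chosen
    source: str   # "self_play" or "human"

def merge_datasets(self_play, human, human_weight=3):
    """Build one training set; human demonstrations are oversampled
    here on the assumption they are higher quality but scarcer."""
    return self_play + human * human_weight

sp = [Decision([0.0, 1.0], "attack", "self_play")]
hu = [Decision([0.5, 0.2], "dodge", "human")]
dataset = merge_datasets(sp, hu)
```

Keeping both sources in one format is what lets a behavior-cloning pass and a reinforcement-learning pass draw from the same logs.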
Why Build This?
I’ve always been fascinated by the intersection of games and machine learning. Games provide perfect sandboxes for testing AI systems—they have clear rules, measurable outcomes, and endless opportunities for experimentation. Battle Arena takes that concept and runs with it, creating a continuous learning loop where agents improve through cumulative training experiences.
The vision is straightforward: as models get better, they generate better training data, which produces even stronger agents in subsequent runs. It’s a virtuous cycle of iterative development, all happening within a dynamic combat environment that’s genuinely engaging to watch and interact with.
Built in Godot, Built for Experimentation
The project is built using Godot Engine 4.5 and GDScript, with an architecture that emphasizes modularity and extensibility. The design separates concerns cleanly—agent controllers, physics systems, animation management, and data logging all operate independently, making it easy to swap in new decision-making systems or experiment with different agent architectures.
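The payoff of that separation is that decision-making systems become swappable behind a single contract. Here is a minimal Python sketch of the idea (the actual implementation is GDScript, where the same pattern falls out of duck typing; `ScriptedController`, `ModelController`, and `decide` are hypothetical names):

```python
class ScriptedController:
    """Hand-written behavior: charge when an enemy is close."""
    def decide(self, observation):
        return "attack" if observation["enemy_distance"] < 2.0 else "move"

class ModelController:
    """Wraps any callable model behind the same decide() contract."""
    def __init__(self, model):
        self.model = model

    def decide(self, observation):
        return self.model(observation)

class Agent:
    """The agent only knows the decide() contract, so controllers
    can be swapped without touching physics, animation, or logging."""
    def __init__(self, controller):
        self.controller = controller

    def step(self, observation):
        return self.controller.decide(observation)

ghost = Agent(ScriptedController())
ghost.controller = ModelController(lambda obs: "dodge")  # hot-swap mid-run
```

Because physics, animation, and logging never look past `step()`, a learned policy, a scripted baseline, and a human-input controller are interchangeable at runtime.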
This isn’t a polished commercial product, and it’s not trying to be. It’s an experimental platform designed for tinkering, testing, and learning. Whether you’re interested in reinforcement learning, behavior cloning, multi-agent dynamics, or just want to see what happens when you pit different AI architectures against each other in combat, the arena provides a structured environment to explore those questions.
Development Progress and Contributing
I’m tracking development progress through a Jira board that covers the ongoing work, planned features, and experimental directions. The project is under active development and evolving steadily.
If you’re interested in contributing—whether that’s implementing new agent architectures, designing novel reward structures, collecting data under different configurations, or just experimenting with the system—I’d love to hear from you. This is open source in the truest sense: a collaborative space for learning and discovery.
Interested contributors can reach out to me at iolmstead@3rdeyedata.com.
What’s Next
The roadmap ahead includes continued iteration on agent behaviors, enhanced data collection pipelines, integration with various machine learning frameworks, and exploration of emergent multi-agent dynamics. But honestly, part of the excitement is not knowing exactly where this will go. The best parts of game development often emerge from experimentation and happy accidents.
I’m building this because I love building things. Because I want to recapture that teenage excitement of watching something I created come to life on screen. Because machine learning and game development together create possibilities that didn’t exist when I was writing my first games as a kid.
If that resonates with you, come check out the project on GitHub at robots-zombies-ghosts. Fork it, break it, improve it, experiment with it. That’s what it’s there for.
Let’s build something interesting together.