What a great lunch read! I've been weekend-warrioring a terminal-based CRPG for a bit myself. I was recently exploring ways to use agents to help with balance testing, which is a real scale problem for a solo indie dev. So far, all I've created is a fight simulator: essentially, take the current player state (stats, effects, gear, companions, etc.), run the fight X times using one of the currently implemented GOAP personalities, and report how often it wins or loses, the average end turn, stuff like that.
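For anyone curious what that loop can look like, here's a minimal, runnable Python sketch of the shape of it. `Fighter`, `aggressive_policy`, and the combat model are invented stand-ins, not my actual API; the point is the fixed-seed batch loop and the summary stats:

```python
import random
from dataclasses import dataclass

@dataclass
class Fighter:
    hp: int
    attack: tuple  # (min, max) damage range

def aggressive_policy(rng, me, foe):
    # stand-in for a GOAP personality: always attack for random damage
    return rng.randint(*me.attack)

def simulate(player, monster, policy, runs=1000, seed=42):
    rng = random.Random(seed)  # fixed seed keeps batches reproducible
    wins, end_turns = 0, []
    for _ in range(runs):
        p = Fighter(player.hp, player.attack)
        m = Fighter(monster.hp, monster.attack)
        turn = 0
        while p.hp > 0 and m.hp > 0:
            turn += 1
            m.hp -= policy(rng, p, m)      # player acts first
            if m.hp > 0:
                p.hp -= policy(rng, m, p)  # then the monster
        wins += m.hp <= 0
        end_turns.append(turn)
    print(f"win rate {wins / runs:.1%}, "
          f"avg end turn {sum(end_turns) / len(end_turns):.1f}")

simulate(Fighter(30, (2, 6)), Fighter(25, (1, 5)), aggressive_policy)
```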
I hadn't really thought about trying to create a harness for agents to play the full game interactively. I'd love to explore this. If you don't mind, here are a few questions:
1) Is it correct to assume I'd still need a dedicated text-only harness even though my game is already text-based, because I make use of menu selections via arrow-key-and-enter interactions?
2) Do you have prompt recommendations for the kinds of feedback you've found useful? I would guess that in your case the objectives of the game are clearer than in an open-world RPG. What dead ends have you run into? Maybe a variety of approaches would be good: one agent tries to fight everything, another focuses on picking up and completing as many quests as possible?
3) How bad is the token burn doing this? Any optimization strategies you've employed?
I did something similar, but instead of having the LLM play the game, I had it build an entire bot system to play it. Bots require much more determinism, but I'd rather burn tokens encoding problem-solving approaches and bot decision profiles than use LLMs for every turn of the game. This can be developed rapidly if you put an agent in a loop and say "figure out how to have the bot reach room 3 in under 10 actions" or something like that. It's easy for this to get bloated, but I found it makes a nice feedback loop that lets me quickly test things like pacing changes and think of the game as a series of user actions that can be sculpted purposefully.
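A hedged Python sketch of what that bot-plus-check loop can look like; every name here (`WORLD`, `make_bot`, the rule format) is invented for illustration. The key idea is that the decision profile is plain data an agent can keep rewriting until the acceptance check passes:

```python
# A bot is just a decision profile: ordered (predicate, action) rules,
# first match wins. Fully deterministic, so runs are reproducible.
WORLD = {1: {"east": 2}, 2: {"east": 3, "west": 1}, 3: {}}  # toy room graph

def make_bot(profile):
    def decide(state):
        for predicate, action in profile:
            if predicate(state):
                return action(state)
        return None
    return decide

# a profile an agent might write: head east whenever possible
go_east = [(lambda s: "east" in WORLD[s["room"]],
            lambda s: ("move", WORLD[s["room"]]["east"]))]

def check(bot, start=1, goal=3, max_actions=10):
    # the kind of acceptance test you hand the agent-in-a-loop
    state, actions = {"room": start}, 0
    while state["room"] != goal and actions < max_actions:
        decision = bot(state)
        if decision is None:
            break  # bot is stuck; the check fails
        _, target = decision
        state["room"], actions = target, actions + 1
    return state["room"] == goal, actions

ok, n = check(make_bot(go_east))
print(f"reached room 3: {ok} in {n} actions")
```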
I landed on something similar for my own game, though it's been pretty tricky.
I'm building a physics-based 2D game involving slingshotting around planets. Its realtime nature has meant that it's nearly impossible for the AI to test using a browser MCP. It'll take one screenshot, then another, and in the intervening time the player has shot off the map and into deep space.
Instead I gave it both a code-level API to step the physics engine forward and backward, and a browser-based `window.game` API to do the same via a browser MCP console. The former helps it work out physics bugs; the latter helps it test animation and UI issues.
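In case it's useful, the rewindable-stepping part can be as simple as a snapshot stack over a deterministic tick. A Python toy (my game isn't Python, and the gravity math is a stand-in), just to show the shape of the API the agent gets:

```python
import copy
import math

class SteppablePhysics:
    # deterministic tick + snapshot stack = step forward and backward
    def __init__(self, state):
        self.state = state
        self.history = []  # one snapshot per tick

    def step(self, dt=1 / 60):
        self.history.append(copy.deepcopy(self.state))
        px, py = self.state["pos"]
        vx, vy = self.state["vel"]
        r = math.hypot(px, py) or 1e-9
        a = -50.0 / r**3  # stand-in gravity toward a planet at the origin
        vx, vy = vx + a * px * dt, vy + a * py * dt
        self.state["pos"] = (px + vx * dt, py + vy * dt)
        self.state["vel"] = (vx, vy)

    def step_back(self):
        if self.history:
            self.state = self.history.pop()

sim = SteppablePhysics({"pos": (10.0, 0.0), "vel": (0.0, 2.0)})
for _ in range(120):
    sim.step()       # the agent can advance to any tick it wants...
sim.step_back()      # ...and rewind to inspect the state just before
print(sim.state["pos"], sim.state["vel"])
```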
It's still not great. I keep occasionally getting "I tested it and it works perfectly!" as I stare at the MCP'd browser with the player stuck clipped halfway into a planet. If anything, I think I need to lean harder into this approach: building really solid tooling for the AI to inspect every aspect of state. I would kill for a turn-based game like OP XD
I've been doing something similar on my own weekend game! I've got two games in Rust I'm working on: a simple one in Tauri and a more traditional 2D game. For both, I added a CLI that allows me or an AI to play the game and test it. It hooks into the actual game state, just like here, as another way to "render" the game. I think this is pretty similar to end-to-end testing strategies, but with the current state of AI you can do really interesting testing while you're building something. I like starting a fresh AI with no context on the game and giving it just instructions on how to use the CLI. It's an extra pair of eyes for rubber-ducking.
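A minimal sketch of the "CLI as another renderer" idea, in Python for brevity (the actual games are Rust, and all names here are invented). The same state drives both a human-readable line and a machine-readable dump the AI can parse:

```python
import json
import sys

state = {"room": "cave", "hp": 10, "inventory": []}
EXITS = {"cave": ["tunnel"], "tunnel": ["cave", "vault"], "vault": []}

def render(state):
    # text "renderer" over the same game state the real UI uses
    return f"room={state['room']} hp={state['hp']} exits={EXITS[state['room']]}"

for line in sys.stdin:
    parts = line.split()
    if not parts:
        continue
    cmd, *args = parts
    if cmd == "look":
        print(render(state))
    elif cmd == "go" and args and args[0] in EXITS[state["room"]]:
        state["room"] = args[0]
        print(render(state))
    elif cmd == "state":
        print(json.dumps(state))  # machine-readable dump for the AI
    elif cmd == "quit":
        break
    else:
        print(f"unknown or illegal command: {line.strip()}")
```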
I've been doing this lately, building a Godot game with Copilot CLI. I'm using Godot MCP Pro, which can automate interactions and screenshots, and I have the whole game script in a markdown doc. I was happily surprised when I asked for a walkthrough and it all just worked; it even found and fixed some regressions while I was sleeping.
I recently added E2E tests to my game too. One of the benefits is that I can have my agent verify its own work by asking it to write a test and look at the screenshots. Which means I can say “I’m going to bed, implement this and verify it with E2E tests” and it gets further along than it used to.
This is sick, thanks for sharing! We've been working on very similar things for the past 2 years. We also started with a text-only representation, but sadly we quickly realized that only a small subset of games works well with it.
So we went down a rabbit hole and decided to do everything purely based on pixels and OS inputs.
We're currently only live for mobile but happy to give you early access to nunu ai for PC if interested. Would love to see how we compare!
The degree of choice point-to-point in the skill tree is actually quite limited in most circumstances. There are obviously items like Thread of Hope and Intuitive Leap, or inversion-of-choice items like Unnatural Instinct, which change it slightly.
If the question is optimizing the path to reach those nodes, Path of Building already does a good job. If the question is "what single node will give me the most theoretical power?", it solves that too.
That's actually the beauty of Path of Exile as a whole: the different systems work in combination to lead to an outcome. As an example, if you're playing a life-stacking build, you're finding unique ways to get as many life/strength nodes as possible; that's your gear and your passive tree working in tandem.
Speaking of using AI to optimize characters, and not just the skill tree: you'd need to build some pretty sophisticated tools that do not yet exist to make that happen. No AI alone would be able to do it.
Built something similar for E2E web testing recently. A few observations from running an agentic test harness in production:
1. The single biggest jump in test quality came from giving the agent BOTH source code analysis AND live browser snapshots, not either alone. With code-only the agent hallucinates selectors; with browser-only it misses project conventions. Two MCP servers feeding the same agent — one local file-read, one Playwright in-process — was the architecture that worked.
2. For the browser snapshot tool, returning the raw DOM ate tens of thousands of tokens per call and the agent struggled to navigate it. Swapping to accessibility-tree refs (e1, e2, ...) cut token usage by ~10x and made the agent reliably target the right elements.
3. We avoided Docker-based MCP servers in production (we run on ECS Fargate). The in-process SDK MCP pattern (create_sdk_mcp_server + @tool decorator) keeps the browser handle in scope of the tool definition, which let us attach page.on('console') listeners and have the agent read them via a separate tool. Hard to do that across stdio process boundaries. (A minimal sketch of the pattern follows this list.)
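For concreteness, a sketch of that in-process pattern, assuming the Python Claude Agent SDK and async Playwright. The tool names and bare-bones schemas are mine, and error handling is omitted:

```python
from claude_agent_sdk import create_sdk_mcp_server, tool
from playwright.async_api import async_playwright

console_log: list[str] = []  # filled by the page.on("console") listener

async def build_browser_server():
    pw = await async_playwright().start()
    browser = await pw.chromium.launch()
    page = await browser.new_page()
    # the page handle stays in scope of the tool closures below,
    # which is the part that's hard across stdio process boundaries
    page.on("console", lambda msg: console_log.append(msg.text))

    @tool("snapshot", "Accessibility-tree snapshot of the current page", {})
    async def snapshot(args):
        tree = await page.locator("body").aria_snapshot()
        return {"content": [{"type": "text", "text": tree}]}

    @tool("read_console", "Browser console output captured so far", {})
    async def read_console(args):
        return {"content": [{"type": "text", "text": "\n".join(console_log)}]}

    return create_sdk_mcp_server(name="browser", version="1.0.0",
                                 tools=[snapshot, read_console])
```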
For game testing specifically — your text-renderer detail is interesting because it sidesteps the visual-grounding problem (how does the agent verify what it's seeing?). Curious how you'd extend this to a 2D/3D rendered game where the screen state isn't easily textualized.
I hooked up an MCP server to a MUD and got some pretty amazing results, including Claude Code agents in separate windows chatting with each other and cooperating on building out a new section.
Do share, pray tell! Which MUD were you using? I've been poking around at MUD/MOO-adjacent capabilities and am having to hold the AI back from authoring its own MUD/MOO capabilities instead of dorking with an existing server (which is likely full of security holes and complex, bespoke startup-and-install configuration).
I'd like `mud_or_moo --state-dir ./tmp/some-mud` that stored most things as plain text, or maybe SQLite if really necessary. What I'm angling towards is the core of a MUD that's conceptually a wiki browser over markdown files (i.e. room-001.md => exits => room-002.md), such that _editing and linking_ feel comfortable, almost GUI-like, to a human user.
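To make the shape concrete, a tiny Python sketch of that wiki-over-markdown core. The `--state-dir` layout and the link syntax for exits are assumptions, not an existing tool:

```python
import re
from pathlib import Path

def load_room(state_dir: Path, name: str):
    text = (state_dir / f"{name}.md").read_text()
    title = text.splitlines()[0].lstrip("# ")
    # assumed exit syntax inside the file: "- east: [The Vault](room-002.md)"
    exits = dict(re.findall(r"- (\w+): \[[^\]]*\]\(([\w-]+)\.md\)", text))
    return title, exits

def walk(state_dir: Path, start: str = "room-001"):
    room = start
    while True:
        title, exits = load_room(state_dir, room)
        print(f"{title}  exits: {', '.join(exits) or 'none'}")
        cmd = input("> ").strip()
        if cmd in exits:
            room = exits[cmd]  # following a link == walking an exit
        elif cmd == "quit":
            break
```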
Kind of landed on Evennia as the seeming sweet spot in reaction to your comment.
Once I had the core authorship MCPs working, Claude itself created the whole world, including an initial tutorial sequence, combat, etc.
I've walked an agent through Home Assistant => Wiki-per-room => Zork-Me! It turns out the actual Inform Zork engine is pretty terrible, but it's fun to say "go north ; look table" (and eventually "turn on ha.light_001" ;-).
The "MUD/MOO" aspect is where it opens interesting options of actually curling out to the home assistant instance, and the just kindof wild fun of making a functional "quest" in the context of your own home (eg: solve a mystery? make dinner? battling another user for the TV remote? :-D)
Cool, I was thinking about this very thing. I was looking at CoffeeMud and wondered whether, if I gave it a starting room and a clean slate, it could basically just build out a whole MUD from scratch.
I seem to remember the fatal flaw with harvester AI was that once a harvester was returning to the drop-off building, it would "claim" it, so any other harvesters would just do a dance around the building until the first harvester arrived. As a result, a harvester that was farther away could block closer trucks if it just happened to fill up sooner.
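From memory, the broken logic was roughly first-come-first-served on fill time rather than distance; an illustrative Python toy (not the actual game code):

```python
def assign_dropoff(harvesters, refinery):
    # buggy version: the claim goes to whichever truck filled first,
    # regardless of how far from the refinery it is
    for h in sorted(harvesters, key=lambda h: h["filled_at"]):
        if refinery["claimed_by"] is None:
            refinery["claimed_by"] = h["id"]  # first full truck wins...
        else:
            h["state"] = "dance"              # ...everyone else circles

refinery = {"claimed_by": None}
harvesters = [
    {"id": "far", "dist": 40, "filled_at": 1, "state": "drive"},
    {"id": "near", "dist": 5, "filled_at": 2, "state": "drive"},
]
assign_dropoff(harvesters, refinery)
print(refinery["claimed_by"], [h["state"] for h in harvesters])
# -> far ['drive', 'dance']; sorting by h["dist"] instead would fix it
```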