
Show HN: Augur – A text RPG boss fight where the boss learns across encounters https://ift.tt/ya40pIM

I've been building Augur as a solo side project for the last month or so. It started as an experiment to see if I could make a "boss fight" that learned from all comers but still felt genuinely fair to play. The original plan was a simple JRPG-style turn-based encounter engine, but I quickly pivoted to a text interface, recalling my early experiences with Adventure and Zork. That naturally led to incorporating an LLM, and it turned into something I find pretty fun, so I'm sharing it.

The core idea is simple: you play a text-based boss encounter against a character called the Architect, set in a strange library. You can fight, sneak, persuade, or try something I haven't thought of. Turns are mechanically resolved with d100 rolls, conditions track injuries instead of HP, and objects in the world have physical properties the LLM reasons about. The "engine" is property-based rather than tables of rules, and I've found that yields some novel gameplay.

The part I'm most interested in exploring is the learning. The Architect builds impressions from what it actually perceived during an encounter, stores them as vector embeddings, and retrieves relevant ones at the start of future encounters. It's lossy on purpose, more like human memory than a database lookup. If a tactic keeps working, the Architect starts recognizing the pattern. If you sneak past undetected, it remembers losing but not how.

The technical foundation for all of this is a dual-LLM turn loop. Each turn makes two model calls: an engine model that sees the full game state and resolves mechanics, then an architect model that receives only what it has actually perceived (line of sight, noise, zone proximity). The "information asymmetry" is structural and deliberate: the architect model literally cannot access state the engine doesn't pass through the perception filter.
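As a rough illustration of the lossy impression memory described above, here is a minimal sketch. Everything in it is an assumption on my part: the `embed` function is a toy bag-of-letters stand-in for a real embedding model, and `ImpressionStore`, `remember`, and `recall` are hypothetical names, not Augur's actual API.

```python
import math

def embed(text):
    # Toy bag-of-letters embedding, normalized to unit length.
    # A real system would call an embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class ImpressionStore:
    def __init__(self):
        self.items = []  # list of (impression_text, embedding) pairs

    def remember(self, text):
        self.items.append((text, embed(text)))

    def recall(self, cue, k=2):
        # Lossy by design: only the top-k impressions most similar to
        # the cue come back; everything else stays forgotten this time.
        cue_vec = embed(cue)
        ranked = sorted(self.items, key=lambda it: cosine(cue_vec, it[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]
```

The interesting property is that retrieval is cue-driven: what the Architect "remembers" depends on what the new encounter looks like, which is one way a boss could recognize a repeated tactic without replaying a full transcript.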
I tried the single-LLM approach first, and it didn't work. No matter how carefully you prompt a model to "forget" information sitting in its context window, it leaks. Not to mention that the Architect had a habit of slipping into God Mode. Splitting the roles made the whole thing feel honest in a way prompt engineering alone couldn't.

This is my first HN post, and this is a real launch on modest infrastructure (a single Fly.io instance and a small Supabase project), so if it gets any traffic I might hit some rough edges. There's a free trial funded by a community pool, or you can grab credits for $5/$10 if you want to keep going. It's best experienced in a full desktop browser, but it's passable on the two mobile devices I've tested it on.

Playable here: https://ift.tt/bfjKV8e

I'm happy to go deeper on any of the internals: turn flow, perception gating, memory extraction, cost model, whatever is interesting.

March 4, 2026 at 02:20AM
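The dual-call turn loop with its perception gate could be sketched roughly like this. This is my reading of the structure, not Augur's actual code: the event fields (`visible`, `zone`, `loud`) and function names are illustrative stand-ins, and the two models are passed in as plain callables.

```python
def perception_filter(events, architect_zone):
    """Keep only events the Architect could plausibly perceive."""
    perceived = []
    for e in events:
        in_sight = e["visible"] and e["zone"] == architect_zone
        heard = e["loud"]  # loud events carry across zones
        if in_sight or heard:
            perceived.append(e["description"])
    return perceived

def run_turn(engine_model, architect_model, state, player_action):
    # Call 1: the engine model sees full game state and resolves
    # the mechanics of the player's action into concrete events.
    events = engine_model(state, player_action)

    # Call 2: the architect model gets only the filtered view. The
    # asymmetry is structural -- it cannot leak state the engine
    # never passed through the gate, because that state is simply
    # absent from its input.
    view = perception_filter(events, state["architect_zone"])
    return architect_model(view)
```

The point of shaping it this way is exactly the lesson from the failed single-LLM version: instead of asking one model to pretend it can't see part of its own context, the boundary is enforced by what data ever reaches the second call.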

