LLMs playing Balatro.
All interactions with the game go through the JSON-RPC API provided by the balatrobot mod. LLMs are queried through an Anthropic-compatible API, so it's also possible to use a proxy such as new-api.
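For a feel of what the game side looks like, here is a minimal sketch of a JSON-RPC 2.0 call to the balatrobot endpoint. It assumes the `reqwest` (with the `blocking` and `json` features) and `serde_json` crates, and the method name `get_game_state` is purely illustrative; check the balatrobot documentation for the actual methods and parameters.

```rust
use serde_json::{json, Value};

/// Send a single JSON-RPC 2.0 request to the balatrobot endpoint.
fn call_balatrobot(method: &str, params: Value) -> Result<Value, Box<dyn std::error::Error>> {
    let endpoint = std::env::var("BALATROBOT_ENDPOINT")
        .unwrap_or_else(|_| "http://localhost:12346".to_string());

    // Standard JSON-RPC 2.0 request envelope.
    let request = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params,
    });

    let response: Value = reqwest::blocking::Client::new()
        .post(&endpoint)
        .json(&request)
        .send()?
        .json()?;

    Ok(response)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical method name -- see the balatrobot docs for the real ones.
    let state = call_balatrobot("get_game_state", json!({}))?;
    println!("{state}");
    Ok(())
}
```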
Although the bot works, even smarter models with extended thinking, such as Claude Sonnet 4.5, struggle to play the game properly and sometimes hallucinate about the current game state. Bot performance could probably be improved by experimenting with prompts, but it's not like I care about that for a silly project made in a few hours.
The project is not fully complete. You have to start the game yourself for the bot to do anything, and if it loses, the program just crashes; restart the game and then start the bot again.
- Install the balatrobot mod.
- Launch the game with the mod enabled using `uvx balatrobot serve`.
- Start a new game and proceed to the ante selection screen.
- Provide all environment variables and run `cargo run --release` (see the example below).
- Watch it play!
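Putting it together, a typical launch might look like the following; the API URL and key are placeholders for your own values.

```sh
# Terminal 1: launch Balatro with the balatrobot mod enabled.
uvx balatrobot serve

# Terminal 2: once you are on the ante selection screen, start the bot.
API_URL="https://api.anthropic.com" \
API_KEY="sk-..." \
cargo run --release
```

The full list of recognized environment variables is below.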
Required:

- `API_KEY` - API key used for authentication.
- `API_URL` - Base URL for the Anthropic-compatible API.
Optional:

- `MODEL` - Model name to use for querying the API. Defaults to `claude-sonnet-4-5-20250929`; you probably want to change that depending on your configuration.
- `MAX_TOKENS` - Maximum number of tokens to generate for each response. Defaults to `4096`; not sure how it impacts model gameplay performance.
- `PROMPT_PATH` - Path to the file containing the prompt. Defaults to `$CARGO_MANIFEST_DIR/prompt.txt`, where the default prompt is stored, but you can provide your own.
- `BALATROBOT_ENDPOINT` - Endpoint for the balatrobot API. Defaults to `http://localhost:12346`, which is the default port for balatrobot. No real reason to change it in most cases.
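For context on how these variables fit together, here is a rough sketch of the kind of request the bot sends to the Anthropic-compatible endpoint. It assumes the `reqwest` (blocking) and `serde_json` crates and the standard `/v1/messages` route of the Anthropic Messages API; the bot's actual request construction may differ.

```rust
use serde_json::{json, Value};

/// Send one prompt to the Anthropic-compatible API and return the raw response.
fn query_llm(prompt: &str) -> Result<Value, Box<dyn std::error::Error>> {
    let api_url = std::env::var("API_URL")?;
    let api_key = std::env::var("API_KEY")?;
    let model = std::env::var("MODEL")
        .unwrap_or_else(|_| "claude-sonnet-4-5-20250929".to_string());
    let max_tokens: u32 = std::env::var("MAX_TOKENS")
        .unwrap_or_else(|_| "4096".to_string())
        .parse()?;

    // Standard Anthropic Messages API request body.
    let body = json!({
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{ "role": "user", "content": prompt }],
    });

    let response: Value = reqwest::blocking::Client::new()
        .post(format!("{api_url}/v1/messages"))
        .header("x-api-key", api_key)
        .header("anthropic-version", "2023-06-01")
        .json(&body)
        .send()?
        .json()?;

    Ok(response)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let reply = query_llm("Hello from the Balatro bot!")?;
    println!("{reply}");
    Ok(())
}
```

This is also why a proxy such as new-api works: it only has to expose the same Anthropic-compatible surface at `API_URL`.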
Distributed under The Unlicense.