girlfriend-generator is a terminal-only romance simulation chat designed for short vibe-coding breaks. It keeps the product boundary fixed to the CLI, renders chat bubbles with Rich, simulates typing indicators, sends idle nudges when the conversation stalls, and exposes an ECC trace panel so you can see which local Everything Claude Code assets are driving the session.
- Runs a KakaoTalk-like chat flow in the terminal
- Loads detailed adult personas from `personas/*.json`
- Simulates assistant typing and follow-up nudges
- Supports irregular first-message initiative instead of only reactive replies
- Supports optional voice output on macOS via `say`
- Supports optional voice input through a user-supplied transcription command
- Shows a live `ECC Trace` panel with the active persona, provider, voice adapters, nudge and initiative timers, and local skill roots
This repository vendors Everything Claude Code assets project-locally:

- `AGENTS.md`
- `.codex/AGENTS.md`
- `.agents/skills/`

It does not modify `~/.codex/config.toml` or global Codex defaults.
Fast local path from the repository root:
```bash
bash scripts/bootstrap.sh
source .venv/bin/activate
girlfriend-generator --performance turbo
python3 -m pytest
bash scripts/smoke.sh
```

That path keeps execution terminal-only, uses the low-latency local heuristic backend by default, and verifies the package entrypoint plus transcript/export behavior from the repository root.
Bootstrap a local editable environment from the repository root:
```bash
bash scripts/bootstrap.sh
```

That script prefers `python -m pip install --no-build-isolation -e ".[dev]"`, then falls back to `python setup.py develop` inside a `--system-site-packages` virtualenv when standards-based editable installs are blocked. The `--no-build-isolation` path avoids unnecessary network lookups for build dependencies and keeps setup local without touching `~/.codex`. After installation it also sanity-checks the terminal entrypoints with `girlfriend-generator --help`, `python -m girlfriend_generator --help`, and bundled persona discovery.
On machines without local wheel support, the script skips straight to `python setup.py develop`, which is the verified offline-safe path in this repository.
Manual runtime install from the repository root:
```bash
python3 -m venv --system-site-packages .venv
source .venv/bin/activate
python -m pip install --no-build-isolation -e .
```

If your environment does not have local wheel support, use the offline-safe fallback instead:

```bash
python setup.py develop
```

If you want the local verification stack in the same environment, install the dev extra:

```bash
python -m pip install --no-build-isolation -e ".[dev]"
```

If you choose a non-editable local install and still want persona lookup plus transcript export pinned to this repository, set:

```bash
export GIRLFRIEND_GENERATOR_ROOT=/absolute/path/to/girlfriend_generator
```

Once installed, run the package entrypoint from anywhere:

```bash
girlfriend-generator
```

You can also use the installed module entrypoint from the same environment:

```bash
python3 -m girlfriend_generator --persona personas/han-seo-jin-crush.json
```

If you want to pin a specific persona from outside the repository, pass an absolute path to the persona file.
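The root-pinning behavior described above (editable installs resolve against the repository, non-editable installs honor `GIRLFRIEND_GENERATOR_ROOT`) could be sketched roughly like this; the helper name and layout are assumptions, not the project's actual code:

```python
import os
from pathlib import Path


def resolve_project_root(package_file: str) -> Path:
    """Resolve the repository root, preferring the explicit env override."""
    override = os.environ.get("GIRLFRIEND_GENERATOR_ROOT")
    if override:
        # Non-editable installs: pin persona lookup and exports here.
        return Path(override)
    # Editable installs: walk up from the package source file to the repo root.
    return Path(package_file).resolve().parent.parent


# Hypothetical usage: persona files live under <root>/personas.
personas_dir = resolve_project_root(__file__) / "personas"
```

The point of the override is that installed entrypoints keep transcripts and persona lookups local to this project regardless of the shell's current directory.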
Repo-relative persona paths such as `--persona personas/han-seo-jin-crush.json` are also resolved against the project root, so installed entrypoints keep working even when launched from another directory.
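For orientation, a persona file is plain JSON. The field names below are invented for illustration; the bundled `personas/*.json` files are the ground truth for the real schema:

```python
import json

# Illustrative persona shape -- these keys are assumptions, not the schema
# actually used by the app. Inspect personas/*.json for the real fields.
persona = {
    "id": "demo-girlfriend",
    "display_name": "Yuna",
    "relationship_mode": "girlfriend",
    "speaking_style": "playful, casual, lots of short messages",
    "nudge_lines": ["What are you up to?", "Hey, you still there?"],
}

# Round-trip through JSON the way a persona loader would.
serialized = json.dumps(persona, ensure_ascii=False, indent=2)
loaded = json.loads(serialized)
```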
Interactive chat requires a real TTY on stdin and stdout. Non-interactive commands such as `--help` and `--list-personas` still work in pipes or scripts, but the live chat loop exits cleanly with a short error if you launch it outside a terminal.
Optional flags:
- `--provider heuristic|openai|anthropic|remote`
- `--model <model-name>`
- `--performance turbo|balanced|cinematic`
- `--voice-output`
- `--voice-input-command "<command that prints a transcript to stdout>"`
- `--server-base-url`
- `--persona-id`
- `--session-dir <path>`
- `--no-export-on-exit`
- `--no-trace`
- `--list-personas`
If you want server-owned persona quality and irregular initiation logic, run against the hosting service:
```bash
girlfriend-generator \
  --provider remote \
  --server-base-url http://127.0.0.1:8787 \
  --persona-id persona_123
```

In remote mode:
- persona packs come from the hosting server
- replies and first-message initiative come from the hosting server
- terminal rendering, typing UI, trace UI, voice hooks, and transcript export stay local in this repo
You can either:
- fetch by `--persona-id`
- fetch by `--persona-slug`
- compile a new remote persona on the fly with `--compile-remote`
Example remote compile:
```bash
girlfriend-generator \
  --provider remote \
  --server-base-url http://127.0.0.1:8787 \
  --compile-remote \
  --display-name Yuna \
  --relationship-mode girlfriend \
  --context-notes "vibe of a designer working in Seongsu" \
  --context-link https://instagram.com/yuna.example \
  --context-snippet "Babe, what are you up to?"
```

This repository is intentionally scoped to the terminal-only CLI client. The install, smoke checks, package entrypoints, and docs are optimized around the local Rich chat loop, persona files, transcript export, voice hooks, and ECC trace visibility.
The moat features are intended to live in the separate hosting repository:
- link/context ingestion
- persona compilation
- server-side runtime response generation
- server-side irregular first-message scheduling
- memory APIs
- Type normally to compose a message
- `Enter` sends
- `Esc` clears the draft
- `/help` shows in-app commands
- `/trace` toggles the ECC trace panel
- `/status` posts internal session state into the chat
- `/export` writes JSON and Markdown transcripts to `sessions/`
- `/voice on` and `/voice off` toggle voice output
- `/listen` runs the configured voice-input command and sends the transcript
- `/quit` exits
While the assistant is already typing, finishing a queued reply, or running `/listen`, the next user send is held until that assistant turn completes. This avoids overlapping replies and keeps typing indicators, idle nudges, and transcript ordering stable.
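The turn-gating rule above can be pictured as a busy flag plus a pending queue. This is a simplified single-threaded sketch, not the app's real event loop:

```python
from collections import deque


class TurnGate:
    """Hold user sends while an assistant turn is in flight (illustrative)."""

    def __init__(self) -> None:
        self.busy = False            # an assistant turn is running
        self.pending = deque()       # user sends held until the turn ends
        self.delivered = []          # messages actually handed to the model

    def send(self, message: str) -> None:
        if self.busy:
            self.pending.append(message)   # held, preserving order
        else:
            self._deliver(message)

    def _deliver(self, message: str) -> None:
        self.busy = True                   # assistant turn starts
        self.delivered.append(message)

    def assistant_done(self) -> None:
        self.busy = False
        if self.pending:                   # release exactly one held send
            self._deliver(self.pending.popleft())
```

Queueing instead of dropping is what keeps transcript ordering and idle-nudge timing stable.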
```bash
girlfriend-generator \
  --persona /absolute/path/to/girlfriend_generator/personas/yu-na-girlfriend.json \
  --voice-output
```

Voice output works out of the box on macOS through the built-in `say` command.
If `say` is unavailable, the CLI falls back to silent mode instead of failing the chat loop.
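The silent fallback might be implemented along these lines; this is a sketch, and the real adapter may differ:

```python
import shutil
import subprocess


def speak(text: str, say_cmd: str = "say") -> bool:
    """Speak text via the macOS `say` binary; stay silent elsewhere."""
    binary = shutil.which(say_cmd)
    if binary is None:
        return False  # silent mode: never fail the chat loop
    subprocess.run([binary, text], check=False)
    return True
```

Returning a boolean lets the caller keep rendering chat bubbles whether or not audio actually played.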
Voice input is intentionally adapter-based for now. Pass a command that records and transcribes speech, then prints the transcript to stdout. This keeps the base app lightweight while still making voice flows scriptable inside Codex or Claude Code workflows.
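Because the adapter contract is simply "print a transcript to stdout", you can wire in a stub while developing. For example, save the script below as `fake_listen.py` (the filename and canned text are illustrative) and pass it via `--voice-input-command`:

```python
#!/usr/bin/env python3
"""fake_listen.py: stand-in transcription adapter for --voice-input-command."""


def get_transcript() -> str:
    # A real adapter would record audio and run speech-to-text here.
    return "I miss you, what are you doing right now?"


if __name__ == "__main__":
    print(get_transcript())  # the CLI reads this single line from stdout
```

Then launch with `girlfriend-generator --voice-input-command "python3 fake_listen.py"` to exercise the `/listen` flow without any audio stack installed.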
Default runtime is tuned for low latency:
- `--provider heuristic`
- `--performance turbo`
- local zero-network reply generation
- event-driven Rich redraws instead of constant frame refresh
If you switch to `openai` or `anthropic`, quality can improve, but latency will be worse than the local turbo path.
Use `--performance balanced` if you want slightly longer typing simulation without leaving the local heuristic path. Use `--performance cinematic` only when you explicitly want slower, more dramatic pacing.
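One way to picture the three presets is as typing-delay parameters. The numbers below are invented purely for illustration and are not the project's actual tuning:

```python
# Hypothetical pacing table: seconds of simulated typing per character,
# plus a floor so short replies still show a visible typing indicator.
PACING = {
    "turbo":     {"per_char": 0.00, "min_delay": 0.0},
    "balanced":  {"per_char": 0.02, "min_delay": 0.3},
    "cinematic": {"per_char": 0.06, "min_delay": 1.0},
}


def typing_delay(reply: str, performance: str = "turbo") -> float:
    """How long the typing indicator might stay up before a reply lands."""
    p = PACING[performance]
    return max(p["min_delay"], len(reply) * p["per_char"])
```

Under this framing, `turbo` means no artificial delay at all, while `cinematic` trades latency for dramatic pacing.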
By default the app exports each finished session to the repository-local `sessions/` directory as:
- JSON for programmatic reuse
- Markdown for quick review or prompt reuse
Editable installs resolve the export target from the repository root rather than your current shell directory, so installed entrypoints still keep transcripts local to this project. Relative `--session-dir` values are resolved the same way. Repeated exports in the same second use collision-safe filenames instead of overwriting the previous transcript. If you are using a non-editable local install, set `GIRLFRIEND_GENERATOR_ROOT` to the repository path to keep the same behavior. You can also trigger export manually with `/export`.
For the fast repository-root test pass:
```bash
python3 -m pytest
```

Run the full repository-root verification path:

```bash
bash scripts/smoke.sh
```

The smoke script verifies:

- package compilation
- `pytest` from the repository root
- editable runtime install into a temporary virtualenv, using the offline-safe local path when `wheel` is unavailable
- `girlfriend-generator --help`
- `python -m girlfriend_generator --help`
- bundled persona discovery from outside the repository through both entrypoints
- repo-relative persona path resolution from outside the repository
- repository-local transcript path resolution, including the explicit `GIRLFRIEND_GENERATOR_ROOT` override path
- direct transcript export into the repository-local `sessions/` directory
- persona/session behavior through pytest coverage
This repository now includes a repo-local Ralph workflow setup:
- pinned anti-oscillation seed: `.codex/ralph-seed.yaml`
- current loop notes: `.codex/ralph-status.md`
- evidence capture: `scripts/ouroboros_capture_evidence.sh`
- Ralph launcher with ontology gate: `scripts/ouroboros_ralph.sh`
Run the full repo-local Ralph path:
```bash
bash scripts/ouroboros_ralph.sh
```

What it does:

- checks `ouroboros status` health
- captures reproducible verification evidence under `artifacts/ouroboros/latest/`
- scans changed paths for likely ontology drift
- launches an interview if instability is detected
- otherwise runs the pinned Ralph workflow sequentially
To force a re-interview even when the ontology looks stable:
```bash
FORCE_INTERVIEW=1 bash scripts/ouroboros_ralph.sh "Refine the ontology before the next execution"
```