⚡ Wake up, nerds.
💘 Your CLI boyfriend or girlfriend chats with you — sharpen your social skills and discover your charm.
girlfriend-in-cli is a terminal-native romance simulator for the AI-native era: a weird, playful, and surprisingly sincere open-source project for vibe coders who spend too much time in the shell and not enough time practicing how to talk like a human.
This is not just a joke app.
It is built around a simple belief:
If developers are getting better at talking to models every day,
they should also get better at talking to people.
So yes:
- 💻 code in the terminal
- ⏳ waste less Slack time
- 🌙 survive lonely vibe-coding sessions
- 🫶 practice warmth, timing, empathy, and charm
- 🧠 build your own persona harness and talk to the energy you want
Don’t just grind code. Grind charm.
*(demo video: Screen.Recording.2026-04-14.at.3.03.11.AM.mov)*
- ✨ Why this exists
- 🛠️ What it does
- 🧭 The philosophy
- 🚀 Quick Start
- 📝 Release Notes
- 🎮 First run
- 🧠 Build your own persona harness
- ⌨️ Example commands
- 🌐 Remote mode
- 🎛️ In-app controls
- 🗂️ Sessions and export
- 🤝 Contributing
- ✅ Verification
- 🧩 Local-only ECC setup
- ⚖️ License
- 📣 One-line pitch
## ✨ Why this exists

Modern builders already live in the terminal.
That means the terminal can become more than a place for:
- shells
- logs
- tests
- deployments
It can also become a place to practice:
- 💬 conversation flow
- ⏱️ emotional timing
- 👀 reading reactions
- 😏 flirting without sounding robotic
- 🧊 becoming a slightly less socially dead T-type developer
girlfriend-in-cli is for:
- lonely vibe coders
- terminal-first builders
- developers who want to feel a little more human while they work
- people who want to practice social instinct inside the same environment where they build
## 🛠️ What it does

- 💬 Runs a terminal-only chat UI with Rich
- 🧑‍🤝‍🧑 Lets you chat with bundled boyfriend / girlfriend personas
- 💾 Supports saved sessions and resume flow
- 📩 Sends follow-up nudges if you leave the other side hanging
- 🔊 Supports voice output on macOS via `say`
- 🎙️ Supports voice input through a custom transcription command
- 🧪 Includes a Persona Studio for importing, editing, and creating personas
- 🔎 Can auto-generate personas from a name, link, or short prompt
- 🌐 Supports remote persona compilation and hosting workflows
- 📊 Exposes a live ECC trace/debug panel during runs
## 🧭 The philosophy
The point of this project is not “fake romance.”
The point is that the AI-native era should not produce developers who are only good at:
- prompting models
- shipping faster
- writing more code
It should also produce developers who are better at:
- empathy
- timing
- tone
- emotional calibration
- making other people feel understood
If you can train coding instincts in the terminal, maybe you can train social instincts there too.
## 🚀 Quick Start

```bash
brew tap NomaDamas/girlfriend-in-cli https://github.com/NomaDamas/brew-girlfriend-in-cli.git
brew install girlfriend-in-cli
mygf
```

This is the easiest path for most users.

Preferred launch commands:

```bash
mygf
girlfriend-in-cli
```

Note: a true bare `brew install girlfriend-in-cli` on a fresh machine would require acceptance into `homebrew/core`. Right now the project ships through a public custom tap, which is the realistic path at this stage.
From the repository root:

```bash
uv sync --extra dev
uv run mygf
```

If uv is not installed yet:

```bash
brew install uv
```

That gives you:

- ✅ a local `.venv`
- ✅ an editable project install from `pyproject.toml`
- ✅ the `mygf` shortcut via `uv run`
- ✅ the `girlfriend-in-cli` entrypoint
- ✅ bundled persona discovery

If you prefer activating the environment manually:

```bash
source .venv/bin/activate
mygf
```

If you want to run tests:

```bash
uv run pytest
```

If you want a full smoke check:

```bash
uv run bash scripts/smoke.sh
```

On startup, the app can check the latest stable GitHub Release and prompt before updating.
- ✅ checks releases, not random commits on `main`
- ✅ only updates when you explicitly say yes
- ✅ supports safe upgrade flows for release installs
- ✅ updates the Homebrew tap formula automatically on release publish
- ✅ designed to protect users from unstable in-between pushes
## 📝 Release Notes

Recent release line:

- `v0.1.4.1`: current stable release

Release pages:

- GitHub Releases: github.com/NomaDamas/girlfriend-in-cli/releases

If you want the latest packaged version:

```bash
brew tap NomaDamas/girlfriend-in-cli https://github.com/NomaDamas/brew-girlfriend-in-cli.git
brew upgrade girlfriend-in-cli
```

If you install from source, pull the latest `main` and resync:

```bash
git pull origin main
uv sync --extra dev
```

## 🎮 First run

Launch the app:

```bash
mygf
```
mygfWhen the main menu opens, you can:
- start a new chat
- resume an old session
- open Persona Studio
- change provider / language / performance settings
If you want cloud model-backed chat, set an API key first in your shell or via the in-app Settings menu. For local inference, you can also use Ollama with a local endpoint + model.
Examples:

```bash
export OPENAI_API_KEY=your_key_here
mygf
```

or

```bash
export ANTHROPIC_API_KEY=your_key_here
mygf --provider anthropic
```

or

```bash
mygf --provider ollama --model llama3.2 --ollama-base-url http://127.0.0.1:11434/v1
```

## 🧠 Build your own persona harness

This project is not limited to bundled characters.
One of the real hooks is that you can build your own persona harness:
- import a persona from JSON
- create one manually in Persona Studio
- generate one from a name
- generate one from a link
- generate one from a short description / vibe
Auto-generation is routed through OpenAI / Anthropic only:
- OpenAI keeps live web-search grounding
- Anthropic uses model synthesis plus fetched URL text
- Ollama is for chat/runtime, not Persona Studio generation
The idea is simple:
You should be able to create the exact conversational energy you want to train against.
That means you can build:
- 😘 a flirty persona
- 🧊 a cold persona
- 😜 a playful persona
- 🔗 someone based on a public figure vibe
- 🧩 a totally custom character with your own style rules
In other words:
don’t just use personas — build your own persona harness.
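As a sketch of the import path, a hand-written persona file might look like the one below. The field names here (`display_name`, `relationship_mode`, `style_rules`) are illustrative assumptions, not the app's documented schema; export a bundled persona from Persona Studio to see the real fields.

```bash
# Hypothetical persona file; field names are assumptions, not the real schema.
mkdir -p personas
cat > personas/custom-cold.json <<'EOF'
{
  "display_name": "Yuna",
  "relationship_mode": "girlfriend",
  "style_rules": ["short replies", "dry humor", "slow to warm up"]
}
EOF
# Sanity-check that the file is valid JSON before pointing the app at it.
python3 -m json.tool personas/custom-cold.json > /dev/null && echo "valid persona JSON"
```

You would then launch against it with `mygf --persona personas/custom-cold.json`.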
## ⌨️ Example commands

Run the app:

```bash
mygf
```

Launch with a specific persona:

```bash
mygf --persona personas/wonyoung-idol.json
```

Use Anthropic instead of OpenAI:

```bash
mygf --provider anthropic
```

Use a local Ollama model:

```bash
mygf --provider ollama --model llama3.2
```

Use a specific performance profile:

```bash
mygf --performance turbo
mygf --performance balanced
mygf --performance cinematic
```

Enable voice output:

```bash
mygf --voice-output
```

Resume a saved session:

```bash
mygf --resume sessions/your-session.json
```

List bundled personas:

```bash
mygf --list-personas
```

## 🌐 Remote mode

If you want server-hosted personas and remote runtime generation:
```bash
girlfriend-generator \
  --provider remote \
  --server-base-url http://127.0.0.1:8787 \
  --persona-id persona_123
```

You can also compile a remote persona on the fly:

```bash
girlfriend-generator \
  --provider remote \
  --server-base-url http://127.0.0.1:8787 \
  --compile-remote \
  --display-name Yuna \
  --relationship-mode girlfriend \
  --context-notes "designer in Seongsu with dry humor" \
  --context-link https://instagram.com/example \
  --context-snippet "what are you doing"
```

Remote mode is useful when you want:
- server-owned persona generation
- hosted runtime logic
- more dynamic persona compilation flows
while keeping the terminal UI, transcript export, and local interaction loop in this repo.
## 🎛️ In-app controls

- `Enter`: send message
- `Esc`: clear draft / go back from an empty draft
- `/help`: show command help
- `/trace`: toggle trace panel
- `/status`: print session state into chat
- `/export`: export transcript
- `/voice on` / `/voice off`: toggle voice output
- `/listen`: run voice input command
- `/back`: return to main menu
- `/quit`: quit session
## 🗂️ Sessions and export

Sessions are exported as:
- JSON
- Markdown
under the local `sessions/` directory by default.
That makes it easy to:
- review conversations
- resume old chats
- inspect persona behavior
- reuse transcripts for prompting or iteration
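As a sketch of that last reuse idea, the snippet below flattens an exported session into plain `role: text` lines you could paste into a prompt. The export schema assumed here (a top-level `messages` list with `role` and `text` fields) is illustrative; inspect a real file under `sessions/` for the actual shape.

```bash
# Build a tiny sample export, then flatten it for prompt reuse.
# The "messages"/"role"/"text" schema is an assumption for illustration.
mkdir -p sessions
cat > sessions/example.json <<'EOF'
{"messages": [{"role": "user", "text": "hey"},
              {"role": "persona", "text": "what are you doing"}]}
EOF
python3 - <<'EOF'
import json

with open("sessions/example.json") as f:
    session = json.load(f)

# Print one "role: text" line per message.
for message in session["messages"]:
    print(f"{message['role']}: {message['text']}")
EOF
```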
## 🤝 Contributing

This repo is open to fork-and-PR contributions.
Typical flow:
- Fork the repo
- Create a branch from `main`
- Make a focused change
- Run tests
- Open a PR
Good contribution targets:
- terminal UX polish
- persona tuning
- provider integrations
- docs and release workflow improvements
## ✅ Verification

Run the test suite:

```bash
python3 -m pytest
```

Run the smoke checks:

```bash
bash scripts/smoke.sh
```

The smoke path verifies:
- package import / compilation
- entrypoints
- persona discovery
- transcript export
- repository-root path behavior
## 🧩 Local-only ECC setup

This repository vendors Everything Claude Code assets project-locally.

It uses:

- `AGENTS.md`
- `.codex/AGENTS.md`
- `.agents/skills/`

It does not modify global Codex defaults or your `~/.codex` setup unless you explicitly choose to do that yourself.
## ⚖️ License

This project is licensed under the Elastic License 2.0.
That means people can use, modify, and distribute the code, but they cannot turn it into a competing hosted/managed service.
This is intentional:
- the terminal client can stay public
- the monetizable server-side moat remains protected

