Fuwa is a minimalistic, rich TUI (Terminal User Interface) buddy powered by LLMs. It acts as an emotional, slightly demanding terminal companion (an axolotl!) that observes your high-level file activities and blurts out motivation, challenges, or fun comments.
> **Warning**
> This project is still a work in progress and may change significantly.
| Feature | Description |
|---|---|
| 📁 File System Observer | Watches your folders for changes and understands when you are working hard or slacking off. |
| 🎭 Dynamic Personality | Fuwa's personality evolves based on your text-RPG style interactions with it. |
| ✨ Rich TUI | A sleek, minimal Textual interface featuring a cute pixel-art style animated Axolotl. |
| 🧠 LLM Powered | Uses litellm under the hood, allowing you to use OpenAI, Anthropic, OpenRouter, or any other supported provider. |
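The file-observer idea can be sketched with a plain stdlib polling loop (the actual app may use a dedicated watcher library; `snapshot` and `diff_snapshots` are hypothetical names for illustration):

```python
import os

def snapshot(folder):
    """Map every file under `folder` to its last-modified time."""
    mtimes = {}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                pass  # file vanished between listing and stat; skip it
    return mtimes

def diff_snapshots(before, after):
    """Return (created, modified, deleted) path sets between two snapshots."""
    created = set(after) - set(before)
    deleted = set(before) - set(after)
    modified = {p for p in set(before) & set(after) if before[p] != after[p]}
    return created, modified, deleted
```

Taking a snapshot every few seconds and diffing it against the previous one is enough to tell "working hard" (lots of created/modified files) from "slacking off" (no changes for a while).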
For a simple and automated experience, use the provided fuwa.sh script. It handles installing dependencies into an isolated virtual environment and running the app for you!
```bash
# Install Fuwa (creates an isolated venv and installs dependencies)
./fuwa.sh install

# Run Fuwa
./fuwa.sh run

# Update Fuwa (pulls the latest code and updates dependencies)
./fuwa.sh update

# Troubleshooting (checks environment and dependencies, fixes basic issues)
./fuwa.sh doctor
```

Fuwa will wake up, start observing your files, and interact with you!
## Manual Installation and Running
To avoid dependency conflicts with other projects on your system (dependency hell), it is highly recommended to install Fuwa inside an isolated virtual environment.
- Clone the repository.
- Create and activate a Python virtual environment:
```bash
# Create a virtual environment named 'venv'
python3 -m venv venv

# Activate the environment (Linux/macOS)
source venv/bin/activate

# Activate the environment (Windows)
venv\Scripts\activate
```
- Install the requirements into the isolated environment:
```bash
pip install -r requirements.txt
```
- Run the main application manually:
```bash
source venv/bin/activate
python fuwa.py
```
## Customizing Your Buddy

You can easily replace the default axolotl with your own buddy!
- Create pixel art or images for different "moods".
- Name the files using the format `<mood>_<frame_num>.png`. For example:
  - `normal_1.png`, `normal_2.png`
  - `excited_1.png`, `excited_2.png`
  - `sleeping_1.png`, `sleeping_2.png`
- Drop these images into the `assets/` directory (replacing the existing ones).
- Start Fuwa!
Fuwa will automatically scan the assets/ directory, extract the moods (e.g., NORMAL, EXCITED, SLEEPING), and animate the frames in order. It also communicates these custom moods to the LLM so it knows exactly how to express itself using your new buddy's emotions!
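The scanning step described above could look roughly like this stdlib sketch (`scan_assets` is a hypothetical name; the app's actual implementation may differ):

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches filenames such as "normal_1.png" -> mood "normal", frame 1.
FRAME_RE = re.compile(r"^(?P<mood>[a-z]+)_(?P<num>\d+)\.png$")

def scan_assets(assets_dir):
    """Group frame images by mood, with frames sorted by their number."""
    moods = defaultdict(list)
    for path in Path(assets_dir).glob("*.png"):
        match = FRAME_RE.match(path.name)
        if match:
            moods[match["mood"].upper()].append((int(match["num"]), path))
    # Keep only the ordered frame paths for each mood.
    return {mood: [p for _, p in sorted(frames)] for mood, frames in moods.items()}
```

The derived mood names (e.g. `NORMAL`, `EXCITED`) are what get passed along to the LLM, so any new mood you add is picked up automatically.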
## Configuration

On first run, Fuwa generates a config.json file in the root directory. You must configure it with your desired LLM provider and API key.
```json
{
  "watch_folders": [
    "."
  ],
  "provider": "openai",
  "model": "gpt-4o-mini",
  "api_key": "YOUR_API_KEY_HERE",
  "personality": "You are Fuwa, a cute, slightly sarcastic, and extremely motivating axolotl terminal companion..."
}
```

- `watch_folders`: An array of absolute or relative paths to directories you want Fuwa to observe.
- `provider`: E.g., `openai`, `anthropic`, `openrouter`.
- `model`: E.g., `gpt-4o-mini`, `claude-3-haiku-20240307`, `openrouter/auto`.
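The first-run generation could work along these lines (a sketch only; `load_config` and `DEFAULT_CONFIG` are illustrative names, not the app's actual code):

```python
import json
from pathlib import Path

# Defaults written on first run; users then edit config.json by hand.
DEFAULT_CONFIG = {
    "watch_folders": ["."],
    "provider": "openai",
    "model": "gpt-4o-mini",
    "api_key": "YOUR_API_KEY_HERE",
    "personality": "You are Fuwa, a cute, slightly sarcastic axolotl companion...",
}

def load_config(path="config.json"):
    """Create config.json with defaults if absent, then load and validate it."""
    cfg_path = Path(path)
    if not cfg_path.exists():
        cfg_path.write_text(json.dumps(DEFAULT_CONFIG, indent=2))
    cfg = json.loads(cfg_path.read_text())
    missing = [k for k in ("provider", "model", "api_key") if not cfg.get(k)]
    if missing:
        raise ValueError(f"config.json is missing required keys: {missing}")
    return cfg
```

Because Fuwa talks to providers through litellm, the `provider`/`model` pair maps directly onto litellm's model naming, so switching providers is just a config edit.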
## Running Tests

To run the test suite, ensure your virtual environment is activated, then use:

```bash
python -m pytest test_fuwa.py
```