
pH7Console: AI-Powered Terminal


A privacy-first terminal built with Tauri that runs AI models locally — no telemetry, no cloud, no data leaving your machine.

Usage

Type commands naturally — the AI translates them:

"show me all large files"  →  find . -type f -size +100M -exec ls -lh {} \;
"what's using the most CPU?"  →  top -o cpu
"check git status and stage changes"  →  git status && git add .

AI slash commands:

  • /explain <command> — explain what any command does
  • /fix — analyse the last error and suggest a fix
  • /optimize — suggest a more efficient alternative
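Conceptually, slash-command dispatch boils down to matching the command name and turning the rest of the line into a prompt for the local model. A minimal sketch in shell — the function name and prompt strings are hypothetical, not the project's actual implementation:

```shell
# Hypothetical sketch of slash-command dispatch (illustration only).
dispatch() {
  case "$1" in
    /explain)  shift; echo "Explain what '$*' does" ;;          # prompt for the local LLM
    /fix)      echo "Analyse the last error and suggest a fix" ;;
    /optimize) echo "Suggest a more efficient alternative" ;;
    *)         return 1 ;;                                      # not a slash command
  esac
}

dispatch /explain ls -la   # prints: Explain what 'ls -la' does
```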

Features

  • Natural Language Commands — Type plain English; get shell commands
  • Smart Completions — Context-aware Tab suggestions
  • Error Assistance — AI automatically suggests fixes when commands fail
  • Local LLM Processing — All inference runs on your machine; nothing leaves it
  • Pattern Learning — Learns your workflows and adapts suggestions over time
  • Multi-Session — Cmd+T new session, Cmd+W close, Cmd+1–9 switch

Requirements

  • Rust 1.70+ — rustup.rs
  • Node.js 18+ — nodejs.org
  • RAM: 4 GB minimum (8 GB recommended for AI models)
  • Storage: ~5 GB free (for models and dependencies)
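To confirm your toolchain meets the minimums above, you can compare dotted version numbers with sort -V. A small helper sketch (the function name is ours, not part of the project; requires GNU sort):

```shell
# Succeeds if version $1 >= version $2, compared as dotted version numbers.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the stated minimums:
version_ge "$(rustc --version | awk '{print $2}')" 1.70 && echo "Rust OK"
version_ge "$(node --version | tr -d v)" 18 && echo "Node.js OK"
```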

Install & Run

git clone https://github.com/EfficientTools/pH7Console.git
cd pH7Console
chmod +x setup.sh && ./setup.sh
npm run tauri dev

Build

# Production build
npm run tauri build

# Universal macOS binary (Intel + Apple Silicon)
npm run tauri build -- --target universal-apple-darwin

Build outputs land in src-tauri/target/release/bundle/.

Development

npm run lint                   # TypeScript/React linting
npm run type-check             # TypeScript type checking
npm test                       # Frontend tests
cd src-tauri && cargo test     # Rust backend tests
cd src-tauri && cargo fmt      # Format Rust code
cd src-tauri && cargo clippy   # Lint Rust code
npm run test:e2e               # Integration tests

Local AI Models

Model         Size     RAM       Speed       Best for
Phi-3 Mini    3.8 GB   4–6 GB    200–500 ms  Complex reasoning, code generation
Llama 3.2 1B  1.2 GB   2–3 GB    100–200 ms  General commands, explanations
TinyLlama     1.1 GB   1.5–2 GB  50–100 ms   Real-time completions
CodeQwen      1.5 GB   2–4 GB    150–300 ms  Programming tasks, code analysis
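The table suggests a simple rule of thumb: pick the largest general-purpose model whose RAM range fits your free memory. A hypothetical helper sketching that heuristic (values in GB; it ignores the task-specific CodeQwen and is not the project's actual selection logic):

```shell
# Pick the largest general-purpose model from the table that fits in free RAM (GB).
pick_model() {
  ram_gb=$1
  if   [ "$ram_gb" -ge 6 ]; then echo "Phi-3 Mini"      # needs 4-6 GB
  elif [ "$ram_gb" -ge 3 ]; then echo "Llama 3.2 1B"    # needs 2-3 GB
  elif [ "$ram_gb" -ge 2 ]; then echo "TinyLlama"       # needs 1.5-2 GB
  else echo "none (need at least 2 GB free)"
  fi
}

pick_model 8   # prints: Phi-3 Mini
```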

Tech Stack

  • Frontend: React 18 + TypeScript + Tailwind CSS
  • Backend: Rust + Tauri 2.0
  • AI Runtime: Candle (Rust-native ML framework)
  • Terminal: xterm.js + cross-platform PTY

Author

Pierre-Henry Soria

Made with ❤️ by Pierre-Henry Soria. A super passionate & enthusiastic problem-solver engineer. Also a true cheese 🧀, ristretto ☕️, and dark chocolate lover! 😋

@phenrysay · pH-7 · BlueSky · YouTube Video

License

pH7Console is generously distributed under the MIT license 🎉 Wishing you a happy, productive time! 🍀