# LLM-Neofetch++

Advanced system information tool for local LLM usage. Shows detailed hardware specs optimized for running local AI models.
## Features

- CPU: model, cores, threads, frequency, temperature, usage
- GPU: NVIDIA (nvidia-smi), AMD (rocm-smi), Intel Arc detection
- VRAM: total, used, and available video memory
- RAM: physical memory and swap information
- Storage: disk type (NVMe/SSD/HDD), capacity, speed benchmarks
- Battery: charge level, power status, time remaining (laptops)
- Apple Silicon: M1/M2/M3/M4 detection with unified memory
- Model Recommendations: personalized suggestions based on your hardware
- Quantization Guide: GGUF formats explained (Q2_K through Q8_0)
- Backend Comparison: Ollama, llama.cpp, vLLM, ExLlamaV2, LM Studio
- Performance Estimates: token/s predictions for different model sizes
- Optimization Tips: specific advice for your system configuration
- Color-coded Output: easy to read with semantic colors
- Progress Bars: visual representation of usage and capacity
- Configurable: customize colors, emoji, detail level
- Responsive: adapts to terminal width
- Export Formats: JSON, YAML, Markdown
- Unit Tests: comprehensive test coverage
- Modular Design: easy to extend and customize
- Type Hints: full type annotations
- Verbose Mode: detailed logging for debugging
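The size and speed figures behind the quantization guide and performance estimates can be approximated with simple arithmetic. A minimal sketch of the idea (the bits-per-weight values are rough averages for each GGUF format, and the bandwidth-bound token/s heuristic is a rule of thumb, not this tool's exact formula):

```python
# Approximate average bits per weight for common GGUF quantization formats.
BITS_PER_WEIGHT = {
    "Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5,
}

def model_size_gb(params_billion: float, quant: str) -> float:
    """Approximate in-memory size of a quantized model, in GB."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billion * 1e9 * bits / 8 / 1e9

def tokens_per_second(size_gb: float, bandwidth_gbps: float) -> float:
    """Decode speed is roughly memory-bandwidth bound: each generated
    token streams the whole model through memory once."""
    return bandwidth_gbps / size_gb

if __name__ == "__main__":
    size = model_size_gb(70, "Q4_K_M")  # roughly 42 GB for a 70B model
    print(f"70B Q4_K_M = about {size:.1f} GB")
    # RTX 4090 memory bandwidth is roughly 1008 GB/s
    print(f"upper bound: about {tokens_per_second(size, 1008):.0f} tok/s")
```

Real throughput lands below this upper bound once compute, KV-cache traffic, and batching overheads are included.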
## Installation

```bash
# Clone the repository
git clone https://github.com/HFerrahoglu/llm-neofetch-plus.git
cd llm-neofetch-plus

# Install dependencies
pip install -r requirements.txt

# Run directly
python llm_neofetch.py

# Or install globally
pip install -e .
llm-neofetch
```

Or install from PyPI:

```bash
pip install llm-neofetch-plus
```
## Usage

```bash
# Normal output (default)
llm-neofetch

# Minimal output
llm-neofetch -d 1

# Detailed output with all features
llm-neofetch -d 3

# Interactive mode (choose detail level)
llm-neofetch -i

# Run disk benchmark (takes ~10 seconds)
llm-neofetch -b

# Export to different formats
llm-neofetch --export report.json   # JSON format
llm-neofetch --export report.yaml   # YAML format
llm-neofetch --export report.md     # Markdown format

# Verbose logging for debugging
llm-neofetch -v

# Custom config file
llm-neofetch --config /path/to/config.yaml

# Combine options
llm-neofetch -d 3 -b --export full_report.json
```

## Example Output

```
╔══════════════════════════════════════════════════╗
║              ⚡ LLM • NEOFETCH ++ ⚡              ║
║     Advanced System Info for Local LLM Usage     ║
║               v1.0.0 • 2026 Edition              ║
╚══════════════════════════════════════════════════╝

────────────────────────────────────────────────────
 System Information
────────────────────────────────────────────────────
 OS         Linux-6.5.0-1-amd64-x86_64-with-glibc2.38
 Kernel     6.5.0 (x86_64)
 Uptime     2d 14h 32m
 Python     3.11.5

────────────────────────────────────────────────────
 CPU
────────────────────────────────────────────────────
 Model      AMD Ryzen 9 7950X 16-Core Processor
 Cores      16 physical / 32 threads
 Frequency  4200 MHz
 Usage      [███████████░░░░░░░░░░░░░░░░░░░] 35.2%

────────────────────────────────────────────────────
 GPU
────────────────────────────────────────────────────
 NVIDIA GeForce RTX 4090
   VRAM: 24.0 GB total
   [█████████████░░░░░░░░░░░░] 12.4/24.0 GB
   Usage [█████░░░░░░░░░░░░░░░] 20.0%
   Temp: 58°C

────────────────────────────────────────────────────
 Personalized Model Recommendations
────────────────────────────────────────────────────
 ▸ Extra Large Models (70-72B)
   • Llama 3.1 70B
   • Qwen2.5 72B
 ▸ Large Models (30-34B)
   • Llama 3.1 33B
   • Qwen2.5 32B
```
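Usage bars like the ones in the output above are straightforward to render from a used/total pair. A minimal sketch (the glyphs, width, and function name are illustrative choices, not necessarily what this tool uses internally):

```python
def progress_bar(used: float, total: float, width: int = 25) -> str:
    """Render a usage bar such as [█████████████░░░░░░░░░░░░]."""
    # Clamp the fraction so out-of-range inputs still render sanely.
    frac = 0.0 if total <= 0 else min(max(used / total, 0.0), 1.0)
    filled = round(frac * width)
    return "[" + "█" * filled + "░" * (width - filled) + "]"

if __name__ == "__main__":
    print(progress_bar(12.4, 24.0), "12.4/24.0 GB")
```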
## Configuration

LLM-Neofetch++ uses a YAML configuration file. By default, it looks for:

- `./config/config.yaml` (in the project directory)
- `~/.config/llm-neofetch/config.yaml`
- `/etc/llm-neofetch/config.yaml`
Example `config.yaml`:

```yaml
# UI Settings
ui:
  box_width: 76
  use_emoji: true
  show_progress_bars: true
  compact_mode: false

# Color Theme
colors:
  primary: "\033[1;34m"   # Blue
  success: "\033[1;32m"   # Green
  warning: "\033[1;33m"   # Yellow
  danger: "\033[1;31m"    # Red

# Performance Thresholds
thresholds:
  vram:
    excellent: 24  # GB
    good: 12
    moderate: 8
```

## Project Structure

```
llm-neofetch-plus/
├── llm_neofetch.py      # Main application
├── src/
│   ├── detectors.py     # Hardware detection modules
│   └── ui.py            # UI rendering and formatting
├── config/
│   └── config.yaml      # Configuration file
├── tests/
│   └── test_all.py      # Unit tests
├── requirements.txt     # Python dependencies
├── setup.py             # Package setup
└── README.md            # This file
```
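The detection modules presumably shell out to vendor tools such as `nvidia-smi`; one way to keep that testable is to separate the query from the parsing. A hedged sketch (the `--query-gpu` and `--format` flags are real nvidia-smi options, but the function names and dict keys here are illustrative, not this project's actual API):

```python
import subprocess

QUERY = "--query-gpu=name,memory.total,memory.used,temperature.gpu"

def read_nvidia_smi() -> str:
    """Run nvidia-smi and return raw CSV output (needs an NVIDIA GPU)."""
    return subprocess.check_output(
        ["nvidia-smi", QUERY, "--format=csv,noheader,nounits"], text=True
    )

def parse_gpu_csv(raw: str) -> list[dict]:
    """Parse one CSV line per GPU into a dict of basic stats (MB, °C)."""
    gpus = []
    for line in raw.strip().splitlines():
        name, total, used, temp = [f.strip() for f in line.split(",")]
        gpus.append({
            "name": name,
            "vram_total_mb": int(total),
            "vram_used_mb": int(used),
            "temp_c": int(temp),
        })
    return gpus
```

Keeping `parse_gpu_csv` pure means the unit tests can feed it canned CSV strings without requiring GPU hardware.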
## Testing

```bash
# Run all tests
python tests/test_all.py

# Run with pytest (if installed)
pytest tests/

# Run with coverage
pytest --cov=src tests/
```

## Contributing

Contributions are welcome! Please:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## Use Cases

- Quickly assess if your hardware can run specific models
- Get token/s estimates before downloading large models
- Understand which quantization format to use
- Optimize your LLM stack configuration
- Monitor system resources for AI workloads
- Export reports for documentation
- Benchmark storage performance for model loading
- Track GPU utilization and temperatures
- Document hardware specs in papers
- Compare performance across different systems
- Generate reproducible system reports
- Share hardware configurations
## Roadmap

- Docker container support
- Web dashboard (optional)
- Historical tracking and graphs
- Cloud GPU detection (AWS, GCP, Azure)
- LLM benchmarking suite
- Automatic model download suggestions
- Integration with popular LLM frameworks
## Acknowledgments

- Built with psutil for cross-platform system info
- Inspired by neofetch
- Community feedback from r/LocalLLaMA
## License

MIT License - see the LICENSE file for details.
If you find this tool useful, please consider giving it a star ⭐
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: fhamz4@proton.me
Made with ❤️ for the Local LLM Community
