Stop manually typing parts into Tayda. Paste your list, subtract your inventory, and print your build docs.
In circuit design, a "Star Ground" is the single reference point where all signal paths converge to eliminate noise.
In manufacturing, this application serves the same function: it is the Single Source of Truth for your component inventory, eliminating the "noise" of disorganized BOMs, duplicate orders, and inventory drift.
Star Ground is a full-stack logistics engine for electronics. It ingests messy component lists (Text, CSV, PDF), normalizes the data, and generates a complete manufacturing bundle.
Try the Live App
A robust Python tool designed to automate the chaotic process of sourcing and assembling electronics.
It aggregates data from inconsistent Bill of Materials (BOM) sources, performs statistical verification to ensure data integrity, and applies "Nerd Economics" (heuristic safety buffering) to generate smart purchasing lists.
v2.0.0 introduces a lossless data structure that tracks component provenance across multiple projects simultaneously, enabling batch-building logistics (e.g., "Build 2x Big Muffs and 1x Tube Screamer").
Beyond parsing, the engine functions as a domain expert system. It utilizes a lookup table of heuristic substitutions to suggest component upgrades based on audio engineering best practices.
- Op-Amps: Detects generic chips and suggests Hi-Fi alternatives (e.g., `TL072` → `OPA2134` for a lower noise floor).
- Fuzz Logic: Automatically detects "Fuzz" topologies and injects Germanium transistors with specific "Positive Ground" warnings.
- Texture Generation: Maps clipping diodes to their sonic equivalents based on Forward Voltage ($V_f$) and Reverse Recovery Time ($t_{rr}$).
Ordering parts for multiple analog circuits is error-prone.
- Format Inconsistency: Every BOM uses different spacing, tabs, and naming conventions.
- Inventory Risk: Buying parts you already own (waste) or forgetting a $0.01 resistor (shipping delay).
- Assembly Chaos: Mixing up parts between three different projects on the same workbench.
This tool treats BOM parsing as a data reduction problem. It doesn't just read lines; it verifies them against a stateful inventory model.
- Multi-Format Ingestion:
- PDF Parsing: Extracts tables from PedalPCB build docs using visual layout analysis (hybrid grid/text strategy).
- Presets Library: Hierarchical selection of standard circuits (e.g., "Parentheses Fuzz - PedalPCB").
- URL Ingestion: Fetch BOMs directly from GitHub or raw text links.
- Inventory Logistics (Net Needs):
- Upload your current stock CSV.
- The engine calculates `Net Need = max(0, BOM_Qty - Stock_Qty)`.
- Safety buffers are only applied to the deficit, preventing over-ordering.
- Manufacturing Outputs:
- Field Manuals: Generates Z-height sorted printable PDF checklists (Resistors → Sockets → Caps) for streamlined assembly.
- Sticker Sheets: Generates Avery 5160 labels with condensed references (e.g., `R1-R4`) for part binning.
- Master Bundle: Downloads a single ZIP containing all source docs, shopping lists, and manual PDFs.
- Smart Normalization:
- Expands ranges automatically (`R1-R5` → `R1, R2, R3, R4, R5`).
- Detects potentiometers by value (`B100k`) even if labeled non-standardly.
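The range expansion can be sketched with a single regex. This is a simplified illustration under the assumption that designators look like `R1-R5`; the names `RANGE_RE` and `expand_refs` are invented, and the production patterns handle many more formats.

```python
import re

# Simplified sketch of designator-range expansion ("R1-R5" or "R1-5").
# Pattern and function names are illustrative, not the app's actual code.
RANGE_RE = re.compile(r"^([A-Za-z]+)(\d+)-\1?(\d+)$")

def expand_refs(ref: str) -> list[str]:
    """Expand 'R1-R5' into ['R1', ..., 'R5']; pass single refs through."""
    m = RANGE_RE.match(ref.strip())
    if not m:
        return [ref.strip()]
    prefix, start, end = m.group(1), int(m.group(2)), int(m.group(3))
    return [f"{prefix}{n}" for n in range(start, end + 1)]
```

The optional backreference (`\1?`) lets the same pattern accept both `R1-R5` and the shorthand `R1-5`.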
This project was built to solve a specific reliability problem. Here is the reasoning behind the architectural choices:
While it might be easier to pass BOMs to an LLM, non-deterministic outputs are unacceptable for procurement. A hallucinated quantity results in a failed hardware build.
- Decision: I implemented a deterministic Regex parser with a "Hybrid Strategy" for PDFs (Table Extraction + Regex Fallback) to ensure 100% repeatability.
v1.0.0 used simple counters. v2.0.0 implements a `PartData` TypedDict that tracks specific references per source.
- The Benefit: We can merge 3 different projects into one master list, but still generate individual "Field Manuals" for each project because the provenance of every `R1` is preserved.
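A minimal sketch of such a provenance-preserving structure, with hypothetical field names (the actual definition in `src/bom_lib.py` may differ):

```python
from typing import TypedDict

# Hypothetical shape of the v2.0.0 lossless structure: instead of a bare
# counter, each part keeps its designators grouped by source project.
class PartData(TypedDict):
    value: str
    qty: int
    refs_by_source: dict[str, list[str]]  # project name -> designators

def merge(a: PartData, b: PartData) -> PartData:
    """Merge two entries for the same value, preserving per-project refs."""
    refs = {**a["refs_by_source"]}
    for source, designators in b["refs_by_source"].items():
        refs[source] = refs.get(source, []) + designators
    return {"value": a["value"], "qty": a["qty"] + b["qty"], "refs_by_source": refs}
```

Because merging never discards designators, the master shopping list and the per-project Field Manuals can both be derived from the same structure.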
To manage the fragility of PDF parsing, the test suite uses Snapshot Testing.
- The Mechanism: The parser runs against a library of "Golden Master" PDFs. The output is compared against stored JSON snapshots. Any deviation in parsing logic triggers a regression failure, ensuring that supporting a new PDF format doesn't break support for older ones.
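The comparison step reduces to a small helper, sketched here under the assumption that parser output is JSON-serializable; `check_snapshot` is an illustrative name, and the real suite parametrizes over the Golden Master PDFs with Pytest.

```python
import json
from pathlib import Path

# Illustrative snapshot check: write the snapshot on first run, then
# compare parser output against it on every subsequent run.
def check_snapshot(result: dict, snapshot_path: Path, update: bool = False) -> bool:
    """Return True if result matches the stored JSON snapshot."""
    if update or not snapshot_path.exists():
        snapshot_path.write_text(json.dumps(result, indent=2, sort_keys=True))
        return True
    expected = json.loads(snapshot_path.read_text())
    return result == expected
```

A failing comparison is treated as a regression until a human confirms the new output is correct and regenerates the snapshot.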
In hardware prototyping, the cost of downtime exceeds the cost of inventory.
- Decision: I implemented a Yield Management Algorithm that adjusts purchase quantities based on component risk vs. cost.
- High Risk / Low Cost: Resistors get a +10 buffer.
- Critical Silicon: ICs get a +1 buffer (socketing protection).
- Low Risk / High Cost: Potentiometers and Switches get a zero buffer.
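Combined with the Net Need formula, these rules can be sketched as follows. The category names and the `order_qty` helper are illustrative; the buffer values come from the list above.

```python
# Buffer table per the rules above (values from the README; the
# category keys and helper name are illustrative, not actual source).
BUFFERS: dict[str, int] = {
    "resistor": 10,      # high risk / low cost: cheap insurance
    "ic": 1,             # critical silicon: one spare for socketing
    "potentiometer": 0,  # low risk / high cost: buy exactly what's needed
    "switch": 0,
}

def order_qty(category: str, bom_qty: int, stock_qty: int) -> int:
    """Buffer only the deficit: owned stock never triggers extra purchases."""
    deficit = max(0, bom_qty - stock_qty)
    return deficit + BUFFERS.get(category, 0) if deficit > 0 else 0
```

Note that a fully-stocked part orders zero, so the buffer never inflates an order for parts you already own.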
- Python 3.13 - Core language (Strictly typed & pinned)
- uv - Ultra-fast dependency management and locking
- Streamlit - Interactive web interface and state management
- pdfplumber - PDF table extraction and layout analysis
- fpdf2 - Programmatic PDF generation
- Docker - Containerized runtime environment
- GitHub Actions - Continuous Integration enforcing strict quality gates:
- Linting: Ruff
- Type Safety: Mypy
- Unit Testing: Pytest
- Delivery: Auto-publishes Docker images to GHCR on release
- Environment: Ubuntu Latest
```
.
├── app.py               <-- Interface: Streamlit Web App
├── assets/              <-- Static assets (images, demos)
├── Dockerfile           <-- Container configuration
├── examples/            <-- Output: Sample generated artifacts
│   └── Pedal_Build_Pack_Complete/
│       ├── Field Manuals/
│       ├── Sticker Sheets/
│       ├── Source Documents/
│       ├── Pedal Shopping List.csv
│       └── My Inventory Updated.csv
├── raw_boms/            <-- Input: Source files for the Presets Library
│   ├── pedalpcb/
│   └── tayda/
├── src/                 <-- Application Core
│   ├── bom_lib.py       <-- Logic: Regex engine & buying rules
│   ├── constants.py     <-- Data: Static lookups and regex patterns
│   ├── exporters.py     <-- Logic: CSV/Excel generation
│   ├── feedback.py      <-- Logic: Google Sheets API integration
│   ├── pdf_generator.py <-- Output: Field Manuals & Sticker Sheets
│   └── presets.py       <-- Data: Library of known pedal circuits
├── tests/               <-- QA Suite
│   ├── samples/         <-- Real-world PDF/Text inputs for regression
│   └── snapshots/       <-- Golden Master JSONs for PDF testing
├── tools/               <-- Developer Utilities
│   └── generate_presets.py
├── CONTRIBUTING.md      <-- Dev guide
├── ROADMAP.md           <-- Technical architectural plans
├── pyproject.toml       <-- Project metadata & tool config (Ruff/Mypy/Pytest)
├── uv.lock              <-- Exact dependency tree (Deterministic builds)
└── requirements.txt     <-- Python dependencies
```
You can pull the pre-built image directly from the GitHub Container Registry without building it yourself.
```shell
# Run latest stable release
docker run -p 8501:8501 ghcr.io/jacksonfergusondev/star-ground:latest
```

Or build from source:

```shell
docker build -t star-ground .
docker run -p 8501:8501 star-ground
```

This project uses uv for dependency management.
```shell
# 1. Clone & Enter
git clone https://github.com/JacksonFergusonDev/star-ground.git
cd star-ground

# 2. Install Dependencies (Creates virtualenv automatically)
uv sync

# 3. Run App
uv run streamlit run app.py
```

We are aggressively moving from a simple regex script to a context-aware physics engine.
Key Upcoming Initiatives:
- Architecture: Migrating to a Strategy Pattern and Context-Free Grammars for parsing.
- Intelligence: Topology inference (detecting "Fuzz" vs "Delay" circuits based on component clusters).
- Finance: Real-time pricing integration (Octopart/DigiKey) and volume arbitrage.
For the detailed technical breakdown and milestones, see ROADMAP.md.
We welcome contributions! Please see CONTRIBUTING.md for details on how to set up the dev environment, run the snapshot tests, and submit PRs.
Jackson Ferguson
- GitHub: @JacksonFergusonDev
- LinkedIn: Jackson Ferguson
- Email: jackson.ferguson0@gmail.com
This project is licensed under the MIT License - see the LICENSE file for details.
