Enterprise AI platform for manufacturing defect detection and real-time quality inspection — built to run on the factory floor with evidence, traceability, and auditable decisions.
| App | URL | Platform |
|---|---|---|
| Marketing | https://intelfactor.ai | AWS Amplify |
| Dashboard | https://app.intelfactor.ai | AWS Amplify |
| API | https://api.intelfactor.ai | AWS Lambda |
- Setup Wizard: https://app.intelfactor.ai/edge/setup — Upload SOP → extract rules → configure thresholds → deploy to edge (demo-safe).
- Live Stream: https://app.intelfactor.ai/edge/live — View edge video stream (or demo placeholder if device unreachable).
- Ask Agent: https://app.intelfactor.ai/edge/ask — Ask about defect counts, evidence, and SOP traceability (best-effort explanations).
Demo mode is designed to never block the UI. If backend/device calls fail, the experience falls back gracefully to simulation.
IMR is a GitOps-style versioning + deployment system for edge AI models.
When a Jetson detects a defect, auditors and QA teams need to answer: "Which exact model version made this decision?"
IMR provides:
- Immutable model versions — bundle: model + policy + thresholds + runtime config
- Server-computed SHA256 verification — never trust client hashes
- Desired/reported state reconciliation — cloud sets desired, device reports actual
- Atomic edge updates + rollback — symlink swap for zero-downtime deploys
- Evidence stamps — hash metadata attached to every defect record (audit trail)
```
DRAFT ──────▶ APPROVED ──────▶ ACTIVE ──────▶ RETIRED
  │               │               │
  │ Upload        │ Server        │ Set desired
  │ artifacts     │ computes      │ state for
  │ to S3         │ hashes        │ devices
  ▼               ▼               ▼
Editable       Immutable       Devices
               Locked          pull updates
```
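The atomic update step on the device side can be sketched as a symlink swap. The helper below is illustrative (the real updater lives under `backend/apps/edge`), assuming the `active → bundles/<inspection>/<version>` layout:

```python
import os

def activate_bundle(root: str, bundle_rel: str, link_name: str = "active") -> None:
    """Atomically repoint <root>/<link_name> at a new bundle directory.

    The new symlink is created under a temporary name in the same
    directory, then moved over the old one with os.replace(), which is
    atomic on POSIX: readers always see either the old or the new
    target, never a missing link (zero-downtime swap).
    """
    link_path = os.path.join(root, link_name)
    tmp_path = link_path + ".tmp"
    if os.path.lexists(tmp_path):        # clean up a stale temp link
        os.unlink(tmp_path)
    os.symlink(bundle_rel, tmp_path)     # e.g. points at bundles/x/1.1.0
    os.replace(tmp_path, link_path)      # atomic swap (rename(2))
```

Rollback is the same operation pointed back at the previous version's directory.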
Every defect record includes:
```json
{
  "defect_id": "def-12345",
  "decision": "FAIL",
  "confidence": 0.92,
  "defect_type": "chip",
  "evidence_stamp": {
    "inspection_id": "blade-inspect-001",
    "version": "1.1.0",
    "manifest_hash": "0b0fc8d9...",
    "engine_hash": "ac27a1f3..."
  }
}
```

This enables complete traceability: any defect can be traced back to the exact model version that detected it.
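Checking an artifact against its evidence stamp reduces to a SHA-256 comparison. The helper below is a sketch; since the stamps shown above are truncated with "...", it accepts a hash prefix:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256; model .engine files can be large."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_stamp(path: str, stamped: str) -> bool:
    """True if the file's hash matches the stamp (full or '...'-truncated)."""
    return sha256_of(path).startswith(stamped.rstrip("."))
```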
Docs: docs/IMR_SMOKE_TEST.md
```
IntelFactor/
├── components/              # Marketing site React components
├── src/                     # Marketing app entry + shared UI
├── pages/                   # App routes (Vite/React)
├── backend/
│   ├── apps/
│   │   ├── api/             # FastAPI server (Lambda/container)
│   │   ├── edge/            # Jetson edge runtime (DeepStream/YOLO)
│   │   ├── web/             # Dashboard app (React)
│   │   └── stream_writer/   # Kafka → DynamoDB consumer
│   ├── packages/
│   │   ├── policy/          # Guardrails (PASS/FAIL/REVIEW)
│   │   ├── streaming/       # Kafka producer/consumer
│   │   └── imr/             # Inspection Model Registry
│   ├── prompts/             # Versioned prompts
│   └── scripts/             # Setup & automation scripts
├── infra/                   # Terraform IaC
└── docs/                    # Documentation (see docs/README.md)
```
- docs/README.md — documentation index
- CLAUDE.md — development guidelines for AI assistants
- backend/CLAUDE.md — backend-specific context
- docs/DEPLOY_PRODUCTION.md — production deployment guide
- docs/OAUTH_SETUP.md — Cognito / OAuth configuration
- docs/CONFLUENT_SETUP.md — Confluent Kafka setup and operations
- docs/KVS_LIVESTREAM_SETUP.md — KVS live streaming setup and troubleshooting
- docs/IMR_SMOKE_TEST.md — IMR guide
- infra/README.md — infrastructure architecture
```bash
# Marketing site
npm install
npm run dev
# http://localhost:3000
```

```bash
# Dashboard
cd backend/apps/web
npm install
npm run dev
# http://localhost:5174
```

```bash
# API
cd backend
pip install -r apps/api/requirements.txt
uvicorn apps.api.main:app --reload --port 8000
# http://localhost:8000/health
```

```bash
# Kafka stream writer
cd backend
pip install "confluent-kafka[avro]>=2.3.0"
make kafka-smoke-test   # Verify connectivity
make run-writer         # Start consumer
```

| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Health check |
| POST | /api/v1/sop/parse | Parse SOP + extract inspection rules |
| POST | /api/v1/agent/deploy | Deploy agent config to edge device |
| POST | /api/v1/agent/ask | Ask agent about current line / defects |
| POST | /api/v1/inspect | Submit image for inspection |
| GET | /api/v1/metrics | Inspection metrics |
| GET | /api/v1/models | List available models |
| GET | /api/v1/events | Inspection events stream |
| GET | /api/v1/kvs/stream/hls-url | Get KVS HLS streaming URL |
Frontend demo mode falls back gracefully if these are unreachable.
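A minimal client call against `/api/v1/inspect` might look like the sketch below. The payload field names (`line_id`, `image_b64`) are assumptions for illustration, not the documented schema; check the FastAPI source for the real request model:

```python
import base64
import json
from urllib import request

API_BASE = "https://api.intelfactor.ai"

def build_inspect_payload(image_bytes: bytes, line_id: str = "line-1") -> dict:
    """Assumed request body: base64-encoded image plus a line identifier."""
    return {
        "line_id": line_id,
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    }

def submit_inspection(image_bytes: bytes, line_id: str = "line-1") -> dict:
    """POST the image to /api/v1/inspect and return the decoded JSON decision."""
    body = json.dumps(build_inspect_payload(image_bytes, line_id)).encode()
    req = request.Request(
        f"{API_BASE}/api/v1/inspect",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```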
| Model | Where | Use Case |
|---|---|---|
| YOLO (TensorRT) | Edge | Real-time detection (15-25ms) |
| Multimodal LLM (Azure/Bedrock) | Cloud | Evidence explanation, SOP Q&A |
```
Edge (Jetson)                     Cloud
┌──────────────┐            ┌──────────────────────────────────┐
│  TensorRT    │            │          Confluent Cloud         │
│  Inference   │──Kafka────▶│  ┌─────────┐    ┌─────────────┐  │
│              │            │  │ Topics  │───▶│  Flink SQL  │  │
│  SQLite      │            │  └─────────┘    └─────────────┘  │
│  Buffer      │  (offline) │       │               │          │
└──────────────┘            │       ▼               ▼          │
                            │  ┌─────────┐    ┌─────────────┐  │
                            │  │DynamoDB │    │   Alerts    │  │
                            │  └─────────┘    └─────────────┘  │
                            └──────────────────────────────────┘
```
Topics:
| Topic | Purpose |
|---|---|
| if.inspection.events.v1 | Edge inspection events |
| if.metrics.aggregated.v1 | Flink-computed metrics |
| if.alerts.equipment.v1 | Defect spike alerts |
| if.device.config.v1 | Cloud → edge config |
Commands:
```bash
cd backend
make kafka-setup          # Create topics + smoke test
make kafka-smoke-test     # Verify connectivity
make kafka-healthcheck    # Full health check
make run-writer           # Start Kafka → DynamoDB consumer
```

Real-time video streaming from Jetson edge devices via AWS Kinesis Video Streams (KVS).

```
Jetson (GStreamer) ──▶ AWS KVS ──▶ HLS ──▶ Dashboard (HLS.js)
```
Endpoints:
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/v1/kvs/stream/hls-url?stream_name=X | Get HLS URL (query param) |
| GET | /api/v1/kvs/stream/live/{stream_name} | Get HLS URL (header-based) |
Allowed streams: intelfactor-line-1, intelfactor-line-2, intelfactor-line-3, intelfactor-factory-cam
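Server-side, fetching an HLS playback URL from KVS follows boto3's two-step pattern (resolve the data endpoint, then request a session URL). The allow-list mirrors the streams above; the region and expiry values are assumptions:

```python
ALLOWED_STREAMS = {
    "intelfactor-line-1",
    "intelfactor-line-2",
    "intelfactor-line-3",
    "intelfactor-factory-cam",
}

def is_allowed(stream_name: str) -> bool:
    """Reject any stream name not on the allow-list before touching AWS."""
    return stream_name in ALLOWED_STREAMS

def get_hls_url(stream_name: str, region: str = "us-west-2") -> str:
    """Resolve a live HLS session URL for an allowed KVS stream."""
    import boto3  # imported lazily; assumed available on the API host

    if not is_allowed(stream_name):
        raise ValueError(f"stream not allowed: {stream_name}")
    kvs = boto3.client("kinesisvideo", region_name=region)
    endpoint = kvs.get_data_endpoint(
        StreamName=stream_name,
        APIName="GET_HLS_STREAMING_SESSION_URL",
    )["DataEndpoint"]
    media = boto3.client(
        "kinesis-video-archived-media",
        endpoint_url=endpoint,
        region_name=region,
    )
    return media.get_hls_streaming_session_url(
        StreamName=stream_name, PlaybackMode="LIVE", Expires=300
    )["HLSStreamingSessionURL"]
```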
Frontend routes:
- /edge/live — Full livestream page with demo fallback
- /edge/live-stream — Clean KVS player view
Docs: docs/KVS_LIVESTREAM_SETUP.md
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/imr/inspections/{id}/versions | Create new version (draft) |
| POST | /api/v1/imr/inspections/{id}/versions/{v}/presign | Get S3 upload URLs |
| POST | /api/v1/imr/inspections/{id}/versions/{v}/approve | Approve + compute hashes |
| POST | /api/v1/imr/inspections/{id}/versions/{v}/activate | Set as desired for devices |
| GET | /api/v1/imr/devices/{id}/desired | Get desired state + download URLs |
| POST | /api/v1/imr/devices/{id}/report | Report running state + hashes |
| GET | /api/v1/imr/inspections/{id}/audit | Full audit trail |
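The reconciliation loop on a device reduces to two calls: read the desired state, then report what is actually running. The sketch below uses stdlib urllib; the report field names (`version`, `manifest_hash`, `engine_hash`) are assumptions, and authentication is omitted:

```python
import json
from urllib import request

API_BASE = "https://api.intelfactor.ai"

def fetch_desired(device_id: str) -> dict:
    """GET /api/v1/imr/devices/{id}/desired: desired version + download URLs."""
    url = f"{API_BASE}/api/v1/imr/devices/{device_id}/desired"
    with request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def build_report(version: str, manifest_hash: str, engine_hash: str) -> dict:
    """Assumed body for POST /api/v1/imr/devices/{id}/report."""
    return {
        "version": version,
        "manifest_hash": manifest_hash,
        "engine_hash": engine_hash,
    }

def send_report(device_id: str, report: dict) -> int:
    """POST the running state back to the cloud; returns the HTTP status."""
    req = request.Request(
        f"{API_BASE}/api/v1/imr/devices/{device_id}/report",
        data=json.dumps(report).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=10) as resp:
        return resp.status
```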
The IMR updater service runs as a systemd daemon on Jetson edge devices, automatically polling for and deploying new inspection model versions.
```bash
# 1. Clone repository to Jetson
git clone https://github.com/tonesgainz/intelbase.git /opt/intelfactor

# 2. Set ownership
sudo chown -R $USER:$USER /opt/intelfactor

# 3. Create environment file (see docs/IMR_SMOKE_TEST.md for values)
nano /opt/intelfactor/.env.imr

# 4. Create bundles directory
mkdir -p /opt/intelfactor/bundles

# 5. Install Python dependencies
cd /opt/intelfactor/backend
pip3 install requests

# 6. Install and start systemd service
sudo cp /opt/intelfactor/backend/scripts/intelfactor-imr.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable intelfactor-imr
sudo systemctl start intelfactor-imr

# 7. Verify
systemctl status intelfactor-imr
sudo journalctl -u intelfactor-imr -f
```

Bundle Structure:
```
/opt/intelfactor/
├── active → bundles/blade-inspect-001/1.0.0   (symlink)
├── bundles/
│   └── blade-inspect-001/
│       └── 1.0.0/
│           ├── manifest.json
│           ├── model.engine
│           ├── classes.json
│           ├── thresholds.json
│           ├── policy.yaml
│           └── runtime.json
└── .env.imr
```
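Consumers of the bundle only ever dereference the active symlink, so reading the currently deployed manifest is a single path join. A minimal sketch, assuming the layout above:

```python
import json
import os

def load_active_manifest(root: str = "/opt/intelfactor") -> dict:
    """Read manifest.json from whichever bundle 'active' points at."""
    with open(os.path.join(root, "active", "manifest.json")) as f:
        return json.load(f)
```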
If Amplify fails with lock file errors, regenerate the lockfile:
```bash
rm -rf node_modules package-lock.json
npm install
git add package-lock.json
git commit -m "fix: regenerate package-lock.json"
git push
```

If needed, add dependency overrides to package.json:

```json
"overrides": {
  "yaml": "^2.4.2"
}
```

If the IMR updater misbehaves on the device:

```bash
# Check service status
systemctl status intelfactor-imr
sudo journalctl -u intelfactor-imr -n 50

# Check connectivity
curl -v https://api.intelfactor.ai/health

# Manual test
cd /opt/intelfactor/backend
PYTHONPATH=/opt/intelfactor/backend python3 -m apps.edge.tools.imr_updater
```

Proprietary — IntelFactor AI Ltd. See LICENSE for details.