BioID proteomics analysis platform (currently a Phase 5 release candidate). It supports four core chart APIs plus interactive front-end display, with PNG/SVG/CSV downloads and config_rev traceability.
Completed:
- Chart APIs: PCA, Correlation Heatmap, Volcano, GO/KEGG Enrichment
- Front-end interaction: group filtering, threshold sliders, TopN switching
- Download capability: PNG/SVG/CSV
- Artifact traceability: download responses or payloads include `config_rev` and artifact metadata
- Regression tests + performance test summary
Current runnable stack in this repository (docker-compose):
- frontend: React + Vite
- api: Node.js + Express (build source: `./services/api`)
- postgres: config and upload metadata
- redis: queue/cache (reserved)
- r-engine: R runtime (default `limma` stack; the `clusterProfiler` enrichment stack is an optional install)
- minio: artifact storage (reserved)
- minio-init: automatically creates `UPLOAD_BLOB_BUCKET` on startup
```
cd /Users/zhui/Desktop/cs/cloud-fullstack-docker
docker compose up -d --build
```

Access:
- Frontend: http://localhost:5173
- Backend health: http://localhost:4000/api/health
Integration smoke test: `scripts/compose-smoke.sh`. Current smoke coverage:
- `/api/health`
- `/api/analysis` + artifact download
- `POST /api/session` + `POST /api/upload`
- `GET /api/upload/:id` + `/mapped-rows` pagination
- `GET /api/session/:id/uploads` session upload list
- `DELETE /api/upload/:id` single delete with read-back verification
- `DELETE /api/session/:id/uploads` bulk delete with read-back verification
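The first two smoke checks can also be run by hand. The sketch below (hypothetical helper name `smoke_min`) only checks that both endpoints respond; `scripts/compose-smoke.sh` performs stricter assertions.

```shell
# Minimal manual stand-in for the first two smoke checks (health + analysis).
# Only a sketch; scripts/compose-smoke.sh asserts response contents as well.
smoke_min() {
  curl -sf http://localhost:4000/api/health >/dev/null || return 1
  curl -sf "http://localhost:4000/api/analysis?config_rev=rev-0005" >/dev/null || return 1
  echo "minimal smoke: PASS"
}
```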
If you run services/api bare on the local machine (without postgres), you can skip the health check:

```
SKIP_HEALTH=1 scripts/compose-smoke.sh
```

To validate only the compose config/build (without exercising the API chain):

```
SKIP_API=1 scripts/compose-smoke.sh
```

To run only the runtime regression (skipping image rebuilds):

```
SKIP_BUILD=1 scripts/compose-smoke.sh
```

To relax the upload storage-mode assertion (default is blob):

```
EXPECT_STORAGE_MODE=db scripts/compose-smoke.sh
```

To enable the heavy r-engine enrichment dependencies (clusterProfiler + org.Hs.eg.db):

```
export R_ENGINE_INSTALL_ENRICHMENT_PACKAGES=1
docker compose up -d --build r-engine
```

For a first trial run, start with the minimal loop: verify service health + the API chain first:
```
cd /Users/zhui/Desktop/cs/cloud-fullstack-docker
# 1) First bring-up (requires a build)
docker compose up -d --build
# 2) Check that all six services are healthy
docker compose --env-file .env.example ps
# 3) Runtime-chain smoke test (avoids a repeat build)
SKIP_BUILD=1 scripts/compose-smoke.sh
```

If step 3 passes, you can move on to the next round of feature development (upload parsing -> preprocessing -> differential analysis -> plot download).
`GET /api/analysis?config_rev=rev-0005`

Returns:
- `views.pca.data`, `views.correlation.data`, `views.volcano.data`, `views.enrichment.data`
- each view includes `downloads.csv/svg/png` and `artifact_meta`

- `GET /api/artifacts/:id/download`: downloads CSV or SVG; the response headers include `X-Artifact-Meta`
- `GET /api/artifacts/:id/png`: returns a PNG render payload (SVG + metadata), which the front end converts into a PNG download
- `GET /api/artifacts/:id/meta`: queries artifact metadata
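As a sketch of header-based traceability, a small helper (hypothetical name `artifact_meta`) could pull just the `X-Artifact-Meta` header from a download response; the endpoint and header name follow the description above.

```shell
# Hypothetical helper: fetch only the X-Artifact-Meta header of an
# artifact download, stripping the trailing CR from the HTTP header line.
artifact_meta() {
  curl -sI "http://localhost:4000/api/artifacts/$1/download" \
    | grep -i '^x-artifact-meta' \
    | tr -d '\r'
}
```

Usage: `artifact_meta <artifact-id>` prints a line like `X-Artifact-Meta: {...}`.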
- `POST /api/session`: creates a session and returns `sessionId`
- `POST /api/upload`: uploads a results table and auto-detects `FragPipe / DIA-NN / MaxQuant` and `protein/peptide`
  - Request format: `multipart/form-data`
  - Fields: `file` + `sessionId` (`session_id` also accepted)
  - The response includes a summary: `sampleCount`, `entityCount`, `availableColumns`, `warnings`
  - Full `mappedRows` are persisted to back-end blob storage by default (local-file implementation); the response returns `storage.mode/key`
- `GET /api/upload/:id`: reads the upload parse details (summary + preview + row count of the persisted unified schema)
- `GET /api/upload/:id/mapped-rows?limit=200&offset=0`: paginated read of the full normalized rows
- `DELETE /api/upload/:id`: deletes the upload record and cleans up the associated blob (on blob failure it returns a warning and still deletes the DB record)
- `GET /api/session/:id/uploads?limit=50&offset=0`: paginated list of a session's upload history
- `DELETE /api/session/:id/uploads`: bulk-deletes all uploads in the session (including blob cleanup)
- `POST /api/analysis/run`: triggers an async analysis run from `sessionId + uploadId`
  - Supports `engine/de/enrichment/sampleGroups/config_tag`
  - Returns `runId` and `statusUrl`
- `GET /api/analysis/run/:runId`: polls the run status (`running/succeeded/failed`)
  - The terminal state returns `result.views` (pca/correlation/volcano/enrichment)
  - Each view includes `downloads.csv/svg/png/meta`
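A client-side polling loop over the run status might look like the following sketch. It assumes the payload contains a `"status":"running|succeeded|failed"` field as described above; extracting it with `sed` is illustrative only.

```shell
# Sketch: poll an analysis run until it leaves the "running" state,
# then print the terminal status (succeeded/failed).
poll_run() {
  run_id="$1"
  status="running"
  while [ "$status" = "running" ]; do
    status=$(curl -s "http://localhost:4000/api/analysis/run/${run_id}" \
      | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p')
    [ "$status" = "running" ] && sleep 2
  done
  echo "$status"
}
```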
Examples:

```
curl -X POST http://localhost:4000/api/session \
  -H 'content-type: application/json' \
  -d '{"name":"demo-round2"}'
curl -X POST http://localhost:4000/api/upload \
  -F "sessionId=<your-session-id>" \
  -F "file=@services/api/samples/fragpipe-protein.tsv"
curl http://localhost:4000/api/upload/<upload-id>
curl "http://localhost:4000/api/upload/<upload-id>/mapped-rows?limit=100&offset=0"
curl "http://localhost:4000/api/session/<session-id>/uploads?limit=20&offset=0"
curl -X DELETE "http://localhost:4000/api/upload/<upload-id>"
curl -X DELETE "http://localhost:4000/api/session/<session-id>/uploads"
```

Blob storage switch (MinIO/S3):
```
export UPLOAD_BLOB_BACKEND=s3
export UPLOAD_BLOB_BUCKET=cloud-fullstack-docker-artifacts
export UPLOAD_BLOB_ENDPOINT=http://localhost:9000
export UPLOAD_BLOB_REGION=us-east-1
export UPLOAD_BLOB_ACCESS_KEY_ID=minioadmin
export UPLOAD_BLOB_SECRET_ACCESS_KEY=minioadmin123
export UPLOAD_BLOB_FORCE_PATH_STYLE=true
export UPLOAD_BLOB_POLICY=private
```

Once enabled, `POST /api/upload` returns `storage.mode=blob` and `mappedRows` are read and written through object storage.
If you don't use object storage, you can switch back to local-file mode:

```
export UPLOAD_BLOB_BACKEND=fs
export UPLOAD_BLOB_DIR=./services/api/reports
```

- Under the same `config_rev`, analysis output is deterministic
- Download filenames and artifact metadata are bound to `config_rev`
- `POST /api/config` and `GET /api/config/:session_id` provide `config_hash` and `reproducibility_token`
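To sanity-check reproducibility from the command line, one could compare `config_hash` across two reads of the same session. The helper name and `sed` extraction below are illustrative; only the endpoint and field name come from this document.

```shell
# Illustrative helper: extract config_hash from GET /api/config/:session_id.
config_hash() {
  curl -s "http://localhost:4000/api/config/$1" \
    | sed -n 's/.*"config_hash" *: *"\([^"]*\)".*/\1/p'
}

# Two reads of the same session should agree:
#   [ "$(config_hash <session-id>)" = "$(config_hash <session-id>)" ]
```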
Back-end tests:

```
cd services/api
npm test
npm run test:summary
```

`npm run test:summary` executes:
- Regression tests: `scripts/regression.js`
- Performance tests: `scripts/performance.js`
- Summary generation: `docs/round5-test-summary.md`
Front-end build verification:

```
cd services/frontend
npm install
npm run build
```

CI (GitHub Actions):
- Workflow: `.github/workflows/compose-smoke.yml`
- What it runs:
  - `cd services/api && npm test`
  - `cd services/frontend && npm run build`
  - `docker buildx` build of `r-engine` (cache-from/cache-to: `type=gha`)
  - `SKIP_API=1 SKIP_BUILD=1 scripts/compose-smoke.sh` (config smoke test)
- Workflow: `.github/workflows/full-smoke-nightly.yml`
- What it runs:
  - A full-stack compose bring-up daily at 03:00 UTC (and on manual trigger)
  - Retries `compose up -d --build` via `scripts/compose-up-retry.sh`
  - `SKIP_BUILD=1 scripts/compose-smoke.sh` (full runtime API chain)
  - Collects and uploads a `compose-logs` artifact (ps/logs/images/config/events) via `scripts/collect-compose-logs.sh`
  - Oversized logs are trimmed automatically (`services`/`events` thresholds are configurable separately), with sizes and SHA256 recorded in `manifest.txt`
  - The job summary automatically includes a `manifest` digest, so issues can be located directly on the Actions page
  - Manual triggers can set `install_enrichment=1` to enable the `clusterProfiler`/`org.Hs.eg.db` build
- Workflow: `.github/workflows/full-smoke-enrichment-weekly.yml`
- What it runs:
  - The enrichment-stack full-smoke weekly at 04:00 UTC on Sundays (with `R_ENGINE_INSTALL_ENRICHMENT_PACKAGES=1` fixed)
  - Pre-step: `cd services/api && npm ci && npm test`
  - Pre-step: `cd services/frontend && npm ci && npm run build`
  - Also uses `scripts/compose-up-retry.sh` + `scripts/collect-compose-logs.sh`; the artifact is named `compose-logs-enrichment`
  - Like the nightly run, enables workflow `concurrency` + job summary to avoid concurrent contention and improve readability
From `docs/round5-test-summary.md` (latest run, 2026-02-21):
- Regression: PASS
- Analysis API latency (local loopback): p50=2.64ms, p95=4.11ms, max=7.02ms
- Download API latency (local loopback): p50=0.46ms, p95=0.90ms, max=1.13ms
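The p50/p95 figures above are produced by `scripts/performance.js`; for a quick, independent sanity check of raw timing samples, a nearest-rank percentile helper like the following works (illustrative only, not the project's tooling).

```shell
# Illustrative nearest-rank percentile over newline-separated numbers
# on stdin. Not the project's scripts/performance.js.
percentile() {
  sort -n | awk -v p="$1" '
    { v[NR] = $1 }
    END {
      idx = int((NR * p + 99) / 100)  # nearest-rank (ceiling), 1-based
      if (idx < 1) idx = 1
      print v[idx]
    }'
}
```

Usage: `printf '%s\n' 2.64 2.70 4.11 7.02 | percentile 95`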
Release gate execution results: see `docs/round5-release-checklist.md`.
Latest gate status (2026-02-21): Deployable PASS / Reproducible PASS / Rollback-ready PARTIAL
- `docker compose up -d --build` succeeds; `frontend/api/postgres/redis/r-engine/minio` are all healthy
- `GET /api/health` returns `ok=true`
- All four front-end charts render and the interactive controls work
- All four charts can be downloaded as PNG/SVG/CSV
- `config_rev` is traceable in download filenames and response metadata
- Repeated requests under the same `config_rev` return identical core data
- `npm test` passes in full
- `npm run test:summary` generates the latest `docs/round5-test-summary.md`
- `config_hash`/`reproducibility_token` from `POST /api/config` + `GET /api/config/:session_id` are consistent
- Tag a Git release before shipping (e.g. `release/round5`)
- Keep the previous stable image tags (`frontend/api`)
- Database changes are reversible (Round 5 adds no destructive migrations)
- Rollback drill:
  - Switch back to the previous tag
  - `docker compose up -d --build`
  - Verify `/api/health` and the key pages
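The drill above could be scripted roughly as follows. The tag name and health endpoint follow this document; the helper name and the `ok=true` assertion shape are illustrative.

```shell
# Sketch of the rollback drill: switch tags, rebuild, verify health.
rollback_to() {
  tag="$1"
  git checkout "$tag" || return 1
  docker compose up -d --build || return 1
  curl -sf http://localhost:4000/api/health | grep -q '"ok" *: *true'
}

# e.g. rollback_to release/round5
```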
Review and version-control scripts:
- `scripts/review-gate.sh <round-name>`: runs the local review gate (doc sync, compose config, script syntax).
- `scripts/progress-monitor.sh [output-file]`: generates a snapshot of current development progress (change size, test status, gates, smoke tests, blockers), and also outputs a JSON version plus a diff against the previous snapshot.
- `scripts/vc-snapshot.sh "<commit-message>"`: creates a snapshot commit and tags it automatically.
- `scripts/vc-rollback.sh <commit-or-tag>`: creates a safe rollback branch (never rewrites `main` directly).

Examples:

```
cd /Users/zhui/Desktop/cs/cloud-fullstack-docker
scripts/review-gate.sh round-2
scripts/progress-monitor.sh
scripts/vc-snapshot.sh "feat: round-2 upload parser"
scripts/vc-rollback.sh baseline-v1
```

Default outputs:
- `docs/PROGRESS_STATUS.md`
- `docs/PROGRESS_STATUS.json`
- `docs/PROGRESS_TREND.md`
- `docs/PROGRESS_METRICS.prom`
- `docs/PROGRESS_METRICS.json`
Execution can be controlled via environment variables:

```
# Skip tests; only check the working tree + gates + runtime state
RUN_TESTS=0 scripts/progress-monitor.sh
# Skip the compose smoke test
RUN_SMOKE=0 scripts/progress-monitor.sh
# Set the JSON output path
JSON_OUTPUT_FILE=docs/progress/latest.json scripts/progress-monitor.sh
# Set the previous JSON used for diffing
PREV_JSON_FILE=docs/progress/previous.json scripts/progress-monitor.sh
# Strict mode: return non-zero when blockers exist (usable as a CI gate)
STRICT_MODE=1 scripts/progress-monitor.sh
# History archiving and trend output
SAVE_HISTORY=1 HISTORY_DIR=docs/progress_history TREND_OUTPUT_FILE=docs/PROGRESS_TREND.md scripts/progress-monitor.sh
# Keep 14 days of history and set the blocker-hotspot TopN
HISTORY_RETENTION_DAYS=14 BLOCKER_TOP_N=10 scripts/progress-monitor.sh
# Keep at most 200 history snapshots (rolled by timestamp)
HISTORY_RETENTION_COUNT=200 scripts/progress-monitor.sh
# Set the Prometheus metrics output file
METRICS_OUTPUT_FILE=docs/PROGRESS_METRICS.prom scripts/progress-monitor.sh
# Set the JSON metrics output file
METRICS_JSON_FILE=docs/PROGRESS_METRICS.json scripts/progress-monitor.sh
```

CI scheduled snapshots: `.github/workflows/progress-monitor.yml` (supports `workflow_dispatch` and a nightly cron).
See `docs/version-control.md` for details.
- FragPipe-Analyst: https://fragpipe-analyst.nesvilab.org/