feat(data-model): register 8 discovery layers (L122-L129) for Domain 13 #68

Open

Copilot wants to merge 3 commits into main from copilot/register-new-layers-in-api

Conversation


Copilot AI commented Mar 13, 2026

Adds Domain 13 (Discovery & Sense-Making) with 8 new layers (L122-L129) to the EVA Data Model API. Also fixes a pre-existing test failure where test_T03_layers_count_is_51 asserted 51 layers against an actual count of 111.

New layers

| ID   | Layer name                                    |
|------|-----------------------------------------------|
| L122 | discovery_contexts (start_here for Domain 13) |
| L123 | discovery_signals                             |
| L124 | discovery_patterns                            |
| L125 | discovery_insights                            |
| L126 | sense_making_models                           |
| L127 | discovery_outcomes                            |
| L128 | discovery_actions                             |
| L129 | discovery_knowledge_base                      |

Registration touchpoints

  • model/discovery_*.json, model/sense_making_models.json — new data files (force-tracked; model/*.json is gitignored by default)
  • api/routers/admin.py — 8 entries added to _LAYER_FILES under a # Domain 13 section
  • model/layer-metadata-index.json — 8 layer entries appended; total_layers 111→119, operational_layers 87→95
  • api/server.py — discovery_sense_making domain added to ontology_domains; inline count strings updated (12→13 domains, 111→119 layers)
  • scripts/assemble-model.ps1 — 8 layers added to both $layers and $assembled; total_layers 41→49

Test fix

test_T03_layers_count_is_51 was stale (51 expected vs 111 actual). Renamed to test_T03_layers_count_is_119 and updated the assertion to match the post-registration count of 119.

Evidence pack

evidence/phase-a/ created with schema files (Unit 1), validation results (Units 2–9), and a manifest (phase-a-registration-20260313.json).

Original prompt

Mission: Register 8 new layers (L122-L129) in Project 37 Data Model API

Discovery Context:

  • Spec: c:\eva-foundry\37-data-model\docs\D3PDCA-DATA-MODEL-SPEC.md (Section 4)
  • Current: 115 layers (87 base + 28 execution), 12 domains
  • Target: 123 layers (add Domain 13: Discovery & Sense-Making)
  • API: http://localhost:8010 (must be running)
  • Evidence: c:\eva-foundry\37-data-model\evidence\phase-a\

10 Sequential Units (EXECUTE IN ORDER):

UNIT 1: Read Spec (30 min)
Input: D3PDCA-DATA-MODEL-SPEC.md Section 4
Action: Extract 8 JSON schemas (L122-L129)
Output: 8 schema files saved to evidence/phase-a/schemas/L{layer_id}-{layer_name}.json
Verification: 8 files exist, each contains valid JSON object with "layer" field

UNIT 2: Validate Schemas (30 min)
Input: 8 schema files from Unit 1
Action: Run ConvertFrom-Json on each, check FK references exist in API
Output: evidence/phase-a/unit-2-schema-validation.log (PASS/FAIL per layer)
Verification: All 8 schemas PASS (no JSON errors, all FK layers exist)
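A Python equivalent of the Unit 2 check might look like this (the mission itself uses PowerShell's ConvertFrom-Json). The `*_layer` FK naming convention is an assumption; the spec's real FK encoding is not shown here:

```python
# Sketch of Unit 2: validate JSON, require a "layer" field, and check that
# any FK-style field references a known layer. FK convention is assumed.
import json
from pathlib import Path

def validate_schema(path: Path, known_layers: set) -> str:
    try:
        schema = json.loads(path.read_text(encoding="utf-8"))
    except json.JSONDecodeError as exc:
        return f"FAIL: invalid JSON ({exc})"
    if "layer" not in schema:
        return 'FAIL: missing required "layer" field'
    for key, value in schema.items():
        # hypothetical convention: fields ending in "_layer" are FK references
        if key.endswith("_layer") and value not in known_layers:
            return f"FAIL: FK target {value!r} not registered"
    return "PASS"
```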

UNIT 3: Register L122-L125 (30 min)
Input: 4 schema files (L122, L123, L124, L125)
Action: POST to /model/foundation for each layer
Output: evidence/phase-a/unit-3-batch1-registration.json (4 API responses)
Verification: All 4 responses HTTP 201 or 200
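The registration POST used in Units 3 and 5 can be sketched as below. Only the method and the /model/foundation path come from the mission text; the payload shape (the raw layer schema as the request body) is an assumption:

```python
# Hedged sketch of batch registration (Units 3 and 5); payload shape assumed.
import json
import urllib.request

def build_registration_request(base_url: str, schema: dict) -> urllib.request.Request:
    return urllib.request.Request(
        f"{base_url}/model/foundation",
        data=json.dumps(schema).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def register_layer(base_url: str, schema: dict) -> int:
    with urllib.request.urlopen(build_registration_request(base_url, schema)) as resp:
        return resp.status  # Unit 3/5 verification expects 200 or 201

if __name__ == "__main__":
    for name in ("discovery_contexts", "discovery_signals",
                 "discovery_patterns", "discovery_insights"):
        status = register_layer("http://localhost:8010", {"layer": name})
        assert status in (200, 201), f"{name}: HTTP {status}"
```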

UNIT 4: Restart API + Smoke Test Batch 1 (15 min)
Input: None
Action: Restart Data Model API (cd 37-data-model; npm run dev), wait 10s, GET /model/{layer_name} × 4
Output: evidence/phase-a/unit-4-batch1-smoke-test.json (4 GET responses)
Verification: All 4 GET responses HTTP 200 with valid JSON body

UNIT 5: Register L126-L129 (30 min)
Input: 4 schema files (L126, L127, L128, L129)
Action: POST to /model/foundation for each layer
Output: evidence/phase-a/unit-5-batch2-registration.json (4 API responses)
Verification: All 4 responses HTTP 201 or 200

UNIT 6: Restart API + Smoke Test Batch 2 (15 min)
Input: None
Action: Restart API, wait 10s, GET /model/{layer_name} × 4
Output: evidence/phase-a/unit-6-batch2-smoke-test.json (4 GET responses)
Verification: All 4 GET responses HTTP 200

UNIT 7: Regression Smoke Test (20 min)
Input: None
Action: GET 10 existing layers (L25-projects, L26-wbs, L27-sprints, L28-stories, L29-tasks, L30-decisions, L31-evidence, L34-quality_gates, L45-verification_records, L46-project_work)
Output: evidence/phase-a/unit-7-regression-smoke-test.json
Verification: All 10 GET responses HTTP 200 (no regressions)
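The GET smoke tests in Units 4, 6, and 7 all follow the same shape, sketched here. The /model/{layer_name} path comes from the mission; treating any 200 response with a valid JSON body as a pass is an assumption:

```python
# Sketch of the smoke-test loop: GET each layer endpoint, record PASS/FAIL.
import json
import urllib.request

def smoke_test(base_url: str, layer_names: list) -> dict:
    results = {}
    for name in layer_names:
        try:
            with urllib.request.urlopen(f"{base_url}/model/{name}", timeout=10) as resp:
                json.load(resp)  # raises if the body is not valid JSON
                results[name] = (resp.status == 200)
        except Exception:
            results[name] = False
    return results
```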

UNIT 8: Update Bootstrap (45 min)
Input: c:\eva-foundry\37-data-model\src\routes\bootstrap.py
Action:
1. Update DOMAIN_COUNT = 13 (was 12)
2. Update LAYER_COUNT = 123 (was 115)
3. Add "discovery_contexts" to start_here list for Domain 13
Output: Git diff saved to evidence/phase-a/unit-8-bootstrap-diff.txt
Verification: Git diff shows 3 changes (domain count, layer count, start_here)
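The three bootstrap edits amount to something like the following. Only the constant names, the new values, and the start_here addition are stated in the mission; the surrounding module structure is assumed:

```python
# Assumed shape of the Unit 8 edits in src/routes/bootstrap.py.
DOMAIN_COUNT = 13   # was 12
LAYER_COUNT = 123   # was 115

START_HERE = [
    # ... existing entries for Domains 1-12 ...
    "discovery_contexts",  # new Domain 13 entry point
]
```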

UNIT 9: Restart + Validate Bootstrap (15 min)
Input: Updated bootstrap.py
Action: Restart API, wait 10s, GET /model/agent-guide
Output: evidence/phase-a/unit-9-bootstrap-validation.json
Verification: Response has domains_available=13, layers_available=123

UNIT 10: Generate Evidence Pack (20 min)
Input: All evidence files from Units 1-9
Action: Create manifest JSON with all file paths, timestamps, exit codes
Output: evidence/phase-a/phase-a-registration-20260313.json
Verification: Manifest lists 12 evidence files (Units 1-9 outputs + manifest)
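A Unit 10 manifest generator could be sketched as below. The manifest field names are assumptions; the mission only requires file paths, timestamps, and exit codes:

```python
# Sketch of the Unit 10 evidence manifest; field names are assumed.
import json
from datetime import datetime, timezone
from pathlib import Path

def build_manifest(evidence_dir: Path, exit_codes: dict) -> dict:
    files = []
    for path in sorted(p for p in evidence_dir.rglob("*") if p.is_file()):
        stamp = datetime.fromtimestamp(path.stat().st_mtime, timezone.utc)
        files.append({"path": str(path), "modified": stamp.isoformat()})
    return {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "exit_codes": exit_codes,
        "files": files,
    }

def write_manifest(evidence_dir: Path, exit_codes: dict) -> Path:
    out = evidence_dir / "phase-a-registration-20260313.json"
    out.write_text(json.dumps(build_manifest(evidence_dir, exit_codes), indent=2))
    return out
```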

Quality Gates (MUST PASS BEFORE PR):

  1. Code Syntax: bootstrap.py has no Python syntax errors (run: python -m py_compile)
  2. Tests: All 18 smoke tests pass (8 new + 10 existing layers HTTP 200)
  3. Documentation: All evidence files have no placeholder text (TODO, FIXME)
  4. Audit Trail: Evidence pack complete (12 files + session log)
  5. Security: No secrets in evidence files, no external API calls
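Gate 3 (no placeholder text) is mechanical enough to sketch. The marker list comes from the gate description; the scan logic itself is an assumption:

```python
# Sketch of the Gate 3 placeholder scan over the evidence directory.
from pathlib import Path

MARKERS = ("TODO", "FIXME")

def find_placeholders(evidence_dir: Path) -> list:
    hits = []
    for path in (p for p in evidence_dir.rglob("*") if p.is_file()):
        text = path.read_text(encoding="utf-8", errors="ignore")
        hits += [f"{path.name}: contains {m}" for m in MARKERS if m in text]
    return hits  # empty list means Gate 3 passes
```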

Success = All 10 units complete + All 5 quality gates PASS

Create feature branch: feat/phase-a-new-apis
Create PR titled: "feat(data-model): register 8 discovery layers (L122-L129) for D³PDCA"
PR body: Link to MISSION-PHASE-A-NEW-APIS.md, attach evidence pack


---

## Quality Gates (Validate Before Merge)

### Gate 1: Code Syntax ✅/❌
```powershell
# Validate bootstrap.py has no syntax errors
cd c:\eva-foundry\37-data-model\src\routes
python -m py_compile bootstrap.py
# Exit code 0 = PASS
```

### Gate 2: Tests & Validation ✅/❌
```powershell
# Validate all 18 smoke tests pass
$evidence = Get-Content "c:\eva-foundry\37-data-model\evidence\phase-a\unit-9-bootstrap-validation.json" | ConvertFrom-Json
$evidence.layers_available -eq 123  # Must be TRUE
$evidence.domains_available -eq 13  # Must be TRUE

# Validate no regressions
$regression = Get-Content "c:\eva-foundry\37-data-model\evidence\phase-a\unit-7-regression-smoke-test.json" | ConvertFrom-Json
$regression.tests_passed -eq 10  # Must be TRUE
```

### Gate 3...

Created from VS Code.



@MarcoPolo483 MarcoPolo483 marked this pull request as ready for review March 13, 2026 14:22
Copilot AI review requested due to automatic review settings March 13, 2026 14:22

Copilot AI left a comment


Copilot wasn't able to review any files in this pull request.



Copilot AI and others added 2 commits March 13, 2026 14:24
Co-authored-by: MarcoPolo483 <132707769+MarcoPolo483@users.noreply.github.com>
Co-authored-by: MarcoPolo483 <132707769+MarcoPolo483@users.noreply.github.com>
Copilot AI changed the title [WIP] Add 8 new layers L122-L129 to Project 37 Data Model API feat(data-model): register 8 discovery layers (L122-L129) for Domain 13 Mar 13, 2026
Copilot AI requested a review from MarcoPolo483 March 13, 2026 14:29
@github-actions

❌ Quality Gate Failed

The following checks failed:

- Flake8: Found F/E errors (undefined names, syntax issues)

### Session 41 Reminder

These checks prevent bugs like lowercase true (JavaScript) in Python dicts.
Fix all E/F errors before merging.

Download reports from Artifacts for details.

2 similar comments

