These continue in parallel where they protect trust, performance, or operator posture, but they should not outrun Horizon A through C product legibility work:
docs/STATUS.md (7 additions, 6 deletions)
```diff
@@ -24,7 +24,7 @@ Rebranding thesis (2026-02-23):
 Current constraints are mostly hardening and consistency:
 - security and identity behavior is converging but still not uniform across all controller families
 - some UX/operator surfaces are functional but not yet keyboard-first or discoverability-first
-- LLM flow now supports config-gated `OpenAI` and `Gemini` providers with deterministic `Mock` fallback for safe local/test posture; degraded provider responses are now structurally distinct (`messageType: "degraded"` + `degradedReason`) and the health endpoint supports opt-in probe verification (`?probe=true`); chat-to-proposal pipeline improvements delivered: `LlmIntentClassifier` now uses compiled regex patterns with word-distance matching, stemming/plurals, broader verb coverage, and negative context filtering for negations and other-tool questions (`#571`); parse failures now return structured hint payloads with closest-match suggestions and a frontend hint card with "try this instead" pre-fill (`#572`); dedicated classifier and chat-to-proposal integration test coverage added (`#577`); LLM-assisted instruction extraction now delivered (`#573`): OpenAI and Gemini providers request structured JSON output with a system prompt describing supported instruction patterns, parse the response into `LlmCompletionResult.Instructions`, and fall back to the static `LlmIntentClassifier` when structured parsing fails; `ChatService` iterates LLM-extracted instructions (supporting multiple proposals from a single message) and falls back to raw user message parsing when no instructions are extracted; Mock provider unchanged for deterministic test behavior; multi-instruction batch parsing now delivered (`#574`): `ParseBatchInstructionAsync` splits multiple natural-language instructions into individual planner calls, `ChatService` routes multi-instruction messages through batch parsing to generate multiple proposals from a single chat message; board-context LLM prompting now delivered (`#575`): `BoardContextBuilder` constructs bounded board context (columns, card titles, labels) and appends it to system prompts across OpenAI and Gemini providers via `LlmSystemPromptBuilder`; **remaining gap**: conversational refinement (`#576`) remains undelivered; analysis at `docs/analysis/2026-03-29_chat_nlp_proposal_gap.md`
+- LLM flow now supports config-gated `OpenAI` and `Gemini` providers with deterministic `Mock` fallback for safe local/test posture; degraded provider responses are now structurally distinct (`messageType: "degraded"` + `degradedReason`) and the health endpoint supports opt-in probe verification (`?probe=true`); chat-to-proposal pipeline improvements delivered: `LlmIntentClassifier` now uses compiled regex patterns with word-distance matching, stemming/plurals, broader verb coverage, and negative context filtering for negations and other-tool questions (`#571`); parse failures now return structured hint payloads with closest-match suggestions and a frontend hint card with "try this instead" pre-fill (`#572`); dedicated classifier and chat-to-proposal integration test coverage added (`#577`); LLM-assisted instruction extraction now delivered (`#573`): OpenAI and Gemini providers request structured JSON output with a system prompt describing supported instruction patterns, parse the response into `LlmCompletionResult.Instructions`, and fall back to the static `LlmIntentClassifier` when structured parsing fails; `ChatService` iterates LLM-extracted instructions (supporting multiple proposals from a single message) and falls back to raw user message parsing when no instructions are extracted; Mock provider unchanged for deterministic test behavior; multi-instruction batch parsing now delivered (`#574`): `ParseBatchInstructionAsync` splits multiple natural-language instructions into individual planner calls, `ChatService` routes multi-instruction messages through batch parsing to generate multiple proposals from a single chat message; board-context LLM prompting now delivered (`#575`, expanded in `#617`): `BoardContextBuilder` constructs bounded board context (columns, card IDs, titles, labels) grouped per column and appends it to system prompts across OpenAI and Gemini providers via `LlmSystemPromptBuilder`; card IDs are included as first-8 hex chars so the LLM can generate `move card <id>` instructions; context budget increased to 4000 chars with single-query card fetch; **remaining gap**: conversational refinement (`#576`) remains undelivered; analysis at `docs/analysis/2026-03-29_chat_nlp_proposal_gap.md`
 - managed-key shared-token abuse-control strategy is now explicitly seeded in `#235` to `#240` before broad external exposure
 - testing-harness guardrail expansion from `#254` to `#260` is shipped; remaining work is normal follow-up hardening rather than the original wave
 - MVP dogfooding flow now supports canonical checklist bootstrap in chat (proposal-first, board-scoped); broader template coverage remains future work
```
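The added bullet above describes a layered parsing chain (structured LLM extraction first, then the static `LlmIntentClassifier`, finally raw-message parsing) plus a structurally distinct degraded response (`messageType: "degraded"` with a `degradedReason`). A minimal TypeScript sketch of that shape and fallback order, where only `messageType`/`degradedReason` and the fallback ordering come from the diff and every other name and pattern is a hypothetical stand-in:

```typescript
// Hypothetical shapes: only messageType/degradedReason and the fallback
// order are taken from the STATUS.md bullet; the rest is illustrative.
type Instruction = { verb: string; target: string };

type CompletionResult =
  | { messageType: "ok"; instructions: Instruction[] }
  | { messageType: "degraded"; degradedReason: string };

// Stand-in for the static LlmIntentClassifier: a small compiled-pattern table.
const staticPatterns: Array<[RegExp, string]> = [
  [/\b(add|create|new)\b.*\bcard\b/i, "create-card"],
  [/\b(move|put)\b.*\bcard\b/i, "move-card"],
];

function classifyStatically(message: string): Instruction[] {
  const out: Instruction[] = [];
  for (const [pattern, verb] of staticPatterns) {
    if (pattern.test(message)) out.push({ verb, target: message });
  }
  return out;
}

// Layered chain: prefer structured LLM output; when it is degraded or
// empty, fall back to the static classifier over the raw user message.
function extractInstructions(
  llmResult: CompletionResult,
  rawMessage: string,
): Instruction[] {
  if (llmResult.messageType === "ok" && llmResult.instructions.length > 0) {
    return llmResult.instructions; // structured extraction succeeded
  }
  return classifyStatically(rawMessage); // degraded or empty: static fallback
}
```

The discriminated union mirrors the "structurally distinct" degraded contract: consumers branch on `messageType` rather than sniffing free-form error text.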
```diff
@@ -39,6 +39,7 @@ Current constraints are mostly hardening and consistency:
 - fresh-registration manual test (2026-03-29) surfaced 2 P0 blockers and 16 additional bugs/observations spanning data isolation, board stability, dark-mode theming, chat utility, and UX polish; full findings at `docs/analysis/2026-03-29_manual_testing_consolidated_findings.md`; P0 blockers: queue data not scoped to the authenticated user (`#508`), board auto-switching on multi-board accounts (`#509`); P1 issues tracked in `#510` through `#515`; P2/P3 in `#516` through `#524`; **no external user onboarding should occur until `#508` is resolved**; two bugs from this session now resolved: activity audit trail not recording board mutations (`#521`, fixed — audit logging wired for all board/card/column/label mutations with `SafeLogAsync` resilience wrapper), archive board 30-second freeze (`#519`, fixed — navigation before reactive state teardown prevents cascading re-renders)
 - platform expansion strategy (2026-03-29) now covers four strategic pillars: market adoption (`#544`), packaging/distribution (`#532`), cloud/collaboration (`#537`), and mobile platform (`#540`); master strategy tracker at `#531`; strategy documents at `docs/strategy/`; release versioning plan: `v0.1.0` (self-contained exe) → `v0.2.0` (hosted cloud) → `v0.3.0` (PWA/mobile) → `v0.4.0` (collaboration) → `v0.5.0` (platform maturity) → `v1.0.0` (GA)
 - hands-on UX testing (2026-03-31) surfaced 8 areas of feedback spanning review, inbox, today, home, board, notifications, and LLM chat; 2 P1 issues (capture triage fails on natural-language text `#614`, chat response truncation showing raw JSON `#616`), 11 P2 usability/visual-coherence issues (`#611`–`#613`, `#615`, `#617`, `#620`, `#622`, `#624`–`#627`), and 4 P3 polish/strategic items (`#618`–`#619`, `#621`, `#623`); tracker at `#628`; full analysis at `docs/analysis/2026-03-31_manual_testing_ux_feedback.md`; cross-cutting themes: progressive disclosure needed across review/today/notifications, semantic color vocabulary for status tags, capture pipeline needs LLM-assisted extraction, and chat needs tool-calling/function-calling architecture; LLM tool-calling spike (`#618`) and MCP server spike (`#619`) seeded for strategic planning
+- UX feedback wave 1 delivered (2026-03-31): 6 of 17 issues from `#628` resolved — sidebar footer pinned (`#623`), card drag layout shift eliminated (`#621`), starter-pack modal migrated to design tokens (`#612`), capture triage error messages surfaced with retry hint (`#615`), board context expanded with card IDs for LLM chat (`#617`, N+1 query fixed), review proposal cards now use collapsible detail sections with risk color-coding and keyboard-accessible links dropdown (`#626`); remaining open: 2 P1 (`#614`, `#616`), 5 P2 (`#613`, `#620`, `#624`, `#625`, `#627`), 4 P3 (`#618`, `#619`)
 
 Target experience metrics for the capture direction:
 - capture action to saved artifact should feel under 10 seconds in normal use
```
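The board-context expansion referenced in `#617` (bounded context grouped per column, first-8 hex card IDs so the model can emit `move card <id>`, and a 4000-character budget) can be sketched as follows. The grouping, short-ID scheme, and budget figure come from the diff; the data shapes, function name, and line formatting are assumptions, not the real `BoardContextBuilder` implementation:

```typescript
// Sketch of bounded board-context construction for LLM system prompts.
// Per-column grouping, first-8-hex card IDs, and the 4000-char budget are
// from the STATUS.md bullets; everything else here is hypothetical.
interface Card { id: string; title: string; labels: string[] }
interface Column { name: string; cards: Card[] }

const CONTEXT_BUDGET = 4000; // character budget stated in the diff

function buildBoardContext(columns: Column[]): string {
  const lines: string[] = [];
  for (const col of columns) {
    lines.push(`Column: ${col.name}`);
    for (const card of col.cards) {
      // First 8 hex chars give the model a stable, short handle it can
      // echo back in `move card <id>` instructions.
      const shortId = card.id.replace(/-/g, "").slice(0, 8);
      const labels = card.labels.length ? ` [${card.labels.join(", ")}]` : "";
      lines.push(`  ${shortId} ${card.title}${labels}`);
    }
  }
  // Enforce the bounded-context guarantee by truncating at the budget.
  return lines.join("\n").slice(0, CONTEXT_BUDGET);
}
```

Fetching all cards for the board in one query before grouping (rather than one query per column) is what the "single-query card fetch" / N+1 fix in `#617` refers to.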
```diff
@@ -710,12 +711,12 @@ Command:
 - `dotnet test backend/Taskdeck.sln -c Release -m:1`
 
 Result:
-- Domain: 306/306 passing
-- Application: 943/943 passing
-- API integration: 407/407 passing
+- Domain: 357/357 passing
+- Application: 1167/1167 passing
+- API integration: 413/413 passing
 - CLI contract: 4/4 passing
 - Architecture boundaries: 8/8 passing
-- Backend Total: 1668/1668 passing
+- Backend Total: 1949/1949 passing
 
 ### Frontend Unit + Build (Executed)
 
```
```diff
@@ -726,7 +727,7 @@ Commands:
 - `cd frontend/taskdeck-web && npm run build`
 
 Result:
-- Frontend unit: 1174/1174 passing (123 test files)
+- Frontend unit: 1444/1444 passing (132 test files)
```