Bug: work_server.py priority limit check overcounts — guesses priority from title instead of reading section header, and includes stale week_priorities tasks #41
Description
Summary
Two bugs in core/mcp/work_server.py cause the P2 priority limit to report as exceeded when the actual open task count is well within limits. Discovered during normal task creation — the MCP rejected a new P2 task with `Priority limit exceeded for P2` (reporting 13/10) when a direct read of `03-Tasks/Tasks.md` showed only ~2 genuinely open P2 tasks.
Bug 1 — `parse_tasks_file()` guesses priority from task title instead of reading the section header
Location: `parse_tasks_file()` (~line 2079)
What happens: The function correctly tracks `current_section` (e.g. `P2 - Normal (max 10)`) as it parses the file, but then ignores it and calls `guess_priority(clean_title)` to assign priority. `guess_priority()` defaults to `P2` for any task title that doesn't contain magic keywords like `urgent`, `important`, or `someday`. This means tasks in the P1 section get labelled P2 if their title doesn't match a keyword — inflating the P2 count and deflating P1.
Example: A task in the `## P1 - Important (max 5)` section titled `"Meeting with Nina Carpanini"` gets assigned priority P2 because the title contains no P1 keywords.
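The keyword fallback described above behaves roughly like this sketch. This is not the actual `work_server.py` implementation, and the keyword-to-priority mapping is assumed from the description:

```python
# Rough sketch of the keyword-based fallback described in this report.
# NOT the real guess_priority(); the keyword mapping here is assumed.
def guess_priority(title: str) -> str:
    t = title.lower()
    if 'urgent' in t:
        return 'P0'
    if 'important' in t:
        return 'P1'
    if 'someday' in t:
        return 'P3'
    return 'P2'  # default for any title without a magic keyword

print(guess_priority("Meeting with Nina Carpanini"))  # P2, regardless of section
```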
Fix applied locally:
Added `_priority_from_section()` helper and set priority from section header first, falling back to `guess_priority()` only when the section doesn't map to a known priority:
```python
def _priority_from_section(section: str) -> Optional[str]:
    if not section:
        return None
    s = section.upper()
    if s.startswith('P0') or 'URGENT' in s:
        return 'P0'
    if s.startswith('P1') or 'IMPORTANT' in s:
        return 'P1'
    if s.startswith('P2') or 'NORMAL' in s:
        return 'P2'
    if s.startswith('P3') or 'BACKLOG' in s:
        return 'P3'
    return None

# In the parsing loop:
priority = current_section_priority or guess_priority(clean_title)
```
Bug 2 — `create_task` priority limit check includes tasks from `week_priorities` source
Location: `create_task` handler (~line 3541), `get_all_tasks()` (~line 2084)
What happens: `get_all_tasks()` reads from both `03-Tasks/Tasks.md` and `Inbox/Week Priorities.md`. The limit check in `create_task` uses all tasks from both sources. The week priorities file contains planning artefacts from prior weeks that are not properly marked `[x]` — so they appear as open tasks and count against the limit even though they are stale.
Impact: In a real vault with several weeks of accumulated week priorities, this can push the reported P2 count well above the actual backlog size.
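Illustrative numbers only (task dict fields assumed from the description): two genuine open P2 tasks plus eleven stale week-priority entries are enough to trip a max-10 limit:

```python
from collections import Counter

# Hypothetical task dicts shaped like get_all_tasks() output per this report.
tasks = (
    [{'priority': 'P2', 'completed': False, 'source': 'tasks'}] * 2
    + [{'priority': 'P2', 'completed': False, 'source': 'week_priorities'}] * 11
)
open_counts = Counter(t['priority'] for t in tasks if not t['completed'])
print(open_counts['P2'])  # 13, over the max-10 limit despite only 2 real tasks
```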
Fix applied locally:
Filter the limit check to only count tasks from the canonical backlog (`source == 'tasks'`):
```python
# Check priority limits — canonical task backlog only (not week priority planning files)
active_tasks = [
    t for t in existing_tasks
    if not t.get('completed') and t.get('source') == 'tasks'
]
priority_counts = Counter(t.get('priority', 'P2') for t in active_tasks)
```
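With hypothetical mixed-source data (field names as in the description), the filtered check counts only the canonical backlog:

```python
from collections import Counter

# Hypothetical mixed task list; 'source' and 'completed' fields are assumed
# to match what get_all_tasks() returns.
existing_tasks = [
    {'priority': 'P2', 'completed': False, 'source': 'tasks'},
    {'priority': 'P2', 'completed': False, 'source': 'tasks'},
    {'priority': 'P2', 'completed': False, 'source': 'week_priorities'},
    {'priority': 'P2', 'completed': True, 'source': 'tasks'},
]
active_tasks = [
    t for t in existing_tasks
    if not t.get('completed') and t.get('source') == 'tasks'
]
priority_counts = Counter(t.get('priority', 'P2') for t in active_tasks)
print(priority_counts['P2'])  # 2: stale week priorities no longer count
```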
Impact
- Priority limit enforcement is unreliable — blocks valid task creation and gives false signals about backlog health
- `list_tasks` returns incorrect priorities for tasks in named sections, making pillar/priority reporting misleading
- `get_system_status` priority distribution is similarly skewed
Environment
- Dex running locally, `work_server.py` via MCP
- Discovered on vault with ~6 weeks of accumulated week priorities in `Inbox/Week Priorities.md`
- Both fixes applied and verified locally — happy to submit a PR if useful