Skip to content

Security: QueryDecomposer prompt injection + missing error handling #2169

@mrveiss

Description

@mrveiss

Problem

`services/neural_mesh/query_decomposer.py` (PR #2147) has two security/reliability gaps:

1. Prompt injection (HIGH)

User query is interpolated directly into the LLM decomposition prompt:
```python
f"Question: {query}\n\n"
```
A crafted query can override instructions and inject arbitrary search queries into the mesh retriever, potentially surfacing sensitive documents.

2. No LLM error handling (HIGH)

`await self.llm(prompt)` has no try/except. Timeouts, rate limits, or connection errors propagate unhandled — no fallback to single-step plan. Compare with `_find_anchors` in `neural_mesh_retriever.py` which wraps external calls.

3. No per-step error handling (MEDIUM)

`execute()` calls `mesh_retriever.retrieve()` in a loop with no error handling. One transient failure kills the entire decomposition chain.

Discovered During

Code review of PR #2147 (QueryDecomposer)

Recommended Fixes

  1. Prompt injection: Add input length limit (500 chars), sanitize control chars, use structured message format
  2. LLM error handling: Wrap in try/except, return single-step fallback
  3. Per-step handling: Wrap individual step retrieval in try/except, continue with empty evidence on failure

Impact

Severity: high — security + reliability in production

Metadata

Metadata

Assignees

No one assigned

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions