Problem
`services/neural_mesh/query_decomposer.py` (PR #2147) has three security/reliability gaps:
1. Prompt injection (HIGH)
The user query is interpolated directly into the LLM decomposition prompt:
```python
f"Question: {query}\n\n"
```
A crafted query can override instructions and inject arbitrary search queries into the mesh retriever, potentially surfacing sensitive documents.
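A minimal sanitization sketch along the lines of the recommended fix (the `sanitize_query` helper name and the 500-character cap are illustrative, not from the PR):

```python
import re

MAX_QUERY_LEN = 500  # illustrative cap; tune for the deployment

def sanitize_query(query: str) -> str:
    """Cap length and strip control characters before the query reaches the prompt."""
    # Control characters (including newlines) could forge fake "Question:" sections
    # or smuggle new instructions into the decomposition prompt.
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", query)
    return cleaned[:MAX_QUERY_LEN].strip()
```

Passing the sanitized query as a separate user message, rather than interpolating it into the instruction string, further narrows the injection surface; the exact mechanism depends on the LLM client in use.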
2. No LLM error handling (HIGH)
`await self.llm(prompt)` has no try/except. Timeouts, rate limits, and connection errors propagate unhandled, with no fallback to a single-step plan. Compare with `_find_anchors` in `neural_mesh_retriever.py`, which wraps its external calls.
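A hedged sketch of the fallback behavior, written as a standalone function rather than the PR's method; the prompt wording, 30-second timeout, and line-based parsing are assumptions:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def decompose(llm, query: str, timeout: float = 30.0) -> list[str]:
    """Ask the LLM for sub-queries; fall back to a single-step plan on any failure."""
    prompt = f"Decompose into sub-queries, one per line:\n\n{query}"
    try:
        response = await asyncio.wait_for(llm(prompt), timeout=timeout)
        steps = [line.strip() for line in response.splitlines() if line.strip()]
        return steps or [query]  # empty/garbled output also degrades to single-step
    except Exception as exc:  # timeouts, rate limits, connection errors
        logger.warning("decomposition failed, using single-step fallback: %s", exc)
        return [query]
```

The key property is that every failure mode degrades to `[query]`, so downstream retrieval always receives at least one step.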
3. No per-step error handling (MEDIUM)
`execute()` calls `mesh_retriever.retrieve()` in a loop with no error handling. One transient failure kills the entire decomposition chain.
Discovered During
Code review of PR #2147 (QueryDecomposer)
Recommended Fixes
- Prompt injection: add an input length limit (500 characters), sanitize control characters, and use a structured message format
- LLM error handling: wrap the call in try/except and return a single-step fallback on failure
- Per-step handling: wrap each step's retrieval in try/except and continue with empty evidence on failure
Impact
Severity: high — security + reliability in production