-
Notifications
You must be signed in to change notification settings - Fork 0
Open
Labels
enhancementNew feature or requestNew feature or request
Description
Source
ChatGPT security review feedback
Problem
FIM prompt construction concatenates user-controlled "instruction" text directly into the prompt body. This allows instruction injection — a malicious instruction containing FIM tokens, delimiter tokens, or "ignore previous" patterns could alter the prompt structure.
Fix
- Don't concatenate instruction inside the FIM prompt body
- Keep FIM strictly prefix/suffix
- Put instruction in a separate system/developer message or sanitize it as metadata
- Add test case: instruction includes FIM tokens / delimiter tokens / "ignore previous" → ensure it cannot change prompt structure
Relevant Code
src/tools/fim.rs
Priority
P0 — security vector
Labels
security, P0
Reactions are currently unavailable
Metadata
Metadata
Assignees
Labels
enhancementNew feature or requestNew feature or request