P0: Close FIM instruction injection vector #62

@galic1987

Description

Source

ChatGPT security review feedback

Problem

FIM prompt construction concatenates user-controlled "instruction" text directly into the prompt body. This permits instruction injection: a malicious instruction containing FIM tokens, delimiter tokens, or "ignore previous" patterns can alter the prompt structure.

Fix

  • Don't concatenate instruction inside the FIM prompt body
  • Keep FIM strictly prefix/suffix
  • Put instruction in a separate system/developer message or sanitize it as metadata
  • Add test case: instruction includes FIM tokens / delimiter tokens / "ignore previous" → ensure it cannot change prompt structure

Relevant Code

  • src/tools/fim.rs

Priority

P0 — security vector

Labels

security, P0
