Universal semantic reset artifacts for language model context management
[](https://github.com/Hyperpolymath/llm-antidote) [](http://creativecommons.org/publicdomain/zero/1.0/) [](./VERSION) [](./MAINTAINERS.md)
llm-antidote is a collection of tools and artifacts designed to manage, reset, and manipulate LLM context across multiple platforms. The project explores the boundaries of language model behavior through minimal, symbolic interventions.
A Guile Scheme artifact that purges all prior context, associations, and semantic weight from language models.
Supported Models: - Claude (Anthropic) - GPT (OpenAI) - Gemini (Google) - Copilot (Microsoft) - Mistral
Usage:
;; Simply paste the contents of llm-reset into any LLM chat
;; The model will reset its context to a clean state
;; Expected response: "Reset complete." or (define identity 'clean)Start fresh without creating a new chat window:
Previous conversation about Python...
[Paste llm-reset]
"Reset complete."
Now discuss JavaScript with no Python context bleeding throughClear unwanted associations or confused state:
Model is confused about project structure...
[Paste llm-reset]
Re-introduce only the relevant informationResearch how models handle context boundaries:
Establish baseline state with reset
Introduce controlled context
Measure model responsesThis project is built on three core principles:
-
Minimalism: Artifacts should be simple, self-contained, and easy to understand
-
User Agency: Tools empower users to control their LLM interactions
-
Cross-Platform: Solutions work across multiple model providers
No installation required. This is a collection of copy-paste artifacts.
git clone https://github.com/Hyperpolymath/llm-antidote.git
cd llm-antidote
cat llm-reset # View the artifactllm-antidote/
├── LICENSE # CC0 1.0 Universal (Public Domain)
├── README.md # This file
├── CLAUDE.md # AI assistant development guide
├── llm-reset # Universal reset artifact (Guile Scheme)
├── artifacts/ # Additional reset variants
├── tools/ # Verification and diagnostic tools
├── examples/ # Usage demonstrations
└── docs/ # Research and documentationThe reset artifact leverages several LLM behavioral patterns:
-
System-Level Instructions: Formatted to mimic system prompts
-
Symbolic Clarity: Uses Scheme’s minimal syntax for clean semantic signals
-
Explicit Commands: Direct imperatives that models are trained to follow
-
Verification Mechanisms: Built-in confirmation and testing
Language models maintain context through: - Token-level attention patterns - Semantic associations in embedding space - Conversation history in the context window
Reset artifacts work by: - Providing explicit instructions to disregard prior context - Using authoritative formatting (system-level comments) - Employing symbolic representations that signal "start fresh" - Requesting confirmation of the reset state
Note: Effectiveness varies by model architecture and provider implementation.
This project is released under CC0 1.0 Universal (Public Domain). Contributions welcome!
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
Quick Guidelines: - Maintain simplicity and clarity - Test across at least two LLM platforms - Document model-specific behaviors - Follow ethical guidelines (see CLAUDE.md)
This is an active research project. Areas of exploration:
-
Context boundary mechanisms in different architectures
-
Effectiveness metrics for reset artifacts
-
Model-specific optimization strategies
-
Selective context preservation techniques
-
Diagnostic tools for context state inspection
This project is for: ✓ User control and agency over LLM interactions ✓ Research into model behavior and context management ✓ Privacy-conscious LLM usage ✓ Educational purposes
This project is NOT for: ✗ Bypassing safety mechanisms ✗ Deceptive or malicious manipulation ✗ Circumventing content policies ✗ Hiding harmful interactions
Q: Does this actually work?
A: Effectiveness varies by model. Some models follow reset instructions closely, others show residual context effects. See benchmarks in docs/research/.
Q: Is this "jailbreaking"? A: No. This is context management, not safety bypass. Models retain all safety training and guardrails.
Q: Why Guile Scheme? A: Scheme provides symbolic clarity and minimal syntax. The format itself signals "meta-level instruction" to models trained on code.
Q: Can I use this in production? A: These are research artifacts. Use at your own discretion. Production systems should use official API context management features.
Q: Does it work with local models? A: Potentially, but effectiveness depends on training. Models fine-tuned to follow system instructions work best.
CC0 1.0 Universal - Public Domain
This work has been dedicated to the public domain. You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission.
This project explores concepts at the intersection of: - Programming language semantics - Machine learning interpretability - Human-AI interaction design - Context-aware computing
Special thanks to the open-source AI community for inspiration and dialogue.
Status: Active Development Version: 0.1.0-alpha Last Updated: 2025-11-22