
Frequently Asked Questions


What is PromptX?

PromptX is a specification for writing structured programs in natural language that AI models execute directly. The AI is the runtime — no compiler, interpreter, or SDK required.

How is this different from just writing a good prompt?

A good prompt tells the AI what to do. A PromptX program tells the AI what to do, in what order, with what data, under what conditions, with what error handling, and how to manage its own context. It's the difference between asking someone a question and giving them a project plan.
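To make the contrast concrete, here is a minimal sketch of what such a program might look like. Only @config, thinking: visible, @verify, and $variables appear elsewhere in this FAQ; the step labels and @on-error line are hypothetical names used purely for illustration, not normative syntax from the spec.

```
@config
  thinking: visible

Step 1: Collect the three most recent quarterly figures into $figures
@verify: $figures contains exactly three entries
Step 2: Summarize the trend in $figures in under 100 words
@on-error: report which step failed and stop
```

A plain prompt might ask for the same summary in one sentence; the program version pins down the order, the data ($figures), the check between stages, and what to do on failure.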

Does it work on [my preferred AI model]?

If your model can follow multi-step instructions and maintain context, PromptX will work. Frontier models (Claude, GPT-4o, Gemini 2.0) handle complex programs well. Smaller models work great with simpler programs. See Provider Compatibility for details.

Is this a real programming language?

Not in the traditional sense. There's no formal grammar, no compiler, no type system. It's a specification — a structured way to write prompts that any AI can interpret as executable programs. Think of it as a protocol for human-AI communication.

Why not just use LangChain / LMQL / DSPy?

Those are excellent tools that solve different problems. They require Python, external infrastructure, and specific provider integrations. PromptX requires nothing — just the AI and your prompt. They're complementary: you might use LangChain to orchestrate PromptX programs at scale.

Are the results deterministic?

No. AI models are probabilistic. Running the same program twice may produce different (but hopefully equivalent) results. PromptX provides structure and constraints to improve consistency, but does not guarantee determinism. This is a feature, not a bug — it's what makes AI creative and adaptable.

How do I debug a PromptX program?

  1. Set thinking: visible in @config to see intermediate reasoning
  2. Add @verify steps between critical stages
  3. Use explicit $variables so you can track state
  4. Run the program step-by-step (paste one step at a time)
  5. Check the output of each step before proceeding
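Combining the first three tips, a debug-friendly program header might look like the sketch below. The layout is illustrative; only @config, thinking: visible, @verify, and $variables are taken from this FAQ, and the comment style is an assumption.

```
@config
  thinking: visible        # tip 1: show intermediate reasoning

$draft = (empty)           # tip 3: explicit variable so state is trackable

Step 1: Outline the article and store the outline in $draft
@verify: $draft has at least three sections   # tip 2: checkpoint between stages
Step 2: Expand each section of $draft
```

For tips 4 and 5, you would paste Step 1 alone, inspect $draft in the model's reply, and only then paste the @verify line and Step 2.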

Can PromptX programs call external APIs?

Only if the AI model supports tool use. @search leverages web search when available. For custom APIs, you'd need model-specific function calling — which is outside PromptX's provider-agnostic scope. Future versions may address this.
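Where tool use is available, a step might invoke it along these lines. @search is the only directive here taken from this FAQ; the step phrasing and $notes variable are illustrative.

```
Step 1: @search for recent coverage of the topic and store key findings in $notes
Step 2: Summarize $notes as a three-bullet briefing
```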

What's the maximum program size?

Limited by the model's context window. A typical program (10-15 steps) uses 2,000-5,000 tokens for the program itself, leaving plenty of room for processing. For very large programs, use the Checkpoint and Continue pattern.
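As a rough sketch of the idea behind that pattern (the directive name @checkpoint is hypothetical, and the formal protocol is only planned for a later version per the roadmap below):

```
Step 5: @checkpoint — write all $variables and progress so far into $state
(In a fresh conversation) Paste $state, then instruct:
  Resume the program from Step 6 using $state
```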

Can I nest PromptX programs?

Not directly in v0.1. But you can use subroutines (@define/@call) for reusable blocks within a program. Multi-program orchestration is planned for v0.3.
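A subroutine sketch using the @define/@call pair mentioned above; the parameter syntax and body format are illustrative assumptions, not quoted from the spec.

```
@define summarize($text):
  Condense $text into three bullet points, keeping all numbers exact.

@call summarize($chapter1)
@call summarize($chapter2)
```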

Is PromptX copyrighted?

The specification is released under the MIT License. Use it, extend it, build on it, commercially or otherwise.

How do I contribute?

See CONTRIBUTING.md. The most valuable contributions right now are cross-provider test results and new sample programs.

What's the roadmap?

| Version | Focus |
| --- | --- |
| v0.1 (current) | Core spec — sequential, branching, loops, variables, error handling |
| v0.2 | Subroutine library — community-contributed reusable blocks |
| v0.3 | Multi-agent patterns — programs spanning multiple AI instances |
| v0.4 | Context management — formal checkpoint/continuation protocol |
| v0.5 | Testing framework — @test blocks for program validation |
| v1.0 | Stable specification with community consensus |

Why the .promptx file extension?

To make programs identifiable and to enable future tooling (syntax highlighting, linters, VS Code extensions). The extension is a convention, not a requirement — the contents are plain text.

Can an AI write PromptX programs?

Absolutely. Ask any AI: "Write a PromptX program that [does X]." The structured format makes it easy for AI to generate valid programs. This is by design — PromptX is AI-native.

Why is there a <!-- PROMPTX RUNTIME --> comment at the top of the sample files?

That's the execution preamble — a short instruction block that tells the AI to execute the program rather than explain it. Without it, some AI models (especially when encountering unfamiliar syntax) default to describing what the program does instead of running it. The preamble is embedded directly in program files so they work as standalone copy-paste experiences.
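In practice, the top of a sample file might look like this. The marker comment is quoted from this FAQ; the instruction text after it is paraphrased for illustration, not quoted from the repo.

```
<!-- PROMPTX RUNTIME -->
<!-- You are the runtime for this program. Execute the steps below
     in order. Do not explain or summarize them. -->

@config
  thinking: visible

Step 1: ...
```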

What's the difference between samples/ and ready-to-run/?

samples/ contains annotated teaching examples — programs designed to show you how PromptX syntax works, organized by difficulty. They have comments explaining the features being demonstrated.

ready-to-run/ contains pre-configured programs for real tasks — no PromptX knowledge needed. They have hardcoded defaults so you get useful output immediately, and clear instructions on how to customize them for your specific situation.

Start with ready-to-run/ if you want results. Start with samples/ if you want to learn.