Official System Prompts for TyloAI
This repository contains the official system prompts used by TyloAI to power our AI products. We firmly believe that openness and transparency are fundamental to building Responsible AI.
We are open-sourcing these prompts with the following objectives:
- Enhance Transparency: Provide the public with insight into the guiding principles of our AI systems.
- Foster Community Collaboration: Invite developers and AI enthusiasts worldwide to collaborate with us on improving AI safety and effectiveness.
- Establish Industry Benchmarks: Share our best practices in AI safety, ethics, and performance.
- Improve User Efficiency: Enable users to generate higher-quality AI outputs through well-crafted, vetted prompts.
Our system prompts adhere to the following core principles:
- Safety & Ethics: Proactively prevent harmful, discriminatory, or unethical outputs.
- Accuracy & Reliability: Prioritize factual correctness and minimize AI hallucinations.
- Clarity & Efficiency: Ensure precise task comprehension and delivery of valuable responses.
- Role-Based Guidance: Define specific personas and responsibilities for different use cases.
- Actionable Structure: Provide clear processes, output formats, and quality checkpoints.
```
TyloAI-System-Prompts/
├── .github/
│   └── CODE_OF_CONDUCT.md           # Community guidelines
├── prompts/
│   ├── general/
│   │   └── helpful_assistant.md     # Base assistant prompt
│   ├── domain-specific/
│   │   ├── customer_service/
│   │   │   ├── empathetic_agent.md
│   │   │   └── billing_specialist.md
│   │   └── tutoring_mentor.md
│   └── task-specific/
│       ├── code_generation/
│       │   ├── python_expert.md
│       │   └── web_build.md
│       ├── blog_writer.md
│       └── email_campaigner.md
├── CONTRIBUTING.md                  # Contribution guidelines
├── LICENSE                          # Apache 2.0
├── README.md                        # This file
└── RESPONSIBLE_AI.md                # AI ethics framework
```
- Browse the prompts in the `/prompts` directory to find what suits your needs
- Copy the prompt from the relevant `.md` file
- Customize variables (marked with `[brackets]`) for your specific context
- Use with any LLM: paste as a system message when initializing conversations with your preferred AI model
```javascript
// Example: Using TyloAI prompts with your LLM API
const fs = require('fs');

// Load the system prompt
const systemPrompt = fs.readFileSync(
  'prompts/task-specific/code_generation/python_expert.md',
  'utf8'
);

// Initialize with user context
const userContext = {
  name: 'Alex',
  level: 'intermediate',
  experience: 'backend development'
};

// Customize the prompt with user data
const customizedPrompt = systemPrompt.replace(/\[USER_LEVEL\]/g, userContext.level);

// Use with your LLM API (assumes an async context and an `llmAPI` client object)
const response = await llmAPI.chat({
  systemMessage: customizedPrompt,
  userMessage: 'Help me write a Python function for data validation'
});
```

Role-based prompts designed for specific professional contexts:
- Customer Service: Empathetic agents and billing specialists
- Education: Tutoring mentors and learning facilitators
- More coming soon: Healthcare, Finance, Legal, and HR domains
Goal-oriented prompts for particular tasks:
- Code Generation: Python expert, Web development specialist
- Content Creation: Blog writers, Email marketers
- Analysis: Data analysts, Research assistants
- More coming soon: Video scripts, Product specifications, Documentation
These prompts are designed to work with any LLM API that supports system prompt configuration, including but not limited to:
- TyloAI models
- OpenAI API (GPT-4, GPT-3.5)
- Anthropic API (Claude)
- Open-source models (Llama, Mistral, etc.)
- Any custom AI infrastructure
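As a sketch of what model-agnostic means in practice, the same prompt text can be mapped into each provider's request shape. The payload shapes below follow the public OpenAI Chat Completions and Anthropic Messages APIs (OpenAI takes the system prompt as a `system` role message; Anthropic takes it as a top-level `system` field); the model names are placeholders you should substitute:

```javascript
// Build a provider-specific request payload from one shared system prompt.
function buildRequest(provider, systemPrompt, userMessage) {
  switch (provider) {
    case 'openai':
      return {
        model: 'gpt-4', // placeholder model name
        messages: [
          { role: 'system', content: systemPrompt },
          { role: 'user', content: userMessage },
        ],
      };
    case 'anthropic':
      return {
        model: 'claude-3-5-sonnet-latest', // placeholder model name
        max_tokens: 1024,
        system: systemPrompt, // system prompt is a top-level field here
        messages: [{ role: 'user', content: userMessage }],
      };
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```

Keeping the prompt text separate from the request-building step is what makes switching providers a one-line change.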
```javascript
// Declare systemMessage before using it, so it is in scope when passed to your API call.
// `userData` and `currentLanguage` are assumed to be defined elsewhere in your application.
// Note: `//` comments inside a template literal would become part of the string,
// so the explanations live out here: name falls back to 'User', level defaults to 1,
// membership is resolved with nested ternaries, and preferences are included only if set.
let systemMessage = `This is your system message. You can set some variables, for example:

User Context:
- Name: ${userData.nickname || 'User'}
- Level: ${userData.level || 1}
- Language: ${currentLanguage}
- Membership: ${userData.isPro ? 'Pro' : userData.isStudent ? 'Student' : 'Free'}
${userData.preferences ? `- Preferences: ${userData.preferences}` : ''}
To enhance the experience, you can also instruct the AI to:
Respond in ${currentLanguage} unless specifically asked otherwise.`;

// Usage note: this systemMessage template dynamically injects user data
// and provides contextual guidance to the AI assistant.
```

Each prompt includes customizable sections:
- `[COMPANY_NAME]`: Your organization name
- `[INDUSTRY]`: Specific industry or domain
- `[CONSTRAINT]`: Custom restrictions or requirements
- `[OUTPUT_FORMAT]`: Expected result structure
- `[USER_CONTEXT]`: Demographic or role information
Replace these brackets with your specific values before using.
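One way to fill several placeholders at once is a small substitution helper. The helper below is illustrative, not part of the repository; it matches the `[UPPER_CASE]` placeholder convention above and deliberately leaves unmatched placeholders intact so they are easy to spot:

```javascript
// Replace every [PLACEHOLDER] in a prompt with values from a lookup table.
// Placeholders without a supplied value are left as-is.
function fillPrompt(template, values) {
  return template.replace(/\[([A-Z_]+)\]/g, (match, key) =>
    key in values ? values[key] : match
  );
}

const prompt = 'You work for [COMPANY_NAME] in the [INDUSTRY] sector. [CONSTRAINT]';
const filled = fillPrompt(prompt, {
  COMPANY_NAME: 'Acme Corp',
  INDUSTRY: 'retail',
});
// [CONSTRAINT] stays visible in the output because no value was supplied for it.
```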
We provide quality checklists with each prompt to help you verify outputs:
- Accuracy Score: How factually correct is the response?
- Completeness: Does it cover all requested elements?
- Clarity: Is the output easy to understand and actionable?
- Safety: Are there any harmful or biased elements?
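If you want to track these checks programmatically, the four dimensions can be recorded as a simple review object. This is a hypothetical sketch: the 1–5 scale, the 3.5 average threshold, and treating safety as a hard gate are all assumptions, not part of the repository's checklists:

```javascript
// A minimal review record mirroring the four checklist dimensions above.
// Scores are 1-5 self-assessments; `passes` applies an assumed threshold.
function reviewOutput(scores) {
  const required = ['accuracy', 'completeness', 'clarity', 'safety'];
  for (const key of required) {
    if (typeof scores[key] !== 'number' || scores[key] < 1 || scores[key] > 5) {
      throw new Error(`Missing or invalid score: ${key}`);
    }
  }
  const average = required.reduce((sum, k) => sum + scores[k], 0) / required.length;
  // Safety acts as a hard gate: any safety score below 4 fails the review.
  return { average, passes: scores.safety >= 4 && average >= 3.5 };
}
```

A hard gate on safety reflects the checklist's intent: a fluent but harmful response should never pass on averages alone.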
We warmly welcome contributions from the community, whether you're fixing a typo, refining an existing prompt, or proposing an entirely new prompt concept. Contributions we especially welcome include:
- New domain-specific prompts (Healthcare, Finance, Legal, etc.)
- Improved prompt variants for existing categories
- Translations to other languages
- Better examples and use cases
- Performance improvements or refinements
- Documentation and guides
Before making a contribution, please review our Contributing Guidelines (CONTRIBUTING.md).
To maintain a healthy and inclusive community environment, all participants are expected to adhere to our Community Code of Conduct (CODE_OF_CONDUCT.md).
Our approach to AI safety and ethics is detailed in RESPONSIBLE_AI.md. This document outlines:
- Principles for safe prompt design
- Bias mitigation strategies
- Privacy and data handling
- Continuous improvement processes
Q: Can I use these prompts commercially? A: Yes! Under the Apache 2.0 license, you can use these prompts for commercial purposes. See LICENSE for details.
Q: Can I modify the prompts? A: Absolutely. We encourage you to adapt them to your specific needs and workflows.
Q: Will these work with my AI model? A: These prompts are designed to be model-agnostic. They should work with any modern LLM, though you may need minor adjustments based on your specific model's capabilities.
Q: How often are prompts updated? A: We regularly refine existing prompts based on community feedback and new best practices. Check the CHANGELOG for updates.
Q: Can I suggest improvements? A: Yes! Please open an issue or submit a pull request with your suggestions.
- Add healthcare domain prompts
- Add financial services prompts
- Add legal document analysis prompts
- Add HR/recruitment prompts
- Multi-language support
- Prompt versioning system
- Community voting on new prompts
- Automated prompt testing framework
This project is licensed under the Apache License 2.0. Complete terms are available in the LICENSE file.
Having issues or questions?
- Open an Issue: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@tyloai.com