Test your AI agents with real-time streaming conversations.
The agent testing feature allows you to interact with your configured agents in a chat interface before deploying them to production tasks. This helps validate agent behavior, system prompts, and model configurations.
- Navigate to the Agents page
- Find the agent you want to test
- Click the Test Agent button at the bottom of the agent card
- Or click the three-dot menu and select Test Agent
- Click on an agent to view its details
- Click the Test Agent button in the header
- The test modal will open
The test interface provides:
- Real-time streaming responses: See the agent's response as it's generated
- Chat history: View the conversation history during the test session
- Message input: Send messages to test different scenarios
- Error handling: Clear error messages if something goes wrong
- Enter: Send message
- Shift + Enter: New line in message
When you test an agent, the system uses:
- Agent's configured system prompt
- Selected LLM model and provider
- Temperature, max tokens, and other model parameters
- Agent's capabilities and skills (if configured)
- Test edge cases: Try unusual inputs to see how the agent handles them
- Verify system prompt: Check if the agent follows its configured instructions
- Test different scenarios: Use various types of questions relevant to the agent's role
- Check response quality: Evaluate if the model and temperature settings are appropriate
- Test sessions are temporary and not saved
- Test conversations don't affect agent statistics or task counts
- Memory and knowledge base access may be limited in test mode
- Check that at least one LLM provider is configured in Settings
- Verify the provider is running and accessible
- Check network connectivity
- Verify the selected model is available
- Check LLM provider logs for errors
- Ensure the model has sufficient resources
- Check network connection
- Verify max tokens setting isn't too low
- Review backend logs for errors