Alita SDK, built on top of Langchain, enables the creation of intelligent agents within the Alita Platform using project-specific prompts and data sources. This SDK is designed for developers looking to integrate advanced AI capabilities into their projects with ease.
Before you begin, ensure you have the following requirements met:
- Python 3.10+
- An active deployment of Project Alita
- Access to a personal project
It is recommended to use a Python virtual environment to avoid dependency conflicts and keep your environment isolated.
For Unix/macOS:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

For Windows:

```bash
python -m venv .venv
.venv\Scripts\activate
```

Install all required dependencies for the SDK and toolkits:

```bash
pip install -U '.[all]'
```

Before running your Alita agents, set up your environment variables. Create a `.env` file in the root directory of your project and include your Project Alita credentials:
```
DEPLOYMENT_URL=<your_deployment_url>
API_KEY=<your_api_key>
PROJECT_ID=<your_project_id>
```

NOTE: these values can be found on your Elitea platform configuration page.

By default, the CLI looks for .env files in the following order:
1. `.alita/.env` (recommended)
2. `.env` in the current directory
You can override this by setting the ALITA_ENV_FILE environment variable:
```bash
export ALITA_ENV_FILE=/path/to/your/.env
alita-cli agent chat
```

The Alita SDK includes a powerful CLI for interactive agent chat sessions.
```bash
# Interactive selection (shows all available agents + direct chat option)
alita-cli agent chat

# Chat with a specific local agent
alita-cli agent chat .alita/agents/my-agent.agent.md

# Chat with a platform agent
alita-cli agent chat my-agent-name
```

You can start a chat session directly with the LLM without any agent configuration:

```bash
alita-cli agent chat
# Select option 1: "Direct chat with model (no agent)"
```

This is useful for quick interactions or testing without setting up an agent.
During a chat session, you can use the following commands:
| Command | Description |
|---|---|
| `/help` | Show all available commands |
| `/model` | Switch to a different model (preserves chat history) |
| `/add_mcp` | Add an MCP server from your local `mcp.json` (preserves chat history) |
| `/add_toolkit` | Add a toolkit from `$ALITA_DIR/tools` (preserves chat history) |
| `/clear` | Clear conversation history |
| `/history` | Show conversation history |
| `/save` | Save conversation to file |
| `exit` | End conversation |
The chat interface includes readline-based input enhancements:
| Feature | Key/Action |
|---|---|
| Tab completion | Press Tab to autocomplete commands (e.g., `/mo` → `/model`) |
| Command history | ↑ / ↓ arrows to navigate through previous messages |
| Cursor movement | ← / → arrows to move within the current line |
| Start of line | Ctrl+A jumps to the beginning of the line |
| End of line | Ctrl+E jumps to the end of the line |
| Delete word | Ctrl+W deletes the word before cursor |
| Clear line | Ctrl+U clears from cursor to beginning of line |
Use /model to switch models on the fly:
```
> /model
🧠 Select a model:
  #  Model            Type
  1  gpt-4o           openai
  2  gpt-4o-mini      openai
  3  claude-3-sonnet  anthropic
Select model number: 1
✅ Selected: gpt-4o
╭────────────────────────────────────────────────────────────────╮
│ ℹ Model switched to gpt-4o. Agent state reset, chat history    │
│   preserved.                                                   │
╰────────────────────────────────────────────────────────────────╯
```
Use /add_mcp to add MCP servers during a chat session. Servers are loaded from your local mcp.json file (typically at .alita/mcp.json):
```
> /add_mcp
Select an MCP server to add:
  #  Server      Type   Command/URL
  1  playwright  stdio  npx @playwright/mcp@latest
  2  filesystem  stdio  npx @anthropic/mcp-fs
Select MCP server number: 1
✅ Selected: playwright
╭────────────────────────────────────────────────────────────────╮
│ ℹ Added MCP: playwright. Agent state reset, chat history       │
│   preserved.                                                   │
╰────────────────────────────────────────────────────────────────╯
```
Use /add_toolkit to add toolkits from your $ALITA_DIR/tools directory (default: .alita/tools):
```
> /add_toolkit
🧰 Select a toolkit to add:
  #  Toolkit  Type    File
  1  jira     jira    jira-config.json
  2  github   github  github-config.json
Select toolkit number: 1
✅ Selected: jira
╭────────────────────────────────────────────────────────────────╮
│ ℹ Added toolkit: jira. Agent state reset, chat history         │
│   preserved.                                                   │
╰────────────────────────────────────────────────────────────────╯
```
To use the SDK with Streamlit for local development, follow these steps:

- Ensure you have Streamlit installed:

  ```bash
  pip install streamlit
  ```

- Run the Streamlit app:

  ```bash
  streamlit run alita_local.py
  ```

  Note: If Streamlit throws an error related to PyTorch, add the extra argument `--server.fileWatcherType none`. The file watcher sometimes tries to index PyTorch modules, and since they are C modules it raises an exception.
Example of launch configuration for Streamlit:
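For example, a VS Code `launch.json` entry along these lines can be used; the debugger type and the `.venv`/`.env` paths below are assumptions, so adjust them to where your Streamlit executable and env file actually live:

```json
{
    // Hypothetical VS Code launch configuration for debugging the Streamlit app.
    "name": "Streamlit: alita_local",
    "type": "debugpy",
    "request": "launch",
    "program": "${workspaceFolder}/.venv/bin/streamlit",
    "args": ["run", "alita_local.py", "--server.fileWatcherType", "none"],
    "envFile": "${workspaceFolder}/.env",
    "justMyCode": false
}
```

Setting `justMyCode` to `false` lets the debugger step into the SDK's toolkit code, which is useful for the debugging workflow described below.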
Important: Make sure to set the correct path to your `.env` file and to the `streamlit` executable.

The Alita SDK includes a Streamlit web application that provides a user-friendly interface for interacting with Alita agents. This application is powered by the streamlit.py module included in the SDK.
- Agent Management: Load and interact with agents created in the Alita Platform
- Authentication: Easily connect to your Alita/Elitea deployment using your credentials
- Chat Interface: User-friendly chat interface for communicating with your agents
- Toolkit Integration: Add and configure toolkits for your agents
- Session Management: Maintain conversation history and thread state
- Authentication:
  - Navigate to the "Alita Settings" tab in the sidebar
  - Enter your deployment URL, API key, and project ID
  - Click "Login" to authenticate with the Alita Platform
- Loading an Agent:
  - After authentication, you'll see a list of available agents
  - Select an agent from the dropdown menu
  - Specify a version name (default: 'latest')
  - Optionally, select an agent type and add custom tools
  - Click "Load Agent" to initialize the agent
- Interacting with the Agent:
  - Use the chat input at the bottom of the screen to send messages to the agent
  - The agent's responses will appear in the chat window
  - Your conversation history is maintained until you clear it
- Clearing Data:
  - Use the "Clear Chat" button to reset the conversation history
  - Use the "Clear Config" button to reset toolkit configurations
This web application simplifies the process of testing and interacting with your Alita agents, making development and debugging more efficient.
Toolkits are part of the Alita SDK (`alita-sdk/tools`), so you can use them in your local development environment as well. To debug a toolkit, use the `alita_local.py` file, a Streamlit application that lets you interact with your agents and toolkits while setting breakpoints in the code of the corresponding tool.
Assume we want to debug a user's agent called `Questionnaire` that uses the Confluence toolkit and its `get_pages_with_label` method.
Pre-requisites:

- Make sure you have set the correct variables in your `.env` file
- Set breakpoints in the `alita_sdk/tools/confluence/api_wrapper.py` file, in the `get_pages_with_label` method

Steps:

- Run the Streamlit app (using debug):

  ```bash
  streamlit run alita_local.py
  ```

- Log in to the application with your credentials (populated from the `.env` file):
  - Enter your deployment URL, API key, and project ID (optionally)
  - Click "Login" to authenticate with the Alita Platform
- Select the `Questionnaire` agent
- Query the agent with the required prompt: ``get pages with label `ai-mb` ``
- Debug the agent's code: execution stops at your breakpoint when the tool is invoked
A toolkit is a collection of pre-built tools and functionalities designed to simplify the development of AI agents. Toolkits provide developers with the necessary resources, such as APIs and data connectors to the required services and systems. As an initial step, decide on the toolkit's capabilities in order to design the required tools and their args schemas. Example of the Testrail toolkit's capabilities:
- get_test_cases: Retrieve test cases from Testrail
- get_test_runs: Retrieve test runs from Testrail
- get_test_plans: Retrieve test plans from Testrail
- create_test_case: Create a new test case in Testrail
- etc.
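As a sketch of this design step, each capability can first be drafted as a method signature with its expected arguments; the class, parameter names, and types below are hypothetical, not the actual Testrail toolkit API:

```python
# Hypothetical planning sketch: one method per capability with its key arguments.
class TestrailCapabilities:
    def get_test_cases(self, project_id: int, suite_id: int | None = None) -> list[dict]:
        """Retrieve test cases for a project, optionally limited to a single suite."""
        ...

    def get_test_runs(self, project_id: int) -> list[dict]:
        """Retrieve test runs for a project."""
        ...

    def create_test_case(self, section_id: int, title: str) -> dict:
        """Create a new test case in the given section."""
        ...
```

Once the capabilities and their arguments are agreed on, they translate directly into the tool methods and args schemas described in the steps below.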
Create a new package under alita_sdk/tools/ for your toolkit, e.g., alita_sdk/tools/mytoolkit/.
Create an api_wrapper.py file in your toolkit directory. This file should:
- Define a config class (subclassing `BaseToolApiWrapper`).
- Implement methods for each tool/action you want to expose.
- Provide a `get_available_tools()` method that returns tools' metadata and argument schemas.
Note:
- The args schema should be defined using Pydantic models, which helps validate the input parameters for each tool.
- Make sure tool descriptions are clear and concise, as the LLM uses them to decide on the tool execution chain.
- Clearly define the input parameters for each tool and whether each one is required or optional, as the LLM uses them to generate the correct input for the tool (refer to https://docs.pydantic.dev/2.2/migration/#required-optional-and-nullable-fields if needed).
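For instance, the required/optional distinction can be expressed like this; this is an illustrative sketch, and the model and field names are made up:

```python
from typing import Optional

from pydantic import Field, create_model

# Illustrative args schema: how required vs. optional parameters are declared.
SearchPagesModel = create_model(
    "SearchPagesModel",
    query=(str, Field(description="Search query")),                                    # required: no default
    limit=(int, Field(default=10, description="Maximum number of results")),           # optional: has a default
    label=(Optional[str], Field(default=None, description="Filter results by label"))  # optional and nullable
)
```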
Example:
```python
# alita_sdk/tools/mytoolkit/api_wrapper.py
from ...elitea_base import BaseToolApiWrapper
from pydantic import create_model, Field


class MyToolkitConfig(BaseToolApiWrapper):
    # Define config fields (e.g., API keys, endpoints)
    url: str
    email: str | None = None
    password: str | None = None

    def do_something(self, param1: str):
        """Perform an action with param1."""
        # Implement your logic here
        return {"result": f"Did something with {param1}"}

    def get_available_tools(self):
        return [
            {
                "name": "do_something",
                "ref": self.do_something,
                "description": self.do_something.__doc__,
                "args_schema": create_model(
                    "DoSomethingModel",
                    param1=(str, Field(description="Parameter 1"))
                ),
            }
        ]
```

Create an `__init__.py` file in your toolkit directory. This file should:
- Define a `toolkit_config_schema()` static method for the toolkit's configuration (this data is used to render the toolkit configuration card in the UI).
- Implement a `get_tools(tool)` function that reads the toolkit's configuration parameters from the UI configuration.
- Implement a `get_toolkit()` class method to instantiate the tools.
- Return a list of tool instances via `get_tools()`.

Example:
```python
# alita_sdk/tools/mytoolkit/__init__.py
from pydantic import BaseModel, Field, create_model
from langchain_core.tools import BaseToolkit, BaseTool
from .api_wrapper import MyToolkitConfig
from ...base.tool import BaseAction

name = "mytoolkit"


def get_tools(tool):
    return MyToolkit().get_toolkit(
        selected_tools=tool['settings'].get('selected_tools', []),
        url=tool['settings']['url'],
        password=tool['settings'].get('password', None),
        email=tool['settings'].get('email', None),
        toolkit_name=tool.get('toolkit_name')
    ).get_tools()


class MyToolkit(BaseToolkit):
    tools: list[BaseTool] = []

    @staticmethod
    def toolkit_config_schema() -> BaseModel:
        return create_model(
            name,
            url=(str, Field(title="Base URL", description="Base URL for the API")),
            email=(str, Field(title="Email", description="Email for authentication", default=None)),
            password=(str, Field(title="Password", description="Password for authentication", default=None)),
            selected_tools=(list[str], Field(title="Selected Tools", description="List of tools to enable", default=[])),
        )

    @classmethod
    def get_toolkit(cls, selected_tools=None, toolkit_name=None, **kwargs):
        config = MyToolkitConfig(**kwargs)
        available_tools = config.get_available_tools()
        tools = []
        for tool in available_tools:
            if selected_tools and tool["name"] not in selected_tools:
                continue
            tools.append(BaseAction(
                api_wrapper=config,
                name=tool["name"],
                description=tool["description"],
                args_schema=tool["args_schema"]
            ))
        return cls(tools=tools)

    def get_tools(self) -> list[BaseTool]:
        return self.tools
```

Update the `__init__.py` file in the `alita_sdk/tools/` directory to include your new toolkit:
```python
# alita_sdk/tools/__init__.py

def get_tools(tools_list, alita: 'AlitaClient', llm: 'LLMLikeObject', *args, **kwargs):
    ...
    # add your toolkit here with the proper type
    elif tool['type'] == 'mytoolkittype':
        tools.extend(get_mytoolkit(tool))


# add the toolkit's config schema
def get_toolkits():
    return [
        ...,
        MyToolkit.toolkit_config_schema(),
    ]
```

To test your toolkit, you can use the Streamlit application (`alita_local.py`) to load and interact with it.
- Log in to the platform
- Select the `Toolkit testing` tab
- Choose your toolkit from the dropdown menu
- Adjust the configuration parameters as needed, and then test the tools by sending queries to them

NOTE: use function mode for testing the required tool.