| title | Getting Started |
|---|---|
| description | Get up and running with your first Agent. |
Install the Smallest AI SDK using pip:

```bash
pip install smallestai
```

You will also need to set your OpenAI API key:

```bash
export OPENAI_API_KEY=sk_...
```

Let's build a simple "Echo Agent" that listens to user input and responds using an LLM.
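If the key is missing, the first LLM call will fail later with an opaque authentication error, so it can be worth checking for it up front. A minimal sketch — the `require_api_key` helper is our own illustration, not part of the SDK:

```python
import os


def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the named API key from the environment, failing fast if unset."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; run `export {name}=...` first")
    return key
```

Calling this at startup turns a confusing mid-session failure into an immediate, readable error.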
Create a file named my_agent.py:
```python
# my_agent.py
from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient


class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my-agent")
        # Initialize the OpenAI client with a model
        self.llm = OpenAIClient(model="gpt-4o-mini")

    async def generate_response(self):
        # Stream the response from the LLM based on the current context
        # self.context.messages provides access to the conversation history
        async for chunk in await self.llm.chat(self.context.messages, stream=True):
            if chunk.content:
                yield chunk.content
```

Create a file named server.py to run your agent in a session:
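Note that `generate_response` is an async generator: each chunk is yielded onward as soon as the LLM produces it, rather than waiting for the full reply. Stripped of the SDK, the streaming pattern looks like this — `fake_chat` is a toy stand-in for `self.llm.chat`, not a real API:

```python
import asyncio


async def fake_chat(messages):
    # Toy stand-in for a streaming LLM call: yields one chunk at a time
    for word in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # simulate waiting on the network
        yield word


async def generate_response(messages):
    # Re-yield each chunk as it arrives, as the agent node does
    async for chunk in fake_chat(messages):
        yield chunk


async def collect(messages):
    # Consume the stream and join the chunks into one string
    return "".join([chunk async for chunk in generate_response(messages)])


print(asyncio.run(collect([{"role": "user", "content": "hi"}])))  # Hello, world!
```

Because the caller receives chunks incrementally, the session can start speaking (or rendering text) before the model has finished generating.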
```python
# server.py
from smallestai.atoms.agent.server import AtomsApp
from smallestai.atoms.agent.session import AgentSession

from my_agent import MyAgent


async def setup(session: AgentSession):
    # Add the agent node to the session
    session.add_node(MyAgent())
    # Start the session
    await session.start()


if __name__ == "__main__":
    # Initialize the AtomsApp with the setup handler
    app = AtomsApp(setup_handler=setup)
    app.run(port=8080)
```

Run your server:
```bash
python server.py
```

Your agent is now listening for WebSocket connections on port 8080!
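The server speaks WebSocket, so a browser visit or a plain `curl` won't exercise it fully, but you can at least confirm the port is accepting TCP connections. A small generic helper for that check — our own sketch, not part of the SDK:

```python
import socket


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    # Return True if a TCP connection to host:port succeeds within the timeout
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print("server is up" if port_open("localhost", 8080) else "nothing on port 8080")
```

If the port is open, the next step is to connect with a WebSocket client that speaks the Atoms session protocol.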