---
title: Getting Started
description: From zero to a running AI agent in 5 minutes.
---

This guide will walk you through installing the SDK, writing your first intelligent agent, and running it.

## 1. Prerequisites

**OpenAI API Key required.** Set it as an environment variable before running your agent:

```bash
export OPENAI_API_KEY="your-key-here"
```
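If the key is missing, the agent will only fail later, at its first LLM call, with an opaque authentication error. A fail-fast check at startup can surface the problem immediately. This is a minimal sketch; `require_api_key` is a hypothetical helper, not part of the SDK:

```python
import os

def require_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Return the key from the environment, or fail fast with a clear message."""
    key = os.getenv(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set; run `export {env_var}=...` before starting the agent."
        )
    return key
```

Calling this once at the top of your entry point turns a confusing mid-session failure into an actionable startup error.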

## 2. Installation

```bash
pip install smallestai
```

## 3. Write Your First Agent

We need two files: one for the agent logic, and one to run the application.

This file defines *what* your agent does.
```python my_agent.py
import os
from smallestai.atoms.agent.nodes import OutputAgentNode
from smallestai.atoms.agent.clients.openai import OpenAIClient

class MyAgent(OutputAgentNode):
    def __init__(self):
        super().__init__(name="my-agent")
        self.llm = OpenAIClient(
            model="gpt-4o-mini",
            api_key=os.getenv("OPENAI_API_KEY")
        )

    async def generate_response(self):
        async for chunk in await self.llm.chat(self.context.messages, stream=True):
            if chunk.content:
                yield chunk.content
```
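The `generate_response` method above is an async generator: it yields text chunks to the session as the LLM streams them, skipping chunks that carry no text. The same pattern, isolated from the SDK with a stubbed client (the `Chunk` class and `fake_stream` function are stand-ins, not SDK names), looks like this:

```python
import asyncio

class Chunk:
    """Minimal stand-in for a streamed LLM response chunk."""
    def __init__(self, content: str):
        self.content = content

async def fake_stream(messages):
    # Stub for a streaming chat call: yields chunks, some of which
    # carry no text (e.g. role or tool-call deltas).
    for piece in ["Hello", "", ", ", "world!"]:
        yield Chunk(piece)

async def generate_response(messages):
    # Same shape as the agent method: skip empty chunks, yield the text.
    async for chunk in fake_stream(messages):
        if chunk.content:
            yield chunk.content

async def collect():
    parts = [p async for p in generate_response([{"role": "user", "content": "hi"}])]
    return "".join(parts)

print(asyncio.run(collect()))  # prints: Hello, world!
```

Because the method yields instead of returning one final string, the caller can forward tokens to the user as they arrive rather than waiting for the full completion.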
This file is the entry point that runs your agent.
```python main.py
from smallestai.atoms.agent.server import AtomsApp
from smallestai.atoms.agent.session import AgentSession
from my_agent import MyAgent

async def on_start(session: AgentSession):
    session.add_node(MyAgent())
    await session.start()
    await session.wait_until_complete()

if __name__ == "__main__":
    app = AtomsApp(setup_handler=on_start)
    app.run()
```

<Tip>
  Your entry point can be named anything (`app.py`, `run.py`, etc.). When deploying, specify it with `--entry-point your_file.py`.
</Tip>

## 4. Run Your Agent

Once your files are ready, you have two options: run the agent locally, or deploy it to the cloud.

**Option A: Run locally.** For development and testing, run the file directly:
```bash
python main.py
```

This starts a WebSocket server on `localhost:8080`. In a separate terminal, connect to it:

```bash
smallestai agent chat
```

No account or deployment is needed.

**Option B: Deploy to the cloud.** To have SmallestAI host your agent (for production, API access, or phone calls):
<Note>
  **Prerequisite:** You must first create an agent on the [Atoms platform](https://atoms.smallest.ai). The `agent init` command links your local code to that agent.
</Note>

<Steps>
  <Step title="Login">
    ```bash
    smallestai auth login
    ```
  </Step>
  <Step title="Initialize">
    Link your directory to an existing agent on the platform.
    ```bash
    smallestai agent init
    ```
  </Step>
  <Step title="Deploy">
    Push your code to the cloud.
    ```bash
    smallestai agent deploy --entry-point main.py
    ```
  </Step>
  <Step title="Make Live">
    Run `smallestai agent builds`, select your build, and choose **Make Live**.
  </Step>
</Steps>

## What's Next?

Give your agent tools such as calculators, search, and external APIs. Connect multiple agents for complex workflows.