Merged
502 changes: 502 additions & 0 deletions .agents/skills/strandsagents/SKILL.md

Large diffs are not rendered by default.

434 changes: 434 additions & 0 deletions .agents/skills/strandsagents/references/docker.md


430 changes: 430 additions & 0 deletions .agents/skills/strandsagents/references/mcp.md


405 changes: 405 additions & 0 deletions .agents/skills/strandsagents/references/multi-agent.md


422 changes: 422 additions & 0 deletions .agents/skills/strandsagents/references/observability.md


128 changes: 128 additions & 0 deletions .agents/skills/strandsagents/references/openai.md
# OpenAI Integration

## Basic Configuration

```python
from strands import Agent
from strands.models import OpenAIModel

# OpenAI with API key
openai_model = OpenAIModel(
    model_id="gpt-4o",
    client_args={"api_key": "your-openai-api-key"},
    params={"temperature": 0.7, "max_tokens": 2048}
)
agent = Agent(model=openai_model)
response = agent("Hello, how are you?")
```

## Environment Variables

Set your API key via environment variable:

```bash
export OPENAI_API_KEY="your-openai-api-key"
```

Then construct the model without passing the key explicitly:

```python
from strands import Agent
from strands.models import OpenAIModel

openai_model = OpenAIModel(
    model_id="gpt-4o",
    params={"temperature": 0.7, "max_tokens": 2048}
)
agent = Agent(model=openai_model)
```

## Available Models

- `gpt-4o` - Latest GPT-4 Omni, strongest general-purpose model
- `gpt-4o-mini` - Smaller and faster, cost-effective for most tasks
- `gpt-4-turbo` - GPT-4 Turbo
- `gpt-3.5-turbo` - Older model, fast and inexpensive
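
When choosing between these models, a rough cost comparison helps. The per-token prices below are illustrative placeholders only, not current OpenAI pricing — always check the official pricing page before relying on the numbers:

```python
# Illustrative per-1M-token prices in USD; placeholders, NOT current OpenAI pricing.
PRICES = {
    "gpt-4o":      {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model_id: str, input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of one request under the placeholder prices above."""
    p = PRICES[model_id]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

Even with placeholder numbers, the ratio between models is the useful signal: a `gpt-4o-mini` call is more than an order of magnitude cheaper than the same call against `gpt-4o`.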

## Model Parameters

```python
openai_model = OpenAIModel(
    model_id="gpt-4o",
    params={
        "temperature": 0.7,      # 0.0-2.0, controls randomness
        "max_tokens": 2048,      # Maximum response length
        "top_p": 0.9,            # Nucleus sampling
        "frequency_penalty": 0,  # -2.0 to 2.0
        "presence_penalty": 0    # -2.0 to 2.0
    }
)
```

## Streaming Responses

```python
from strands import Agent
from strands.models import OpenAIModel
import asyncio

async def stream_openai():
    openai_model = OpenAIModel(
        model_id="gpt-4o",
        client_args={"api_key": "your-openai-api-key"}
    )
    agent = Agent(model=openai_model)

    async for event in agent.stream_async("Tell me a story"):
        if "data" in event:
            print(event["data"], end="", flush=True)

asyncio.run(stream_openai())
```

## With Custom Tools

```python
from strands import Agent, tool
from strands.models import OpenAIModel

@tool
def get_weather(city: str) -> dict:
    """Get weather for a city."""
    return {
        "status": "success",
        "content": [{"text": f"Weather in {city}: Sunny, 22°C"}]
    }

openai_model = OpenAIModel(
    model_id="gpt-4o",
    client_args={"api_key": "your-openai-api-key"}
)
agent = Agent(model=openai_model, tools=[get_weather])
response = agent("What's the weather in Paris?")
```

## Error Handling

```python
from strands import Agent
from strands.models import OpenAIModel

try:
    openai_model = OpenAIModel(
        model_id="gpt-4o",
        client_args={"api_key": "your-openai-api-key"}
    )
    agent = Agent(model=openai_model)
    response = agent("Hello!")
except Exception as e:
    print(f"Error: {e}")
```

## Best Practices

1. **API Key Security**: Never hardcode API keys. Use environment variables or secure vaults.
2. **Rate Limits**: OpenAI has rate limits. Implement retry logic for production.
3. **Cost Management**: Monitor token usage via `result.metrics` to control costs.
4. **Model Selection**: Use `gpt-4o-mini` for cost-effective tasks, `gpt-4o` for complex reasoning.
5. **Temperature**: Lower (0.0-0.3) for deterministic outputs, higher (0.7-1.0) for creative tasks.
79 changes: 79 additions & 0 deletions .agents/skills/strandsagents/references/quickstart.md
# Strands Agents - Quick Start

## Installation

```bash
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate

# Install Strands and tools
pip install strands-agents strands-agents-tools
```

## Basic Agent

```python
from strands import Agent

# Default Bedrock model
agent = Agent()
response = agent("What is the capital of France?")
print(response)
```

## Agent with Custom Model

```python
from strands import Agent
from strands.models import BedrockModel

agent = Agent(
    model=BedrockModel(model_id="us.anthropic.claude-sonnet-4-20250514-v1:0"),
    system_prompt="You are a helpful coding assistant. Be concise and provide examples."
)
result = agent("How do I read a JSON file in Python?")
print(result.message)
print(result.stop_reason) # "end_turn", "max_tokens", etc.
print(result.metrics) # Performance metrics
```

## Agent with Built-in Tools

```python
from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
response = agent("What is the square root of 1764?")
```

## Agent with Initial State

```python
from strands import Agent

agent = Agent(
    messages=[
        {"role": "user", "content": [{"text": "My name is Alice"}]},
        {"role": "assistant", "content": [{"text": "Nice to meet you, Alice!"}]}
    ],
    state={"user_preference": "dark_mode"}
)
response = agent("What's my name?") # Agent remembers: "Your name is Alice"
```

## Async Streaming

```python
from strands import Agent
import asyncio

async def stream_response():
    agent = Agent()
    async for event in agent.stream_async("Tell me a story"):
        if "data" in event:
            print(event["data"], end="", flush=True)

asyncio.run(stream_response())
```