Merged
94 changes: 42 additions & 52 deletions package-lock.json

(Generated file; diff not rendered by default.)

2 changes: 1 addition & 1 deletion package.json
```diff
@@ -38,7 +38,7 @@
     "@types/react": "^18.3.18",
     "autoprefixer": "^10.4.21",
     "lucide-react": "^0.460.0",
-    "next": "^15.2.1",
+    "next": "15.2.8",
     "nextra": "3.3.1",
     "nextra-docs-template": "0.0.11",
     "nextra-theme-docs": "3.3.1",
```
4 changes: 0 additions & 4 deletions pages/docs/concepts/agent.mdx
```diff
@@ -16,13 +16,11 @@ Agents in Rig provide a high-level abstraction for working with LLMs, combining
 An Agent consists of:
 
 1. **Base Components**
-
    - Completion Model (e.g., GPT-4, Claude)
    - System Prompt (preamble)
    - Configuration (temperature, max tokens)
 
 2. **Context Management**
-
    - Static Context: Always available documents
    - Dynamic Context: RAG-based contextual documents
    - Vector Store Integration
@@ -142,13 +140,11 @@ Bear in mind that while prompt hooks are not blocking, it's generally advisable
 ## Best Practices
 
 1. **Context Management**
-
    - Keep static context minimal and focused
    - Use dynamic context for large knowledge bases
    - Consider context window limitations
 
 2. **Tool Integration**
-
    - Prefer static tools for core functionality
    - Use dynamic tools for context-specific operations
    - Implement proper error handling in tools
```
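The agent anatomy listed in this hunk (preamble, static context, configuration) can be sketched in plain Rust. The types below are illustrative stand-ins, not Rig's actual API:

```rust
// Simplified model of the Agent composition described in the docs above.
// NOT Rig's real types; names and structure are illustrative only.
struct Agent {
    preamble: String,            // system prompt
    static_context: Vec<String>, // always-available documents
    temperature: f32,            // sampling configuration
}

impl Agent {
    /// Assemble the text sent to the completion model:
    /// preamble first, then static context, then the user prompt.
    fn build_prompt(&self, user_prompt: &str) -> String {
        let mut parts = vec![self.preamble.clone()];
        parts.extend(self.static_context.iter().cloned());
        parts.push(user_prompt.to_string());
        parts.join("\n\n")
    }
}

fn main() {
    let agent = Agent {
        preamble: "You are a helpful assistant.".into(),
        static_context: vec!["Doc A: Rig is a Rust LLM framework.".into()],
        temperature: 0.7,
    };
    println!("temperature={}", agent.temperature);
    println!("{}", agent.build_prompt("What is Rig?"));
}
```

Dynamic (RAG-based) context would be fetched from a vector store per request and appended the same way.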
4 changes: 0 additions & 4 deletions pages/docs/concepts/completion.mdx
```diff
@@ -135,15 +135,13 @@ let request = model.completion_request("prompt")
 ### Request Components
 
 1. **Core Elements**
-
    - Prompt text
    - System preamble
    - Chat history
    - Temperature
    - Max tokens
 
 2. **Context Management**
-
    - Document attachments
    - Metadata handling
    - Formatting controls
@@ -242,13 +240,11 @@ impl CompletionModel for CustomProvider {
 ## Best Practices
 
 1. **Interface Selection**
-
    - Use `Prompt` for simple interactions
    - Use `Chat` for conversational flows
    - Use `Completion` for fine-grained control
 
 2. **Error Handling**
-
    - Handle provider-specific errors
    - Implement graceful fallbacks
    - Log raw responses for debugging
```
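The "implement graceful fallbacks" practice in this hunk can be sketched with a stand-in trait (not Rig's actual `CompletionModel`): try providers in order and return the first success.

```rust
// Illustrative fallback chain; `CompletionModel` here is a stand-in trait.
trait CompletionModel {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct Flaky;    // always fails, simulating a provider outage
struct Reliable; // always succeeds

impl CompletionModel for Flaky {
    fn complete(&self, _p: &str) -> Result<String, String> {
        Err("rate limited".into())
    }
}

impl CompletionModel for Reliable {
    fn complete(&self, p: &str) -> Result<String, String> {
        Ok(format!("echo: {p}"))
    }
}

/// Try each model in order, returning the first success.
fn complete_with_fallback(
    models: &[&dyn CompletionModel],
    prompt: &str,
) -> Result<String, String> {
    let mut last_err = "no models configured".to_string();
    for m in models {
        match m.complete(prompt) {
            Ok(out) => return Ok(out),
            Err(e) => last_err = e, // a real implementation would log this
        }
    }
    Err(last_err)
}

fn main() {
    let models: [&dyn CompletionModel; 2] = [&Flaky, &Reliable];
    println!("{:?}", complete_with_fallback(&models, "hi"));
}
```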
2 changes: 0 additions & 2 deletions pages/docs/concepts/embeddings.mdx
```diff
@@ -92,13 +92,11 @@ qdrant.insert_documents(embeddings).await?;
 ## Best Practices
 
 1. **Document Preparation**
-
    - Clean and normalize text before embedding
    - Consider chunking large documents
    - Remove irrelevant embedding content
 
 2. **Error Handling**
-
    - Handle provider API errors gracefully
    - Validate vector dimensions
    - Check for empty or invalid input
```
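The "clean and normalize" and "chunk large documents" advice in this hunk can be sketched with two std-only helpers (the word-based chunking strategy is one simple option among many):

```rust
/// Collapse runs of whitespace and trim, a minimal normalization pass.
fn normalize(text: &str) -> String {
    text.split_whitespace().collect::<Vec<_>>().join(" ")
}

/// Split text into chunks of at most `max_words` words
/// so each chunk fits the embedding model's input limits.
fn chunk_words(text: &str, max_words: usize) -> Vec<String> {
    let words: Vec<&str> = text.split_whitespace().collect();
    words.chunks(max_words).map(|c| c.join(" ")).collect()
}

fn main() {
    let doc = "  Rig   supports\nembeddings  ";
    let clean = normalize(doc);
    println!("{:?}", chunk_words(&clean, 2));
}
```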
2 changes: 0 additions & 2 deletions pages/docs/concepts/extractors.mdx
```diff
@@ -130,13 +130,11 @@ impl<T: JsonSchema + for<'a> Deserialize<'a> + Serialize + Send + Sync> Tool for
 ## Best Practices
 
 1. **Structure Design**
-
    - Use `Option<T>` for optional fields
    - Keep structures focused and minimal
    - Document field requirements
 
 2. **Error Handling**
-
    - Handle both extraction and deserialization errors
    - Provide fallback values where appropriate
    - Log extraction failures for debugging
```
Expand Down
2 changes: 0 additions & 2 deletions pages/docs/concepts/loaders.mdx
```diff
@@ -130,13 +130,11 @@ impl<T: Loadable> Loadable for Result<T, PdfLoaderError> {
 ## Best Practices
 
 1. **Error Handling**
-
    - Use `ignore_errors()` for fault-tolerant processing
    - Handle specific error types when needed
    - Log errors appropriately
 
 2. **Resource Management**
-
    - Process files in batches
    - Consider memory usage with large files
    - Clean up temporary resources
```
Expand Down
1 change: 0 additions & 1 deletion pages/docs/integrations/model_providers/openai.mdx
```diff
@@ -54,7 +54,6 @@ let embedder = client.embedding_model(openai::TEXT_EMBEDDING_3_LARGE);
 1. **Tool Calling**: OpenAI models support function calling through a specialized JSON format. The provider automatically handles conversion between Rig's tool definitions and OpenAI's expected format.
 
 2. **Response Processing**: The provider implements special handling for:
-
    - Tool/function call responses
    - System messages
    - Token usage tracking
```
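The "specialized JSON format" mentioned in this hunk refers to OpenAI's tool definitions; a definition looks roughly like the following (`get_weather` is a made-up example, and the exact shape should be checked against OpenAI's current API reference):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string" }
      },
      "required": ["city"]
    }
  }
}
```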
4 changes: 0 additions & 4 deletions pages/docs/integrations/vector_stores/in_memory.mdx
```diff
@@ -220,13 +220,11 @@ match store.get_document::<MyDoc>("doc1") {
 ## Best Practices
 
 1. **Memory Management**:
-
    - Monitor memory usage with large datasets
    - Consider chunking large document additions
    - Use cloud-based vector stores for production deployments
 
 2. **Document Structure**:
-
    - Keep documents serializable
    - Avoid extremely large arrays
    - Consider using custom ID generation for meaningful identifiers
@@ -239,13 +237,11 @@ match store.get_document::<MyDoc>("doc1") {
 ## Limitations
 
 1. **Scalability**:
-
    - Limited by available RAM
    - No persistence between program runs
    - Single-machine only
 
 2. **Features**:
-
    - No built-in indexing optimizations
    - No metadata filtering
    - No automatic persistence
```
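The in-memory store characterized in this hunk (RAM-bound, no indexing optimizations) amounts to a map of embeddings searched by brute force; a minimal std-only illustration, not Rig's implementation:

```rust
use std::collections::HashMap;

/// Cosine similarity between two equal-length vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Brute-force nearest neighbor: scan every stored embedding.
fn top_match<'a>(
    store: &'a HashMap<String, Vec<f32>>,
    query: &[f32],
) -> Option<&'a str> {
    store
        .iter()
        .max_by(|(_, a), (_, b)| cosine(a, query).total_cmp(&cosine(b, query)))
        .map(|(id, _)| id.as_str())
}

fn main() {
    let mut store = HashMap::new();
    store.insert("doc1".to_string(), vec![1.0, 0.0]);
    store.insert("doc2".to_string(), vec![0.0, 1.0]);
    println!("{:?}", top_match(&store, &[0.9, 0.1]));
}
```

The O(n) scan per query is exactly why the docs steer production deployments toward indexed, cloud-based stores.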
4 changes: 0 additions & 4 deletions pages/docs/integrations/vector_stores/lancedb.mdx
```diff
@@ -206,12 +206,10 @@ impl RecordBatchDeserializer for Vec<RecordBatch> {
 ## Best Practices
 
 1. **Index Creation**:
-
    - Minimum of 256 rows required for IVF-PQ indexing
    - Choose appropriate distance metrics based on your use case
 
 2. **Schema Design**:
-
    - Use appropriate data types for columns
    - Consider embedding dimension requirements
 
@@ -222,12 +220,10 @@ impl RecordBatchDeserializer for Vec<RecordBatch> {
 ## Limitations and Considerations
 
 1. **Data Size**:
-
    - Local storage is suitable for smaller datasets
    - Use cloud storage for large-scale deployments
 
 2. **Index Requirements**:
-
    - IVF-PQ index requires minimum dataset size
    - Consider memory requirements for large indices
 
```
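The "choose appropriate distance metrics" advice in this hunk hinges on one distinction: Euclidean (L2) distance is sensitive to vector magnitude, cosine similarity is not. A std-only illustration:

```rust
/// Euclidean (L2) distance: grows with magnitude differences.
fn l2(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| (x - y).powi(2)).sum::<f32>().sqrt()
}

/// Cosine similarity: compares direction only, ignoring magnitude.
fn cosine_sim(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm = |v: &[f32]| v.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm(a) * norm(b))
}

fn main() {
    // Same direction, different magnitude: cosine says identical, L2 does not.
    let a = [2.0, 0.0];
    let b = [4.0, 0.0];
    println!("cosine = {}, l2 = {}", cosine_sim(&a, &b), l2(&a, &b));
}
```

Normalized embeddings make the two metrics rank results identically; unnormalized ones can rank them very differently.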
1 change: 0 additions & 1 deletion pages/docs/integrations/vector_stores/mongodb.mdx
```diff
@@ -136,7 +136,6 @@ The collection must have a vector search index configured:
 ## Special Considerations
 
 1. **Index Validation**: The implementation automatically validates:
-
    - Index existence
    - Vector dimensions
    - Similarity metric
```
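The vector search index the hunk refers to is defined on the Atlas side; a definition looks roughly like the following (the `embedding` path and 1536 dimensions are assumptions for illustration, and the schema should be checked against MongoDB's current documentation):

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```

The dimension count and similarity metric here are what the validation described above checks against the embedding model in use.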