Merged
8 changes: 8 additions & 0 deletions .env
@@ -0,0 +1,8 @@
# Placeholder environment variables for local documentation/examples
# Replace with real values before running database-backed samples.
DATABASE_URL=postgres://postgres:postgres@localhost:5432/rustapi_dev
REDIS_URL=redis://127.0.0.1:6379
OAUTH_CLIENT_ID=replace-me
OAUTH_CLIENT_SECRET=replace-me
OAUTH_REDIRECT_URI=http://127.0.0.1:3000/auth/callback
OIDC_ISSUER_URL=https://accounts.google.com
Copilot AI Mar 8, 2026

Committing a root .env file can cause accidental leakage if developers later add real secrets, and it can also unexpectedly affect local runs/CI that auto-load .env. Consider renaming this to .env.example (tracked) and adding .env to .gitignore, while keeping the docs pointing at the example file.

Suggested change
OIDC_ISSUER_URL=https://accounts.google.com
OIDC_ISSUER_URL=replace-me
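The rename-and-ignore migration the comment describes can be sketched as follows. The commands are shown in a throwaway demo repository so the sketch is runnable anywhere; in the actual project you would run only the `git mv`, `.gitignore`, and `cp` steps at the repository root.

```shell
# Sketch of the reviewer's suggestion: keep a tracked .env.example with
# placeholders and ignore the real .env so local secrets are never committed.
set -e
demo=$(mktemp -d)                 # throwaway repo for a self-contained demo
cd "$demo"
git init -q
git config user.email dev@example.com
git config user.name dev

printf 'OAUTH_CLIENT_SECRET=replace-me\n' > .env
git add .env
git commit -qm 'add placeholder env'

git mv .env .env.example          # placeholders stay under version control
echo '.env' > .gitignore          # the real .env is ignored from now on
git add .gitignore
git commit -qm 'track .env.example, ignore .env'

cp .env.example .env              # each developer fills in real values locally
git check-ignore .env             # confirms the local copy is untracked
```

Docs can then point at `.env.example`, and tools that auto-load `.env` only ever see each developer's local, untracked copy.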

7 changes: 6 additions & 1 deletion .github/workflows/benchmark.yml
@@ -28,8 +28,13 @@ jobs:
- name: Run Benchmarks
run: cargo bench --workspace | tee benchmark_results.txt

- name: Run Performance Snapshot
run: cargo run -p rustapi-core --example perf_snapshot --release | tee perf_snapshot.txt

- name: Upload Benchmark Results
uses: actions/upload-artifact@v4
with:
name: benchmark-results
path: benchmark_results.txt
path: |
benchmark_results.txt
perf_snapshot.txt
8 changes: 8 additions & 0 deletions .gitignore
@@ -12,3 +12,11 @@ assets/b9c93c1cd427d8f50e68dbd11ed2b000.jpg

docs/cookbook/book/
build_rs_cov.profraw
answer.md
docs/PRODUCTION_BASELINE.md
docs/PRODUCTION_CHECKLIST.md
/.github/instructions
/.github/prompts
/.github/skills
*.md
tasks.md
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -140,6 +140,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

This release delivers a **12x performance improvement**, bringing RustAPI from ~8K req/s to **~92K req/s**.

> Note: the numbers below are preserved as a **historical release snapshot**. Current benchmark methodology and canonical public performance claims are maintained in `docs/PERFORMANCE_BENCHMARKS.md`.

#### Benchmark Results

| Framework | Requests/sec | Latency (avg) |
54 changes: 53 additions & 1 deletion Cargo.lock

Some generated files are not rendered by default.

71 changes: 63 additions & 8 deletions README.md
@@ -57,6 +57,7 @@ RustAPI ships circuit breaker and retry middleware as first-class features, not
- **Retry** with exponential backoff
- **Rate Limiting** (IP-based, per-route)
- **Body Limit** with configurable max size (default 1 MB)
- **Health Probes** via `.health_endpoints()` for `/health`, `/ready`, and `/live`

### Environment-Aware Error Masking

@@ -67,21 +67,32 @@ All error responses include a unique `error_id` (`err_{uuid}`) for log correlati
Record and replay HTTP request/response pairs for production debugging:

```rust
use rustapi_rs::extras::replay::{ReplayConfig, ReplayLayer};
use rustapi_rs::prelude::*;

RustApi::new()
.layer(ReplayLayer::new(store, config))
.run("0.0.0.0:8080").await;
.layer(
ReplayLayer::new(
ReplayConfig::new()
.enabled(true)
.admin_token("local-replay-token"),
),
)
.run("0.0.0.0:8080")
.await?;
```

```sh
cargo rustapi replay list
cargo rustapi replay run <id> --target http://localhost:8080
cargo rustapi replay diff <id> --target http://staging
cargo rustapi replay list -t local-replay-token
cargo rustapi replay run <id> -t local-replay-token --target http://localhost:8080
cargo rustapi replay diff <id> -t local-replay-token --target http://staging
```

- Middleware-based recording; no application code changes
- Sensitive header redaction; disabled by default
- In-memory (dev) or filesystem (production) storage with TTL
- `ReplayClient` for programmatic test automation
- Full incident workflow: [`docs/cookbook/src/recipes/replay.md`](docs/cookbook/src/recipes/replay.md)

### Dual-Stack HTTP/1.1 + HTTP/3

@@ -121,7 +121,7 @@ Run HTTP/1.1 (TCP) and HTTP/3 (QUIC/UDP) simultaneously on the same server. Enab

| Feature | RustAPI | Actix-web | Axum | FastAPI (Python) |
|:--------|:-------:|:---------:|:----:|:----------------:|
| Performance | ~92k req/s | ~105k | ~100k | ~12k |
| Performance | See benchmark source | Workload-dependent | Workload-dependent | Workload-dependent |
| Ergonomics | High | Low | Medium | High |
| AI/LLM native format (TOON) | Yes | No | No | No |
| Request replay / time-travel debug | Built-in | No | No | 3rd-party |
@@ -132,6 +144,8 @@ Run HTTP/1.1 (TCP) and HTTP/3 (QUIC/UDP) simultaneously on the same server. Enab
| Background jobs | Built-in | 3rd-party | 3rd-party | 3rd-party |
| API stability model | Facade + CI contract | Direct | Direct | Stable |

Current benchmark methodology and canonical published performance claims live in [`docs/PERFORMANCE_BENCHMARKS.md`](docs/PERFORMANCE_BENCHMARKS.md). Historical point-in-time numbers in older release notes should not be treated as the current baseline unless they are linked from that document.

## Quick Start

```rust
@@ -146,13 +160,52 @@ async fn hello(Path(name): Path<String>) -> Json<Message> {
}

#[rustapi_rs::main]
async fn main() {
RustApi::auto().run("127.0.0.1:8080").await
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::auto().run("127.0.0.1:8080").await
}
```

`RustApi::auto()` collects all macro-annotated handlers, generates OpenAPI documentation (served at `/docs`), and starts a multi-threaded tokio runtime.

For production deployments, you can enable standard probe endpoints without writing handlers manually:

```rust
use rustapi_rs::prelude::*;

#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
let health = HealthCheckBuilder::new(true)
.add_check("database", || async { HealthStatus::healthy() })
.build();

RustApi::auto()
.with_health_check(health)
.run("127.0.0.1:8080")
.await
}
```

This registers:
- `/health` — aggregate dependency health
- `/ready` — readiness probe (`503` when dependencies are unhealthy)
- `/live` — lightweight liveness probe
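In a Kubernetes deployment these endpoints map directly onto the standard probes. A minimal pod-spec fragment might look like the following (the port matches the README example; adjust to your deployment):

```yaml
# Hypothetical container spec fragment wiring RustAPI's probe endpoints
# into Kubernetes liveness/readiness probes.
livenessProbe:
  httpGet:
    path: /live          # lightweight liveness probe
    port: 8080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready         # returns 503 while dependencies are unhealthy
    port: 8080
  periodSeconds: 5
```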

Or use a single production baseline preset:

```rust
use rustapi_rs::prelude::*;

#[rustapi_rs::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error + Send + Sync>> {
RustApi::auto()
.production_defaults("users-api")
.run("127.0.0.1:8080")
.await
}
```

`production_defaults()` enables request IDs, tracing spans, and standard probe endpoints in one call.

You can shorten the macro prefix by renaming the crate:

```toml
@@ -202,6 +255,8 @@ Detailed architecture, recipes, and guides are in the [Cookbook](docs/cookbook/s
- [System Architecture](docs/cookbook/src/architecture/system_overview.md)
- [Performance Benchmarks](docs/cookbook/src/concepts/performance.md)
- [gRPC Integration Guide](docs/cookbook/src/crates/rustapi_grpc.md)
- [Recommended Production Baseline](docs/PRODUCTION_BASELINE.md)
- [Production Checklist](docs/PRODUCTION_CHECKLIST.md)
- [Examples](crates/rustapi-rs/examples/)

---
2 changes: 2 additions & 0 deletions RELEASES.md
@@ -3,6 +3,8 @@
**Release Date**: February 26, 2026
**Full Changelog**: https://github.com/Tuntii/RustAPI/compare/v0.1.335...v0.1.397

**Benchmark Source of Truth**: Current benchmark methodology and canonical performance claims live in `docs/PERFORMANCE_BENCHMARKS.md`. Historical release-specific benchmark notes should be treated as point-in-time snapshots unless they are linked from that document.

---

## 🎯 Highlights