High‑quality experimental patterns & decorators built on top of IMemoryCache (Microsoft.Extensions.Caching.Memory) to address common performance and correctness concerns.
- Components Overview
- Quick Start
- MeteredMemoryCache
- Implementation Details
- Choosing an Approach
- Benchmarks & Performance
- Documentation
- Testing
- License
| Component | Purpose | Concurrency Control | Async Support | Extra Features |
|---|---|---|---|---|
| MeteredMemoryCache | Emits OpenTelemetry / .NET System.Diagnostics.Metrics counters for hits, misses, and evictions | Thread-safe atomic counter operations with dimensional tags | N/A (sync, like the base cache) | Named caches, custom tags, `GetCurrentStatistics()`, service collection extensions, options pattern validation |
These implementations favor clarity & demonstrable patterns over feature breadth. They are intentionally small and suitable as a starting point for production adaptation.
Add the project (or copy the desired file) into your solution and reference it from your application. Example using the metered cache with DI:
```csharp
builder.Services.AddMemoryCache();
builder.Services.AddNamedMeteredMemoryCache("user-cache");
```

For single-flight scenarios, we recommend using these mature, production-ready solutions instead of implementing your own:
- First-party solution from Microsoft with built-in cache stampede protection
- L1 + L2 cache support (in-memory + distributed)
- Cache invalidation with tags for bulk operations
- Simple API - reduces complex cache-aside patterns to a single line
- Performance optimizations including support for `IBufferDistributedCache`
- Secure by default with authentication and data handling
```csharp
// Simple usage with HybridCache
public class SomeService(HybridCache cache)
{
    public async Task<SomeInformation> GetSomeInformationAsync(string name, int id, CancellationToken token = default)
    {
        return await cache.GetOrCreateAsync(
            $"someinfo:{name}:{id}",
            async cancel => await SomeExpensiveOperationAsync(name, id, cancel),
            token: token
        );
    }
}
```

- FusionCache: mature OSS library with comprehensive single-flight support
- Request coalescing - only one factory runs per key concurrently
- Rich feature set: soft/hard timeouts, fail-safe, eager refresh, backplane
- Excellent documentation and active maintenance
- Supports older .NET versions down to .NET Framework 4.7.2
```csharp
// Simple usage with FusionCache
public class SomeService(FusionCache cache)
{
    public async Task<SomeInformation> GetSomeInformationAsync(string name, int id, CancellationToken token = default)
    {
        return await cache.GetOrSetAsync(
            $"someinfo:{name}:{id}",
            async cancel => await SomeExpensiveOperationAsync(name, id, cancel),
            TimeSpan.FromMinutes(5),
            token
        );
    }
}
```

- Greenfield or .NET 9+: Use HybridCache - first-party, GA, built-in stampede protection
- Need richer features or .NET < 9: Use FusionCache - comprehensive feature set, excellent documentation
Recording metrics with MeteredMemoryCache:
```csharp
var meter = new Meter("app.cache");
var metered = new MeteredMemoryCache(new MemoryCache(new MemoryCacheOptions()), meter);
metered.Set("answer", 42);
if (metered.TryGet<int>("answer", out var v)) { /* use v */ }

// Get real-time statistics
var stats = metered.GetCurrentStatistics();
Console.WriteLine($"Hit ratio: {stats.HitRatio:F2}%");
```

Instruments exposed:
- `cache.requests` (ObservableCounter, tags: `cache.name`, `cache.request.type` = hit or miss)
- `cache.evictions` (ObservableCounter, tag: `cache.name`)
- `cache.entries` (ObservableUpDownCounter, tag: `cache.name`)
- `cache.estimated_size` (ObservableGauge, emitted when the inner cache has statistics tracking enabled via `TrackStatistics`)
Consume with MeterListener, OpenTelemetry Metrics SDK, or any compatible exporter.
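As an illustrative, self-contained sketch of in-process consumption with `MeterListener`, a hand-rolled `ObservableCounter` under the meter name `app.cache` (from the example above) stands in for the instruments the cache publishes:

```csharp
using System.Diagnostics.Metrics;
using System.Threading;

// Sketch: consume instruments in-process with MeterListener. A plain
// ObservableCounter under the meter name "app.cache" stands in here for the
// instruments MeteredMemoryCache would publish.
long requests = 0;
long lastObserved = -1;

var meter = new Meter("app.cache");
meter.CreateObservableCounter("cache.requests", () => Interlocked.Read(ref requests));

var listener = new MeterListener
{
    InstrumentPublished = (instrument, l) =>
    {
        if (instrument.Meter.Name == "app.cache")
            l.EnableMeasurementEvents(instrument);
    }
};
listener.SetMeasurementEventCallback<long>((instrument, value, tags, state) =>
    lastObserved = value);
listener.Start();

Interlocked.Increment(ref requests);
listener.RecordObservableInstruments(); // observable instruments report only when polled
Console.WriteLine($"cache.requests = {lastObserved}");
```

Note that observable instruments are pull-based: nothing is reported until `RecordObservableInstruments()` (or an exporter's collection cycle) polls them.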
The MeteredMemoryCache provides comprehensive observability for cache operations through OpenTelemetry metrics integration. It decorates any IMemoryCache implementation with zero-configuration metrics emission.
```csharp
// Register with dependency injection
builder.Services.AddNamedMeteredMemoryCache("user-cache");

// Configure OpenTelemetry
builder.Services.AddOpenTelemetry()
    .WithMetrics(metrics => metrics
        .AddMeter("Microsoft.Extensions.Caching.Memory.MemoryCache")
        .AddOtlpExporter());
```

- Named Cache Support: Dimensional metrics with `cache.name` tags
- Service Collection Extensions: Easy DI integration
- Options Pattern: Configurable behavior with validation
- Minimal Overhead: Uses atomic `Interlocked` operations for thread-safe counting
- Thread-Safe: Lock-free atomic operations
- Real-time Statistics: `GetCurrentStatistics()` for immediate metrics access
| Metric | Type | Description | Tags |
|---|---|---|---|
| `cache.requests` | ObservableCounter | Cache lookup operations | `cache.name`, `cache.request.type` |
| `cache.evictions` | ObservableCounter | Cache evictions | `cache.name` |
| `cache.entries` | ObservableUpDownCounter | Current entry count | `cache.name` |
| `cache.estimated_size` | ObservableGauge | Estimated cache size | `cache.name` |
For detailed usage, configuration, and examples, see the MeteredMemoryCache Usage Guide.
- Adds minimal instrumentation overhead (~1 atomic increment per op) while preserving the `IMemoryCache` API.
- Uses `Interlocked` atomic operations for thread-safe, lock-free counting.
- Eviction metric is emitted from a post‑eviction callback automatically registered on each created entry.
- Provides `GetCurrentStatistics()` for real-time metrics access, similar to `MemoryCache.GetCurrentStatistics()`.
- Includes convenience `TryGet<T>` & `GetOrCreate<T>` wrappers emitting structured counters.
- Use when you need visibility (hit ratio, churn) without adopting a full external caching layer.
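The counting pattern described above can be sketched as a minimal decorator. This is not the library's actual implementation, just an illustration of the `Interlocked` counters and the post-eviction callback hook:

```csharp
using System;
using System.Threading;
using Microsoft.Extensions.Caching.Memory;

// Minimal sketch of the pattern described above (not the actual implementation):
// an IMemoryCache decorator that counts hits/misses with Interlocked and
// registers a post-eviction callback on each created entry.
public sealed class CountingCacheSketch
{
    private readonly IMemoryCache _inner;
    private long _hits, _misses, _evictions;

    public CountingCacheSketch(IMemoryCache inner) => _inner = inner;

    public (long Hits, long Misses, long Evictions) Snapshot
        => (Interlocked.Read(ref _hits), Interlocked.Read(ref _misses), Interlocked.Read(ref _evictions));

    public bool TryGet<T>(object key, out T? value)
    {
        if (_inner.TryGetValue(key, out var obj) && obj is T t)
        {
            Interlocked.Increment(ref _hits); // lock-free hit count
            value = t;
            return true;
        }
        Interlocked.Increment(ref _misses);
        value = default;
        return false;
    }

    public void Set<T>(object key, T value)
    {
        using var entry = _inner.CreateEntry(key); // disposing commits the entry
        entry.Value = value;
        entry.RegisterPostEvictionCallback(
            (_, _, _, _) => Interlocked.Increment(ref _evictions));
    }
}
```

The real `MeteredMemoryCache` additionally publishes these counts as `System.Diagnostics.Metrics` instruments with dimensional tags.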
| Scenario | Recommended |
|---|---|
| Need metrics (hit ratio, eviction counts) with minimal overhead | MeteredMemoryCache |
| Need single-flight (cache stampede protection) for .NET 9+ | Microsoft HybridCache |
| Need single-flight with richer features or .NET < 9 | FusionCache |
| Component | Cancellation Behavior | Failure Behavior |
|---|---|---|
| MeteredMemoryCache | N/A (no async). | Eviction reasons recorded regardless; atomic counters remain consistent. |
| HybridCache | See HybridCache documentation | See HybridCache documentation |
| FusionCache | See FusionCache documentation | See FusionCache documentation |
Benchmarks (BenchmarkDotNet) included under tests/Benchmarks compare relative overhead of wrappers. To run:
```shell
dotnet run -c Release -p tests/Benchmarks/Benchmarks.csproj
```

Interpretation guidance:
- `MeteredMemoryCache` uses atomic `Interlocked` operations, adding <5% overhead vs raw `MemoryCache`.
Always benchmark within your workload; microbenchmarks do not capture memory pressure, GC, or production contention levels.
The repository includes a lightweight regression gate comparing the latest BenchmarkDotNet run against committed baselines.
Quick local workflow:
```powershell
dotnet run -c Release --project tests/Benchmarks/Benchmarks.csproj --filter *CacheBenchmarks*
Copy-Item BenchmarkDotNet.Artifacts/results/Benchmarks.CacheBenchmarks-report-full.json BenchmarkDotNet.Artifacts/results/current.json
dotnet run -c Release --project tools/BenchGate/BenchGate.csproj -- benchmarks/baseline/CacheBenchmarks.json BenchmarkDotNet.Artifacts/results/current.json
```

Thresholds (defaults):
- Time regression: >3% AND >5 ns absolute
- Allocation regression: increase >16 B AND >3%
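The rule above (a delta must exceed both the relative and the absolute guard before the gate fails) can be sketched as:

```csharp
// Sketch of the default threshold rule described above: a time regression
// fires only when the delta exceeds BOTH the relative (>3%) and the absolute
// (>5 ns) guard; allocations analogously (>3% and >16 B).
static bool IsTimeRegression(double baselineNs, double currentNs,
    double relThreshold = 0.03, double absThresholdNs = 5.0)
{
    double delta = currentNs - baselineNs;
    return delta > absThresholdNs && delta / baselineNs > relThreshold;
}

static bool IsAllocRegression(double baselineBytes, double currentBytes,
    double relThreshold = 0.03, double absThresholdBytes = 16.0)
{
    double delta = currentBytes - baselineBytes;
    return delta > absThresholdBytes && delta / baselineBytes > relThreshold;
}
```

Requiring both guards keeps the gate quiet on noisy nanosecond-scale benchmarks (where 3% may be a fraction of a nanosecond) while still catching small relative drifts on slower ones.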
Update baseline only after a verified improvement:
```powershell
Copy-Item BenchmarkDotNet.Artifacts/results/Benchmarks.CacheBenchmarks-report-full.json benchmarks/baseline/CacheBenchmarks.json
git add benchmarks/baseline/CacheBenchmarks.json
git commit -m "chore(bench): update CacheBenchmarks baseline" -m "Include before/after metrics table"
```

CI runs the gate automatically (see .github/workflows/ci.yml).
BenchGate compares the latest BenchmarkDotNet full JSON output(s) against committed baselines under benchmarks/baseline/.
Supported CLI flags:
- `--suite=<SuiteName>`: Explicit suite name if not inferrable.
- `--time-threshold=<double>`: Relative mean time regression guard (default 0.03).
- `--alloc-threshold-bytes=<int>`: Absolute allocation regression guard (default 16).
- `--alloc-threshold-pct=<double>`: Relative allocation regression guard (default 0.03).
- `--sigma-mult=<double>`: Sigma multiplier for statistical significance (default 2.0).
- `--no-sigma`: Disable significance filtering (treat all deltas as significant, subject to thresholds).
Per‑OS baseline resolution order when first argument is a directory:
1. `<Suite>.<os>.<arch>.json`
2. `<Suite>.<os>.json`
3. `<Suite>.json`
Current baselines (Windows):
- `CacheBenchmarks.windows-latest.json`
- `ContentionBenchmarks.windows-latest.json`
Add additional OS baselines by copying the corresponding *-report-full.json into the baseline directory using the naming convention above.
Evidence & Process requirements are described in .github/copilot-instructions.md Sections 12–14.
- Enrich metrics (e.g., object size, latency histogram for factory execution).
- Add negative caching (cache specific failures briefly) if upstream calls are very costly.
- Provide a multi-layer (L1 in-memory + L2 distributed) single-flight composition.
Comprehensive guides and references are available in the docs/ directory:
- MeteredMemoryCache Usage Guide - Complete usage documentation with examples
- OpenTelemetry Integration - Setup guide for various OTel exporters
- Multi-Cache Scenarios - Patterns for managing multiple named caches
- Performance Characteristics - Detailed benchmark analysis and optimization guidance
- Troubleshooting Guide - Common issues and solutions
- API Reference - Complete API documentation with examples
- Getting Started: See Quick Start above
- Performance Impact: Performance Characteristics
- Common Issues: Troubleshooting Guide
- Advanced Patterns: Multi-Cache Scenarios
Unit tests cover: metrics emission, cache operations, and thread safety. See the tests/Unit directory for usage patterns.
MIT (see LICENSE.txt).
These are illustrative implementations. Review thread-safety, memory usage, eviction policies, and failure handling for your production context (high cardinality keys, very large payloads, process restarts, etc.).