Changes from all commits (21 commits)
f9198ff
Add Valkey support and parametrize tests for Redis and Valkey
kibertoad Oct 11, 2025
80ef53b
Fix linting issues in parametrized tests
kibertoad Oct 11, 2025
41f8fec
Add valkey-glide support with RedisClientAdapter
kibertoad Oct 11, 2025
902fc20
Complete valkey-glide integration with full adapter pattern
kibertoad Oct 12, 2025
37915c4
Add full valkey-glide pub/sub support to notification consumers
kibertoad Oct 12, 2025
5efea7e
Update tests to use real GlideClient for Valkey tests
kibertoad Oct 12, 2025
251614e
Implement full Lua script support via invokeScript adapter method
kibertoad Oct 12, 2025
085e334
Update RedisGroupCache tests to use GlideClient
kibertoad Oct 12, 2025
29ea936
feat: fix boolean conversion and add async pub/sub factory pattern
kibertoad Oct 12, 2025
cd72c22
feat: Implement valkey-glide pub/sub with multi-callback support
kibertoad Oct 12, 2025
9680847
fix: Resolve valkey-glide pub/sub message routing and error handling
kibertoad Oct 12, 2025
7e24d82
chore: Fix lint warnings and add core dumps to gitignore
kibertoad Oct 12, 2025
d6e961d
docs: Add comprehensive valkey-glide documentation
kibertoad Oct 12, 2025
2321c9c
fix: Add valkey-glide as optional peer dependency and fix TypeScript …
kibertoad Oct 12, 2025
2ea2436
fix: Fix TypeScript GlideString type error and update documentation s…
kibertoad Oct 12, 2025
ffed180
feat: Use native incr command with valkey-glide instead of Lua scripts
kibertoad Oct 12, 2025
aa439c6
feat: Use valkey-glide Batch API for atomic transactions instead of L…
kibertoad Oct 12, 2025
c76651e
fix: Change mset signature to accept flat string[] instead of Record
kibertoad Oct 12, 2025
c5fba25
docs: Add multi() API design rationale
kibertoad Oct 12, 2025
7d18390
fix: correct mset implementation and setMany execution
kibertoad Oct 12, 2025
33749f1
cleanup
kibertoad Dec 7, 2025
1 change: 1 addition & 0 deletions .gitignore
@@ -105,3 +105,4 @@ dist
.idea
package-lock.json
dist
core
124 changes: 124 additions & 0 deletions MULTI_API_DESIGN.md
@@ -0,0 +1,124 @@
# Multi/Batch API Design Decision

## Current Implementation

The current `multi()` signature accepts an array of commands and executes them atomically:

```typescript
interface RedisClientInterface {
  multi?(commands: any[][]): Promise<any>
}
```

### IoRedisClientAdapter
```typescript
async multi(commands: any[][]): Promise<any> {
  return this.client.multi(commands).exec()
}
```

### ValkeyGlideClientAdapter
```typescript
async multi(commands: any[][]): Promise<any> {
  const batch = new Batch(true) // true = atomic (MULTI/EXEC semantics)
  for (const [command, ...args] of commands) {
    batch.customCommand([command, ...args])
  }
  return this.client.exec(batch, true)
}
```

## Potential Alternative: Fluent/Chainable API

A fluent API would look like:

```typescript
interface RedisClientInterface {
  multi?(): MultiPipeline
}

interface MultiPipeline {
  incr(key: string): this
  pexpire(key: string, ms: number): this
  set(key: string, value: string, mode?: string, ttl?: number): this
  exec(): Promise<any[]>
}
```

### Pros of Fluent API
- Matches ioredis native API more closely
- Allows incremental command building
- More flexible for complex scenarios

### Cons of Fluent API
- **Doesn't match valkey-glide's Batch API design**
- Batch is built declaratively, not fluently
- Would require creating a wrapper class for valkey-glide
- **Not needed for current use cases**
- All our usage builds command arrays first
- No need for incremental chaining in practice
- **More complex implementation**
- Need to maintain wrapper class state
- Need to handle differences between ioredis pipeline and Batch

## Current Usage Pattern

All current usage follows this pattern:

```typescript
// Build command array
const commands = [
  ['incr', key],
  ['pexpire', key, ttl],
]

// Execute atomically
await this.redis.multi(commands)
```

This pattern:
- ✅ Works identically for both clients
- ✅ Clear and explicit
- ✅ Easy to test
- ✅ No hidden state in wrapper objects
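For illustration, the pattern boils down to plain data construction. The helper below is hypothetical (not part of layered-loader); it just builds the same command array that is then handed to `multi()`:

```typescript
// Illustrative helper (not shipped API): builds the command array that is
// later passed to redis.multi() for atomic execution.
function buildTtlIncrement(key: string, ttlMs: number): string[][] {
  return [
    ['incr', key],
    ['pexpire', key, String(ttlMs)],
  ]
}
```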

## Recommendation

**Keep the current array-based API** because:

1. **It works perfectly for our use cases** - We always build full command lists upfront
2. **It's simpler** - No need for wrapper classes or state management
3. **It's portable** - Works the same way for both ioredis and valkey-glide
4. **It's testable** - Easy to verify what commands will be executed
5. **It's explicit** - Caller sees all commands at once

## If Fluent API is Needed in Future

If we later need a fluent API, we can:

1. Keep `multi(commands[][])` for the declarative approach
2. Add `createPipeline()` or `createTransaction()` for fluent approach
3. Return wrapper class that implements MultiPipeline interface

This would allow both styles:

```typescript
// Declarative (current)
await redis.multi([['incr', key], ['pexpire', key, ttl]])

// Fluent (future if needed)
await redis.createTransaction()
  .incr(key)
  .pexpire(key, ttl)
  .exec()
```
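Such a wrapper could be layered on top of the existing array-based `multi()` without touching either adapter. A minimal sketch, with hypothetical class and method names:

```typescript
// Hypothetical fluent wrapper over the existing array-based multi();
// class and method names are illustrative, not shipped layered-loader API.
class ArrayBackedPipeline {
  private readonly commands: string[][] = []
  private readonly client: { multi(commands: string[][]): Promise<unknown> }

  constructor(client: { multi(commands: string[][]): Promise<unknown> }) {
    this.client = client
  }

  incr(key: string): this {
    this.commands.push(['incr', key])
    return this
  }

  pexpire(key: string, ms: number): this {
    this.commands.push(['pexpire', key, String(ms)])
    return this
  }

  exec(): Promise<unknown> {
    // Delegates to the array-based API, so both adapters keep working unchanged
    return this.client.multi(this.commands)
  }
}
```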

## Conclusion

The current implementation is **correct and appropriate** for our needs.

The array-based API:
- ✅ Matches our usage patterns
- ✅ Works consistently across both clients
- ✅ Is simple and maintainable
- ✅ Is easy to test and reason about

**No changes needed** to the multi() API at this time.
131 changes: 130 additions & 1 deletion README.md
@@ -161,6 +161,72 @@ const loader = new Loader<string>({
const classifier = await loader.get('1')
```

### Using Valkey-Glide (Alternative to ioredis)

Here's the same example using `@valkey/valkey-glide`:

```ts
import { GlideClient } from '@valkey/valkey-glide'
import { Loader, RedisCache } from 'layered-loader'
import type { DataSource } from 'layered-loader'
import type { Knex } from 'knex'

const valkeyClient = await GlideClient.createClient({
  addresses: [{ host: 'localhost', port: 6379 }],
  credentials: { password: 'sOmE_sEcUrE_pAsS' },
})

class ClassifiersDataSource implements DataSource<Record<string, any>> {
  private readonly db: Knex
  name = 'Classifiers DB loader'
  isCache = false

  constructor(db: Knex) {
    this.db = db
  }

  async get(key: string): Promise<Record<string, any> | undefined | null> {
    const results = await this.db('classifiers')
      .select('*')
      .where({
        id: parseInt(key, 10),
      })
    return results[0]
  }

  async getMany(keys: string[]): Promise<Record<string, any>[]> {
    return this.db('classifiers')
      .select('*')
      .whereIn('id', keys.map((key) => parseInt(key, 10)))
  }
}

const loader = new Loader<Record<string, any>>({
  // this cache will be checked first
  inMemoryCache: {
    cacheType: 'lru-map',
    ttlInMsecs: 1000 * 60,
    maxItems: 100,
  },

  // this cache will be checked if the in-memory one returns undefined
  asyncCache: new RedisCache<Record<string, any>>(valkeyClient, {
    json: true, // this instructs the loader to serialize stored objects as strings and deserialize them back to objects
    ttlInMsecs: 1000 * 60 * 10,
  }),

  // this will be used if neither cache has the requested data
  dataSources: [new ClassifiersDataSource(db)],
})

// If the cache is empty but there is data in the DB, both caches will be populated once this operation completes
const classifier = await loader.get('1')
```

**Key differences with valkey-glide:**
- ✅ `createClient()` is **async** - use `await`
- ✅ `addresses` is an **array** - supports cluster mode
- ✅ `credentials` is an **object** - structured config

For complete migration instructions, see [docs/VALKEY_MIGRATION.md](docs/VALKEY_MIGRATION.md).

### Simplified loader syntax

It is also possible to inline datasource definition:
@@ -305,6 +371,58 @@ await userLoader.init() // this will ensure that consumers have definitely finis
await userLoader.invalidateCacheFor('key') // this will transparently invalidate cache across all instances of your application
```

#### Using Valkey-Glide for Notifications

The same notification setup works with `@valkey/valkey-glide`. Note that valkey-glide requires subscriptions to be configured at client creation:

```ts
import { GlideClient } from '@valkey/valkey-glide'
import { createNotificationPair, Loader, RedisCache } from 'layered-loader'

const CHANNEL = 'user-cache-notifications'

export type User = {
  // some type
}

// Create clients with pub/sub configuration
const { publisher: notificationPublisher, consumer: notificationConsumer } = await createNotificationPair<User>({
  channel: CHANNEL,
  publisherRedis: {
    addresses: [{ host: 'localhost', port: 6379 }],
    credentials: { password: 'sOmE_sEcUrE_pAsS' },
  },
  consumerRedis: {
    addresses: [{ host: 'localhost', port: 6379 }],
    credentials: { password: 'sOmE_sEcUrE_pAsS' },
    pubsubSubscriptions: {
      channelsAndPatterns: {
        0: new Set([CHANNEL]), // 0 = Exact mode for channel names
      },
    },
  },
})

const valkeyCache = await GlideClient.createClient({
  addresses: [{ host: 'localhost', port: 6379 }],
  credentials: { password: 'sOmE_sEcUrE_pAsS' },
})

const userLoader = new Loader({
  inMemoryCache: { ttlInMsecs: 1000 * 60 * 5 },
  asyncCache: new RedisCache<User>(valkeyCache, {
    ttlInMsecs: 1000 * 60 * 60,
  }),
  notificationConsumer,
  notificationPublisher,
})

await userLoader.init()
await userLoader.invalidateCacheFor('key') // this will transparently invalidate cache across all instances
```

For more details on pub/sub configuration with valkey-glide, see [docs/VALKEY_MIGRATION.md](docs/VALKEY_MIGRATION.md#pubsub-notifications).

There is an equivalent for group loaders as well:

```ts
@@ -506,9 +624,20 @@ ToDo

## Provided async caches

### Supported Redis Clients

`layered-loader` supports two Redis clients through a transparent adapter pattern:

1. **ioredis** - Traditional Node.js Redis client
2. **@valkey/valkey-glide** - Modern Valkey client with Rust core (recommended for new projects)

Both clients work seamlessly with all Redis-based caches (`RedisCache`, `RedisGroupCache`) and notification systems. No code changes are needed when switching between them!

**Migration Guide:** See [docs/VALKEY_MIGRATION.md](docs/VALKEY_MIGRATION.md) for detailed migration instructions.

### RedisCache

`RedisCache` uses Redis for caching data, and is recommended for highly distributed systems. It requires an active instance of `ioredis`, and it does not perform any connection/disconnection operations on its own.
`RedisCache` uses Redis/Valkey for caching data, and is recommended for highly distributed systems. It requires an active instance of either `ioredis` or `@valkey/valkey-glide`, and it does not perform any connection/disconnection operations on its own.
It has the following configuration options:

- `prefix: string` - the prefix added to all keys in this cache. Used to differentiate among different groups of entities within a single Redis DB (serving as a pseudo-table);