
.pr_agent_accepted_suggestions

qodo-merge-bot edited this page Apr 17, 2026 · 17 revisions
                     PR 5562 (2026-03-23)                    
[reliability] V2 engine stop race
CoreProjectionV2 cancels and disposes the engine CTS without awaiting the ProjectionEngineV2 task, then clears `_engine`. A rapid Stop/Start can run multiple engines concurrently, causing overlapping checkpoint/emitted writes and inconsistent projection state.

Issue description

StopEngine() cancels the engine but does not wait for it to stop, so restarting can overlap engine instances.

Issue Context

ProjectionEngineV2 performs async draining/checkpointing and waits for partition tasks in its finally; callers must not start a second engine instance until the first has completed.

Fix Focus Areas

  • Store and await the engine run task during Stop/Kill/Suspend (ideally off the projection worker thread if deadlock risk exists).
  • Gate Start so it cannot run until prior engine completion is observed.

Fix Focus Areas (code locations)

  • src/KurrentDB.Projections.V2/Services/Processing/V2/CoreProjectionV2.cs[215-236]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/CoreProjectionV2.cs[278-285]
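The fix shape described above can be sketched as follows. This is a minimal, self-contained illustration; the names (`cts`, `runTask`, `Start`, `StopAsync`) are hypothetical stand-ins, not the actual CoreProjectionV2 members:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

CancellationTokenSource? cts = null;
Task? runTask = null;
var running = 0;

// Gate Start: refuse to launch until the prior engine's completion was observed.
void Start(Func<CancellationToken, Task> run) {
	if (runTask is { IsCompleted: false })
		throw new InvalidOperationException("Previous engine has not completed yet.");
	cts = new CancellationTokenSource();
	runTask = run(cts.Token);
}

// Stop: cancel, then await the run task so no second engine can overlap it.
async Task StopAsync() {
	if (cts is null || runTask is null)
		return;
	cts.Cancel();
	try { await runTask; }
	catch (OperationCanceledException) { } // cancellation is the expected outcome
	cts.Dispose();
	cts = null;
	runTask = null;
}

Start(async token => {
	Interlocked.Increment(ref running);
	try { await Task.Delay(Timeout.Infinite, token); }
	finally { Interlocked.Decrement(ref running); }
});
await StopAsync();
Console.WriteLine(running); // 0 — the engine fully finished before a restart is possible
```

Because `StopAsync` only returns after the run task's `finally` blocks have executed, a subsequent `Start` can never overlap the previous engine's checkpoint/emitted writes.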


                     PR 5544 (2026-03-07)                    
[correctness] Engine v2 always faults
EngineVersion=2 is externally selectable, but the selected V2ProjectionProcessingStrategy throws in CreateProcessingPhases, so projection creation will fault immediately in the existing V1 CoreProjection pipeline.

Issue description

EngineVersion=2 can be requested through the management API, but the code path still constructs a v1 CoreProjection which always calls CreateProcessingPhases(). The v2 strategy currently throws, so projections will fault immediately.

Issue Context

This is user-triggerable via gRPC create options, and will lead to immediate projection failures when engine v2 is selected.

Fix Focus Areas

  • src/KurrentDB.Projections.Management/Services/Grpc/ProjectionManagement.Create.cs[60-71]
  • src/KurrentDB.Projections.V1/Services/Processing/Strategies/ProcessingStrategySelector.cs[22-35]
  • src/KurrentDB.Projections.V1/Services/Processing/V2/V2ProjectionProcessingStrategy.cs[50-63]
  • src/KurrentDB.Projections.V1/Services/Processing/CoreProjection.cs[121-132]
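Until the V2 strategy implements CreateProcessingPhases, a guard at the management boundary avoids the user-triggerable fault. A minimal sketch — the constant, method name, and error text are illustrative, not the actual gRPC create handler:

```csharp
using System;

// Highest engine version the projection subsystem can actually run today.
const int MaxSupportedEngineVersion = 1;

// Returns an error message for unsupported versions, null when the request is valid.
string? ValidateEngineVersion(int engineVersion) =>
	engineVersion > MaxSupportedEngineVersion
		? $"EngineVersion={engineVersion} is not supported by this server"
		: null;

Console.WriteLine(ValidateEngineVersion(2)); // rejected up front with a clear error
Console.WriteLine(ValidateEngineVersion(1) is null); // True — v1 creation still allowed
```

Rejecting the create request up front turns an immediate projection fault into a clear API error.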

[correctness] StateHandler shared concurrently
ProjectionEngineV2 passes the same IProjectionStateHandler instance into multiple PartitionProcessor tasks, causing unsafe concurrent access and state corruption (especially for JintProjectionStateHandler).

Issue description

ProjectionEngineV2 starts N partition processors concurrently but passes the same IProjectionStateHandler instance to all of them. Projection state handlers maintain internal mutable state, so concurrent calls will corrupt state and/or crash.

Issue Context

This is especially problematic for JS projections (JintProjectionStateHandler) which encapsulate a single Jint Engine and mutable _state.

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/ProjectionEngineV2.cs[75-89]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/ProjectionEngineV2Config.cs[9-18]
  • src/KurrentDB.Projections.JavaScript/Services/Interpreted/JintProjectionStateHandler.cs[35-70]
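One plausible fix shape is to inject a handler factory so every PartitionProcessor owns a private IProjectionStateHandler instance. A minimal sketch with stand-in types — the factory delegate is illustrative, not the actual engine configuration:

```csharp
using System;
using System.Linq;

// Stand-in for an IProjectionStateHandler factory; each call builds a fresh
// handler (for JS projections, a fresh Jint Engine and _state).
Func<object> handlerFactory = () => new object();

// Engine start-up: create one handler per partition instead of sharing one.
var partitionCount = 4;
var partitionHandlers = Enumerable.Range(0, partitionCount)
	.Select(_ => handlerFactory())
	.ToArray();

// Every processor now mutates only its own handler instance.
Console.WriteLine(partitionHandlers.Distinct().Count()); // 4
```

Since each Jint engine is single-threaded by design, per-partition instances remove the data race without any locking.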

[correctness] BiState shared state races
BiState shared state is tracked per PartitionProcessor, so with multiple partitions the shared state will be updated concurrently from stale snapshots and written multiple times, losing updates and producing incorrect shared aggregates.

Issue description

BiState shared state is handled locally per partition processor, but shared state is conceptually global. With multiple partitions this produces stale reads and lost updates.

Issue Context

Each partition writes to the same shared result stream name ($projections-{name}--result) without coordination.

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionProcessor.cs[94-101]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionProcessor.cs[127-131]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/ProjectionEngineV2.cs[75-89]
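One way to avoid stale snapshots and lost updates is to funnel every shared-state read-modify-write through a single gate (or a dedicated owner task). A minimal sketch of the serialization idea, with an integer standing in for the BiState shared-state document:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var gate = new SemaphoreSlim(1, 1);
var sharedState = 0; // stand-in for the shared BiState document

// All partitions update through this method, so each update sees the
// latest value rather than a stale per-partition snapshot.
async Task UpdateSharedAsync(Func<int, int> update) {
	await gate.WaitAsync();
	try { sharedState = update(sharedState); }
	finally { gate.Release(); }
}

// 8 concurrent "partitions", 1000 updates each.
await Task.WhenAll(Enumerable.Range(0, 8).Select(_ => Task.Run(async () => {
	for (var i = 0; i < 1000; i++)
		await UpdateSharedAsync(s => s + 1);
})));

Console.WriteLine(sharedState); // 8000 — no lost updates
```

The same serialization point can also own the single write to the shared result stream, so $projections-{name}--result is written once per update instead of once per partition.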

[correctness] Custom partition routing mismatch
Deferred partition-key mode routes events by EventStreamId but later computes the actual custom partition key; custom partitions can group across streams, so the same computed partition can be processed on different processors, splitting/corrupting per-partition state.

Issue description

Deferred partitioning routes by stream id but the real partition key is computed later. Custom partitioning can group across streams (e.g., event.body.region), so per-partition state can be split across processors.

Issue Context

Each processor has its own state cache; there is no cross-processor coordination for a computed partition key.

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/ProjectionEngineV2.cs[52-67]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionDispatcher.cs[86-92]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionProcessor.cs[70-92]
  • src/KurrentDB.Projections.Core.Tests/Services/Jint/when_partitioning_by_custom_rule.cs[15-33]
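The routing fix amounts to dispatching on the computed partition key rather than on EventStreamId. A minimal sketch of stable key-based routing — the function name and modulo scheme are illustrative:

```csharp
using System;

// Route by the computed custom partition key so events from different
// streams that share a key (e.g. event.body.region) reach one processor.
int RouteToProcessor(string partitionKey, int processorCount) =>
	(partitionKey.GetHashCode() & int.MaxValue) % processorCount;

var processorCount = 4;
var fromStreamA = RouteToProcessor("region:eu", processorCount); // event from stream A
var fromStreamB = RouteToProcessor("region:eu", processorCount); // event from stream B
Console.WriteLine(fromStreamA == fromStreamB); // True — same key, same processor
```

This implies the partition key must be computed (or the event re-dispatched) before the processor is chosen, which is the crux of fixing the deferred mode.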

[correctness] Checkpoint sequence reset drops
CheckpointCoordinator discards partially collected buffers when it sees a new markerSequence. Since the read loop can inject marker N+1 before all partitions reported marker N, checkpoint data can be dropped, causing lost state/emits or duplication on restart.

Issue description

Checkpoint collection is single-slot and resets on any new marker sequence. If partitions report out of step, earlier checkpoint buffers are discarded.

Issue Context

The read loop injects markers based on thresholds, independent of whether the prior marker has fully completed across all partitions.

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/CheckpointCoordinator.cs[49-65]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/ProjectionEngineV2.cs[138-147]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionDispatcher.cs[95-105]
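A fix shape that tolerates overlapping marker sequences is one buffer per sequence instead of a single slot. A minimal sketch — the partition count and state payloads are illustrative strings, not the coordinator's real types:

```csharp
using System;
using System.Collections.Generic;

var partitionCount = 2;
var buffers = new Dictionary<long, List<string>>();

// Collect per-partition reports keyed by marker sequence; a later sequence
// must not discard an earlier, still-incomplete one.
List<string>? Report(long sequence, string partitionState) {
	if (!buffers.TryGetValue(sequence, out var list))
		buffers[sequence] = list = new List<string>();
	list.Add(partitionState);
	if (list.Count < partitionCount)
		return null;               // still waiting on other partitions
	buffers.Remove(sequence);
	return list;                   // complete: safe to write this checkpoint
}

var r1 = Report(1, "p0@1");        // partial — marker 1, partition 0
var r2 = Report(2, "p0@2");        // marker N+1 arrives early; marker 1 is kept
var r3 = Report(1, "p1@1");        // marker 1 now completes
Console.WriteLine(r3?.Count); // 2
```

Completed sequences should still be written in order; the dictionary only prevents early markers from erasing in-flight ones.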

[correctness] Checkpoint write can hang
CheckpointCoordinator awaits a TaskCompletionSource without cancellation/timeout; if the write response never arrives, checkpointing stalls indefinitely and the semaphore remains held, blocking progress and potentially shutdown.

Issue description

Checkpoint writes can block forever waiting for a reply message; this stalls all subsequent checkpoints and can hang shutdown.

Issue Context

This is a reliability issue under partial failures (dropped messages, node failover, etc.).

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/CheckpointCoordinator.cs[67-112]
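`Task.WaitAsync` bounds the wait on the reply TaskCompletionSource, so a lost write response surfaces as a TimeoutException — where the coordinator can release the semaphore and fault or retry — instead of hanging forever. A minimal sketch; the timeout value is illustrative:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);

var timedOut = false;
try {
	// In the real coordinator the write reply would normally complete this
	// task; here it never arrives, simulating a dropped message.
	await tcs.Task.WaitAsync(TimeSpan.FromMilliseconds(100), CancellationToken.None);
} catch (TimeoutException) {
	timedOut = true; // release the checkpoint semaphore / fault the projection here
}
Console.WriteLine(timedOut); // True
```

Passing the engine's stop token instead of `CancellationToken.None` additionally lets shutdown interrupt a pending checkpoint write.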

[correctness] Null passed to Load
PartitionProcessor caches null states and later calls _stateHandler.Load(cachedState) even when cachedState is null. This violates the IProjectionStateHandler contract and can crash handlers that don’t accept null.

Issue description

PartitionProcessor may call Load(null) due to null-state caching, but the handler interface does not promise null is accepted.

Issue Context

Null states appear to be part of projection semantics (tests handle null expected states), so this should be made an explicit contract.

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionProcessor.cs[87-92]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionProcessor.cs[119-125]
  • src/KurrentDB.Projections.Shared/Services/IProjectionStateHandler.cs[18-37]
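Making the contract explicit at the call site looks roughly like this — `Initialize` and `Load` stand in for the real IProjectionStateHandler members, and the state strings are illustrative:

```csharp
using System;

string? cachedState = null;      // a cached-null partition state
var current = "";

void Initialize() => current = "{}";  // handler starts from a fresh state
void Load(string state) =>
	current = state ?? throw new ArgumentNullException(nameof(state));

// Never pass a null state into Load; branch explicitly instead.
if (cachedState is null)
	Initialize();
else
	Load(cachedState);

Console.WriteLine(current); // {}
```

Alternatively, the interface can document that `Load(null)` is legal and every handler must treat it as "initialize" — either way the null case becomes an explicit contract rather than an accident.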

[correctness] EmitEnabled flag unused
ProjectionEngineV2Config exposes EmitEnabled, but emitted events are buffered and written unconditionally; this can allow side-effecting emits even when emit is intended to be disabled by configuration.

Issue description

EmitEnabled is present in the V2 engine config but not enforced, while the engine always writes emitted events.

Issue Context

The management layer already treats emit enablement as a configuration flag, so V2 should provide equivalent enforcement to avoid unexpected side effects.

Fix Focus Areas

  • src/KurrentDB.Projections.V2/Services/Processing/V2/ProjectionEngineV2Config.cs[9-18]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/PartitionProcessor.cs[110-135]
  • src/KurrentDB.Projections.V2/Services/Processing/V2/CheckpointCoordinator.cs[139-150]
  • src/KurrentDB.Projections.Management/Services/Management/ManagedProjection.cs[953-983]
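Enforcement can be a simple gate where buffered emits are flushed. A minimal sketch with illustrative names (`emitEnabled`, `FlushEmits`) rather than the actual V2 config surface:

```csharp
using System;
using System.Collections.Generic;

var emitEnabled = false;
var pendingEmits = new List<string> { "emitted-1" };
var written = new List<string>();

void FlushEmits() {
	if (!emitEnabled) {
		// Surface a configuration error instead of silently writing side effects.
		if (pendingEmits.Count > 0)
			throw new InvalidOperationException(
				"Projection emitted events but emit is disabled by configuration.");
		return;
	}
	written.AddRange(pendingEmits);
	pendingEmits.Clear();
}

var rejected = false;
try { FlushEmits(); } catch (InvalidOperationException) { rejected = true; }
Console.WriteLine(rejected && written.Count == 0); // True — nothing was written
```

Faulting loudly here matches how the V1 pipeline treats disallowed `emit`, rather than dropping the events silently.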


                     PR 5459 (2026-01-14)                    
[possible issue] Prevent crashes from unhandled exceptions

✅ Prevent crashes from unhandled exceptions

Add a try-catch block inside the async void method AuthorizeManyAsync to prevent unhandled exceptions from await accessCheck from crashing the application.

src/KurrentDB.Core/Services/AuthorizationGateway.cs [422-434]

 async void AuthorizeManyAsync<TRequest>(
 	ValueTask<bool> accessCheck,
 	ClaimsPrincipal user,
 	ReadOnlyMemory<Operation> operations,
 	IEnvelope replyTo,
 	IPublisher destination,
 	TRequest request,
 	Func<TRequest, Message> createAccessDenied) where TRequest : Message {
-	if (await accessCheck)
-		AuthorizeMany(user, operations, replyTo, destination, request, createAccessDenied);
-	else
+	try {
+		if (await accessCheck)
+			AuthorizeMany(user, operations, replyTo, destination, request, createAccessDenied);
+		else
+			replyTo.ReplyWith(createAccessDenied(request));
+	} catch (Exception ex) {
+		// It's important to log the exception. Assuming a logger is available.
+		// Log.Error(ex, "Error during asynchronous authorization for multi-stream write.");
 		replyTo.ReplyWith(createAccessDenied(request));
+	}
 }

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that an unhandled exception in an async void method can crash the application and proposes a try-catch block to handle potential exceptions from await accessCheck gracefully.



                     PR 5452 (2026-01-13)                    
[possible issue] Pass correct affinity parameter

✅ Pass correct affinity parameter

Pass message.Affinity instead of the message object to Strategy.GetSynchronizationGroup to ensure correct affinity-based message grouping.

src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.cs [131]

-stateMachine.Schedule(message, Strategy.GetSynchronizationGroup(message));
+stateMachine.Schedule(message, Strategy.GetSynchronizationGroup(message.Affinity));

Suggestion importance[1-10]: 8


Why: This is a critical bug fix. The current implementation passes the entire message object to GetSynchronizationGroup, which uses object reference equality, defeating the purpose of affinity-based grouping and causing incorrect synchronization behavior.


[possible issue] Prevent potential unregistration error

✅ Prevent potential unregistration error

Conditionally unregister from the Monitor in RequestStop only if a _queueLengthListener exists to prevent a potential exception.

src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.cs [84-91]

 	public void RequestStop() {
 		if (Interlocked.Exchange(ref _lifetimeSource, null) is { } cts) {
 			cts.Cancel();
-			Monitor.Unregister(this);
-			_queueLengthListener?.Dispose();
-			_queueLengthObserver = null;
+			if (_queueLengthListener is not null) {
+				Monitor.Unregister(this);
+				_queueLengthListener.Dispose();
+				_queueLengthObserver = null;
+			}
 		}
 	}

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a potential ArgumentException when unregistering from the Monitor if the queue was never registered, which happens with certain strategies.


[high-level] Consider simplifying the custom object pooling

✅ Consider simplifying the custom object pooling

The custom lock-free object pooling for AsyncStateMachine adds complexity. Consider reverting to the simpler, previous implementation that used System.Collections.Concurrent.ConcurrentBag to reduce maintenance overhead and risk.

Examples:

src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.Pooling.cs [9-32]

	private volatile AsyncStateMachine _firstNode;

	private void ReturnToPool(AsyncStateMachine node) {
		AsyncStateMachine current;
		do {
			current = _firstNode;
			node.NextInPool = current;
		} while (Interlocked.CompareExchange(ref _firstNode, node, current) != current);
	}


 ... (clipped 14 lines)

src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.cs [121-123]

		var stateMachine = messageCount > MaxPoolSize
			? new(this)
			: RentFromPool();

Solution Walkthrough:

Before:

// src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.cs
public partial class ThreadPoolMessageScheduler : IQueuedHandler {
    private readonly ConcurrentBag<AsyncStateMachine> _pool;
    // ...
    public void Publish(Message message) {
        // ...
        AsyncStateMachine stateMachine;
        if (!_pool.TryTake(out stateMachine)) {
            stateMachine = new PoolingAsyncStateMachine(this);
        }
        // ...
    }
}

// src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.StateMachine.cs
private class AsyncStateMachine {
    // ...
    protected void ReturnToPool() => _scheduler._pool.Add(this);
}

After:

// src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.cs
public partial class ThreadPoolMessageScheduler : IQueuedHandler {
    // ...
    public void Publish(Message message) {
        // ...
        var stateMachine = messageCount > MaxPoolSize
            ? new(this)
            : RentFromPool();
        // ...
    }
}

// src/KurrentDB.Core/Bus/ThreadPoolMessageScheduler.Pooling.cs
partial class ThreadPoolMessageScheduler {
    private volatile AsyncStateMachine _firstNode;

    private void ReturnToPool(AsyncStateMachine node) {
        // lock-free push to linked list
        do { ... } while (Interlocked.CompareExchange(...) != ...);
    }

    private AsyncStateMachine RentFromPool() {
        // lock-free pop from linked list
        do { ... } while (Interlocked.CompareExchange(...) != ...);
        return current;
    }
}

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies the replacement of ConcurrentBag with a custom lock-free pooling implementation, raising a valid concern about increased complexity and risk in a critical component for a performance gain that should be justified.



                     PR 5445 (2026-01-12)                    
[general] Improve timeout handling in tests

✅ Improve timeout handling in tests

In ProcessConnectorEvents, replace DateTime.UtcNow with a Stopwatch for more reliable timeout measurement and throw a TimeoutException if the connector is not found, instead of returning silently.

src/Connectors/KurrentDB.Connectors.Tests/Planes/Management/ManagementServerFixture.cs [41-60]

 public async Task ProcessConnectorEvents(string connectorId, CancellationToken cancellationToken = default) {
 	// Wait for the background projection to process events for this connector.
 	// We poll until the connector appears in the snapshot to avoid race conditions
 	// with the background ConnectorsStateProjection service.
 	var timeout = TimeSpan.FromSeconds(5);
-	var start   = DateTime.UtcNow;
+	var stopwatch = System.Diagnostics.Stopwatch.StartNew();
 
-	while (DateTime.UtcNow - start < timeout) {
+	while (stopwatch.Elapsed < timeout) {
 		cancellationToken.ThrowIfCancellationRequested();
 
 		var (snapshot, _, _) = await AssemblyFixture.SnapshotProjectionsStore.LoadSnapshot<ConnectorsSnapshot>(
 			ConnectorQueryConventions.Streams.ConnectorsStateProjectionStream
 		);
 
 		if (snapshot.Connectors.Any(c => c.ConnectorId == connectorId))
 			return;
 
 		await Task.Delay(50, cancellationToken);
 	}
+	
+	throw new TimeoutException($"Connector with ID '{connectorId}' did not appear in the snapshot within the {timeout.TotalSeconds}s timeout.");
 }

Suggestion importance[1-10]: 8


Why: The suggestion correctly points out two significant issues in the test helper method: using DateTime.UtcNow for timeouts and silent failure. The proposed changes to use Stopwatch and throw a TimeoutException substantially improve test reliability and debuggability.



                     PR 5444 (2026-01-12)                    
[possible issue] Avoid converting UTC time to local

✅ Avoid converting UTC time to local

Avoid converting the created timestamp to local time. Instead, format the UTC DateTime directly using the ISO 8601 round-trip format specifier (o) to ensure timezone consistency.

src/KurrentDB.Core/DuckDB/InlineFunctions.cs [47]

-			$"{{ \"data\": {dataString}, \"metadata\": {metaString}, \"stream_id\": \"{stream}\", \"created\": \"{created.ToLocalTime():yyyy-MM-dd'T'HH:mm:ssK}\", \"event_type\": \"{eventType}\" }}";
+			$"{{ \"data\": {dataString}, \"metadata\": {metaString}, \"stream_id\": \"{stream}\", \"created\": \"{created:o}\", \"event_type\": \"{eventType}\" }}";

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that converting UTC time to local server time is not a best practice and can lead to timezone-related issues. Proposing to use the round-trip format specifier (o) on the original UTC DateTime is a robust solution that improves data consistency.


[possible issue] Cast JSON timestamp to TIMESTAMP

✅ Cast JSON timestamp to TIMESTAMP

In the AllCteTemplate query, cast the event->>'created' string value to a TIMESTAMP type to ensure correct sorting and filtering.

src/KurrentDB/Components/Query/QueryService.cs [97]

-event->>'created' as created_at
+CAST(event->>'created' AS TIMESTAMP) as created_at

Suggestion importance[1-10]: 8


Why: The PR changes the created_at column to be a string, which would break any sorting or filtering operations that expect a timestamp. This suggestion correctly points out the need to cast this string back to a TIMESTAMP type to maintain query functionality.



                     PR 5437 (2026-01-06)                    
[high-level] Consider a more generic multi-append API

✅ Consider a more generic multi-append API

The new multi-stream append feature uses a low-level, index-based API. It is suggested to create a more abstract, higher-level API to improve safety and ease of use for future implementations.

Examples:

src/KurrentDB.Core/Bus/Extensions/PublisherWriteExtensions.cs [45-52]

	public static async Task WriteEvents(
		this IPublisher publisher,
		LowAllocReadOnlyMemory<string> streams,
		LowAllocReadOnlyMemory<long> expectedRevisions,
		LowAllocReadOnlyMemory<Event> events,
		LowAllocReadOnlyMemory<int> eventStreamIndexes,
		CancellationToken cancellationToken = default
	) {

src/KurrentDB.SecondaryIndexing/Indexes/User/Management/UserIndexEventStore.cs [46-102]

		LowAllocReadOnlyMemory<string> streams = duplicate
			? [stream, UserIndexConstants.ManagementAllStream]
			: new(stream);

		LowAllocReadOnlyMemory<long> expectedVersions = duplicate
			? [expectedVersion.Value, ExpectedVersion.Any]
			: new(expectedVersion.Value);

		var totalEventCount = duplicate ? events.Count * 2 : events.Count;
		Event[] processedEvents = new Event[totalEventCount];

 ... (clipped 47 lines)

Solution Walkthrough:

Before:

// In PublisherWriteExtensions.cs
public static async Task WriteEvents(
    this IPublisher publisher,
    LowAllocReadOnlyMemory<string> streams,
    LowAllocReadOnlyMemory<long> expectedRevisions,
    LowAllocReadOnlyMemory<Event> events,
    LowAllocReadOnlyMemory<int> eventStreamIndexes,
    CancellationToken cancellationToken = default
) { ... }

// Usage in UserIndexEventStore.cs
// ... build parallel arrays for streams, expectedVersions, events, and eventStreamIndexes
await _client.Writing.WriteEvents(
    streams: streams,
    expectedRevisions: expectedVersions,
    events: processedEvents,
    eventStreamIndexes: eventStreamIndexes,
    cancellationToken);

After:

// Proposed higher-level API
public record MultiStreamWrite(
    string Stream,
    long ExpectedRevision,
    IReadOnlyCollection<Event> Events
);

public static async Task<...> WriteEvents(
    this IPublisher publisher,
    IReadOnlyCollection<MultiStreamWrite> writes,
    CancellationToken cancellationToken = default
) {
    // Internally, this would build the low-level parallel arrays
    // and call the existing implementation.
}

// Simplified usage
var writes = new List<MultiStreamWrite> { new(stream, expectedVersion, events) };
if (duplicate) {
    writes.Add(new(allStream, ExpectedVersion.Any, events));
}
await _client.Writing.WriteEvents(writes, cancellationToken);

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies that the new multi-stream write API is low-level and potentially error-prone, proposing a significant architectural improvement for this new core feature that would enhance usability and maintainability.


[possible issue] Fix bug with empty event collections

✅ Fix bug with empty event collections

Add a guard clause to handle empty event collections to prevent a potential crash, and refactor the event processing loop for improved readability.

src/KurrentDB.SecondaryIndexing/Indexes/User/Management/UserIndexEventStore.cs [37-94]

 	public async Task<AppendEventsResult> AppendEvents(
 		StreamName stream,
 		ExpectedStreamVersion expectedVersion,
 		IReadOnlyCollection<NewStreamEvent> events,
 		CancellationToken cancellationToken) {
+
+		if (events.Count == 0) {
+			var streamRevision = await _inner.GetStreamVersion(stream, cancellationToken);
+			return AppendEventsResult.StreamExists(streamRevision.Value);
+		}
 
 		// decide if we are going to duplicate the events to the user index all Stream
 		var duplicate = stream.ToString().StartsWith(UserIndexConstants.Category);
 
 		LowAllocReadOnlyMemory<string> streams = duplicate
 			? [stream, UserIndexConstants.ManagementAllStream]
 			: new(stream);
 
 		LowAllocReadOnlyMemory<long> expectedVersions = duplicate
 			? [expectedVersion.Value, ExpectedVersion.Any]
 			: new(expectedVersion.Value);
 
 		var totalEventCount = duplicate ? events.Count * 2 : events.Count;
-		Event[] processedEvents = new Event[totalEventCount];
-		int[] eventStreamIndexes = new int[totalEventCount];
+		var processedEvents = new Event[totalEventCount];
+		var eventStreamIndexes = new int[totalEventCount];
 
 		var originalStreamIndex = 0;
 		var allStreamIndex = 1;
 
 		int i = 0;
-		int j = events.Count;
 		foreach (var evt in events) {
 			var message = Message.Builder
 				.Value(evt.Payload!)
 				.Headers(new Headers(evt.Metadata.ToHeaders()))
 				.WithSchemaType(SchemaDataFormat.Json)
 				.Create();
 			var data = await _serialize(evt.Payload, message.Headers);
 			var dataArray = data.ToArray();
 			var schema = SchemaInfo.FromHeaders(message.Headers);
 			var isJson = schema.SchemaDataFormat == SchemaDataFormat.Json;
 
 			// process the events into the original stream
 			processedEvents[i] = new Event(
 				eventId: evt.Id,
 				eventType: schema.SchemaName,
 				isJson: isJson,
 				data: dataArray);
 			eventStreamIndexes[i] = originalStreamIndex;
 
 			// process the events into the management stream if necessary
 			if (duplicate) {
-				processedEvents[j] = new Event(
+				var duplicateIndex = i + events.Count;
+				processedEvents[duplicateIndex] = new Event(
 					eventId: Guid.NewGuid(),
 					eventType: schema.SchemaName,
 					isJson: isJson,
 					data: dataArray);
-				eventStreamIndexes[j] = allStreamIndex;
+				eventStreamIndexes[duplicateIndex] = allStreamIndex;
 			}
 
 			i++;
-			j++;
 		}
 ...

Suggestion importance[1-10]: 6


Why: The suggestion correctly identifies a potential bug when handling an empty events collection and proposes a valid refactoring to simplify the loop logic, but the provided improved_code for the bug fix is incorrect as it calls a non-existent method _inner.GetStreamVersion.


[general] Fix typo in error message string

✅ Fix typo in error message string

Fix a typo in the SecondaryIndexingDisabled error message by adding a missing closing parenthesis.

src/KurrentDB.Api.V2/Modules/Indexes/ApiErrors.cs [32-34]

 	public static RpcException SecondaryIndexingDisabled() => RpcExceptions.FromError(
 		error: IndexesError.SecondaryIndexingDisabled,
-		message: "Secondary indexing is disabled (configuration key KurrentDB::SecondaryIndexing::Enabled is false");
+		message: "Secondary indexing is disabled (configuration key KurrentDB::SecondaryIndexing::Enabled is false)");

Suggestion importance[1-10]: 2


Why: The suggestion correctly identifies and fixes a typo (a missing parenthesis) in an error message string, which is a minor but valid improvement for message clarity.



                     PR 5432 (2026-01-05)                    
[general] Fix typo in error text

✅ Fix typo in error text

Correct the grammatical error "is has" to "has" in the error message string.

src/KurrentDB.Api.V2/Modules/Streams/ApiErrors.cs [75-76]

-var message = $"Stream '{stream}' is has a different group of messages in this session. " +
+var message = $"Stream '{stream}' has a different group of messages in this session. " +
                 $"Appends for the same stream must currently be grouped together and not interleaved with appends for other streams.";

Suggestion importance[1-10]: 3


Why: The suggestion correctly identifies and fixes a grammatical error ("is has") in the new error message, improving its clarity and professionalism.



                     PR 5430 (2026-01-02)                    
[high-level] Refactor logging dependency injection strategy

Refactor logging dependency injection strategy


Refactor the code to inject specific ILogger instances directly into classes instead of injecting ILoggerFactory. This change simplifies constructors and improves testability.

Examples:

src/KurrentDB.SecondaryIndexing/Indexes/User/UserIndexEngine.cs [39-54]

	public UserIndexEngine(
		ISystemClient client,
		IPublisher publisher,
		ISubscriber subscriber,
		ISchemaSerializer serializer,
		SecondaryIndexingPluginOptions options,
		DuckDBConnectionPool db,
		IReadIndex index,
		TFChunkDbConfig chunkDbConfig,
		[FromKeyedServices(SecondaryIndexingConstants.InjectionKey)]

 ... (clipped 6 lines)

src/KurrentDB.SecondaryIndexing/Indexes/User/UserIndexEngineSubscription.cs [26-41]

public partial class UserIndexEngineSubscription(
	ISystemClient client,
	IPublisher publisher,
	ISchemaSerializer serializer,
	SecondaryIndexingPluginOptions options,
	DuckDBConnectionPool db,
	IReadIndex readIndex,
	Meter meter,
	Func<(long, DateTime)> getLastAppendedRecord,
	ILoggerFactory logFactory,

 ... (clipped 6 lines)

Solution Walkthrough:

Before:

public class UserIndexEngine {
    private readonly ILogger _log;
    private readonly UserIndexEngineSubscription _subscription;

    public UserIndexEngine(ILoggerFactory loggerFactory, ...) {
        _log = loggerFactory.CreateLogger<UserIndexEngine>();
        _subscription = new UserIndexEngineSubscription(..., loggerFactory, ...);
    }
    // ...
}

public class UserIndexEngineSubscription(..., ILoggerFactory logFactory, ...) {
    private readonly ILogger _log = logFactory.CreateLogger<UserIndexEngineSubscription>();
    // ...
}

After:

// Assuming UserIndexEngineSubscription is registered in DI container
public class UserIndexEngine(ILogger<UserIndexEngine> log, UserIndexEngineSubscription subscription) {
    private readonly ILogger _log = log;
    private readonly UserIndexEngineSubscription _subscription = subscription;

    // ...
}

public class UserIndexEngineSubscription(..., ILogger<UserIndexEngineSubscription> log, ...) {
    private readonly ILogger _log = log;
    // ...
}

Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies a suboptimal dependency injection pattern where ILoggerFactory is used instead of ILogger, impacting multiple new classes and affecting design quality and testability.


[general] Permit uppercase identifiers

✅ Permit uppercase identifiers

Add RegexOptions.IgnoreCase to the GeneratedRegex attribute to allow uppercase characters in index and column identifiers.

src/KurrentDB.SecondaryIndexing/Indexes/User/UserIndexSql.cs [40-41]

-[GeneratedRegex("^[a-z][a-z0-9_-]*$", RegexOptions.Compiled)]
+[GeneratedRegex("^[a-z][a-z0-9_-]*$", RegexOptions.Compiled | RegexOptions.IgnoreCase)]
 private static partial Regex ValidationRegex();

Suggestion importance[1-10]: 4


Why: This is a reasonable suggestion to make the identifier validation case-insensitive, which could improve flexibility, but it's a minor enhancement and not a bug fix.



                     PR 5425 (2025-12-22)                    
[possible issue] Handle null selector result correctly

✅ Handle null selector result correctly

Modify the CanHandleEvent method to treat a null or undefined result from a selector expression as a signal to skip indexing the event, similar to the skip value.

src/KurrentDB.SecondaryIndexing/Indexes/User/UserIndexProcessor.cs [163-172]

 			var fieldValue = _evaluator.Select(_fieldSelectorExpression);
-			if (fieldValue == JsValue.Null)
-				return true;
+			if (fieldValue.IsNull() || fieldValue.IsUndefined())
+				return _fieldSelectorExpression is null;
 
 			if (_skip.Equals(fieldValue))
 				return false;
 
 			field = (TField)TField.ParseFrom(fieldValue);
 
 			return true;

Suggestion importance[1-10]: 7

__

Why: The suggestion correctly identifies a logical flaw where an event with a null selector result is indexed. The proposed change to skip such events makes the indexing behavior more intuitive and robust.



                     PR 5422 (2025-12-18)                    
[high-level] Refactor JavaScript object creation

✅ Refactor JavaScript object creation

Instead of manually building a JavaScript object using a hierarchy of C# classes that inherit from ObjectInstance, define a simple C# record or class that matches the desired JavaScript structure. Then, use Jint's JsValue.FromObject to automatically convert the C# object to a JsValue, simplifying the code and improving maintainability.

Examples:

src/KurrentDB.SecondaryIndexing/Indexes/User/JavaScript/RecordObject.cs [12-77]

internal sealed class RecordObject : JsObject {
	private readonly PositionObject _position;
	private readonly SchemaInfoObject _schemaInfo;

	public RecordObject(Engine engine, JsonParser parser) : base(engine, parser) {
		_position = new PositionObject(engine, parser);
		SetReadOnlyProperty("position", _position);

		_schemaInfo = new SchemaInfoObject(engine, parser);
		SetReadOnlyProperty("schemaInfo", _schemaInfo);

 ... (clipped 56 lines)

src/KurrentDB.SecondaryIndexing/Indexes/User/UserIndexProcessor.cs [157-166]

			_jsRecord.MapFrom(resolvedEvent, ++_sequenceId);

			if (_filter is not null) {
				var passesFilter = _filter.Call(_jsRecord).AsBoolean();
				if (!passesFilter)
					return false;
			}

			if (_fieldSelector is not null) {
				var fieldJsValue = _fieldSelector.Call(_jsRecord);

Solution Walkthrough:

Before:

// KurrentDB.SecondaryIndexing/Indexes/User/UserIndexProcessor.cs
class UserIndexProcessor {
    private readonly RecordObject _jsRecord;

    public UserIndexProcessor(...) {
        _jsRecord = new RecordObject(_engine, parser);
    }

    private bool CanHandleEvent(ResolvedEvent resolvedEvent, out TField? field) {
        _jsRecord.MapFrom(resolvedEvent, ++_sequenceId);
        var passesFilter = _filter.Call(_jsRecord).AsBoolean();
        // ...
    }
}

// KurrentDB.SecondaryIndexing/Indexes/User/JavaScript/RecordObject.cs
class RecordObject : JsObject { // Inherits from Jint's ObjectInstance
    public RecordObject(...) {
        _position = new PositionObject(engine, parser);
        SetReadOnlyProperty("position", _position); // Manual property setting
        // ...
    }

    public void MapFrom(ResolvedEvent resolvedEvent, ulong sequenceId) {
        RecordId = $"{resolvedEvent.OriginalEvent.EventId}"; // Manual mapping
        // ...
        _position.MapFrom(resolvedEvent, sequenceId);
    }
}

After:

// Define simple C# records to represent the JS object structure
record JsApiPosition(string streamId, ulong streamRevision, ulong logPosition);
record JsApiSchemaInfo(string subject, string type);
record JsApiRecord(
    string recordId,
    JsApiPosition position,
    JsApiSchemaInfo schemaInfo,
    object value, // Can be lazily evaluated
    object headers
    // ... other properties
);

// KurrentDB.SecondaryIndexing/Indexes/User/UserIndexProcessor.cs
class UserIndexProcessor {
    private bool CanHandleEvent(ResolvedEvent resolvedEvent, out TField? field) {
        // Create a simple C# object
        var recordPoco = MapToPoco(resolvedEvent, ++_sequenceId);

        // Let Jint handle the conversion to a JS object
        var jsRecord = Jint.Native.JsValue.FromObject(_engine, recordPoco);

        var passesFilter = _filter.Call(jsRecord).AsBoolean();
        // ...
    }
}

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a complex, custom implementation for creating JavaScript objects and proposes a much simpler, more idiomatic approach using Jint's built-in object mapping, which would significantly reduce code complexity and improve maintainability.
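
A rough sketch of what the suggested approach looks like in isolation. The record shapes below are illustrative stand-ins, not the actual KurrentDB types, and this assumes the Jint 3.x API (`Engine.Evaluate`, `JsValue.FromObject`); lowercase record parameter names are used so the JS-side property names match what a user script would expect:

```csharp
using System;
using Jint;
using Jint.Native;

// Hypothetical stand-ins for the real record structure
internal record JsApiPosition(string streamId, ulong streamRevision, ulong logPosition);
internal record JsApiRecord(string recordId, JsApiPosition position);

internal static class FromObjectSketch {
    public static void Main() {
        var engine = new Engine();
        var poco = new JsApiRecord("rec-1", new JsApiPosition("stream-a", 5, 1234));

        // Jint converts the CLR object graph into a JS value; no ObjectInstance
        // subclasses or manual SetReadOnlyProperty calls are needed
        JsValue jsRecord = JsValue.FromObject(engine, poco);
        engine.SetValue("record", jsRecord);

        // Script code can read the mapped properties directly
        var id = engine.Evaluate("record.recordId").AsString();
        Console.WriteLine(id);
    }
}
```

A compiled filter function could then be invoked as `_filter.Call(jsRecord).AsBoolean()`, mirroring the existing UserIndexProcessor call sites.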



                     PR 5421 (2025-12-18)                    
[general] Reintroduce lazy initialization for connection pools

✅ Reintroduce lazy initialization for connection pools

Reintroduce lazy initialization for the DuckDBConnectionPool to avoid creating it for every connection, restoring the previous performance-optimized behavior.

src/KurrentDB/KestrelHelpers.cs [34-41]

 	public static void UseDuckDbConnectionPoolPerConnection(this ListenOptions listenOptions) {
 		listenOptions.Use(next => async connectionContext => {
 			var poolFactory = listenOptions.ApplicationServices.GetRequiredService<DuckDBConnectionPoolLifetime>();
-			using var pool = poolFactory.CreatePool();
-			connectionContext.Items[nameof(DuckDBConnectionPool)] = pool;
-			await next(connectionContext);
+			var lazyPool = new Lazy<DuckDBConnectionPool>(poolFactory.CreatePool);
+			connectionContext.Items[nameof(Lazy<DuckDBConnectionPool>)] = lazyPool;
+
+			try {
+				await next(connectionContext);
+			} finally {
+				if (lazyPool.IsValueCreated) {
+					lazyPool.Value.Dispose();
+				}
+			}
 		});
 	}

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a performance regression where a connection pool is created eagerly for every connection and proposes reintroducing lazy initialization, which is a significant performance improvement.
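
The dispose-only-if-created shape of the suggestion can be reduced to a small standalone sketch. `FakePool` is a hypothetical stand-in for an expensive per-connection resource such as `DuckDBConnectionPool`:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical stand-in for an expensive per-connection resource
internal sealed class FakePool : IDisposable {
    public static int Created;
    public FakePool() => Created++;
    public void Dispose() { }
}

internal static class LazyPoolSketch {
    // Mirrors the middleware shape: wrap the pool in Lazy<T>, dispose only if used
    public static async Task HandleConnectionAsync(Func<Lazy<FakePool>, Task> next) {
        var lazyPool = new Lazy<FakePool>(() => new FakePool());
        try {
            await next(lazyPool);
        } finally {
            // Pools that were never materialized cost nothing and need no cleanup
            if (lazyPool.IsValueCreated)
                lazyPool.Value.Dispose();
        }
    }

    public static async Task Main() {
        // This request never touches the pool, so no pool is created
        await HandleConnectionAsync(_ => Task.CompletedTask);
        // This request materializes the pool on first access
        await HandleConnectionAsync(pool => { _ = pool.Value; return Task.CompletedTask; });
        Console.WriteLine(FakePool.Created); // 1: only the second request created one
    }
}
```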


[general] Update extension method for lazy pool

✅ Update extension method for lazy pool

Update the GetDuckDbConnectionPool extension method to retrieve the Lazy&lt;DuckDBConnectionPool&gt; wrapper from the connection items and return its Value, aligning with the lazy initialization pattern.

src/KurrentDB.Core/Services/Transport/Http/HttpContextExtensions.cs [16-25]

 	[CanBeNull]
 	public static DuckDBConnectionPool GetDuckDbConnectionPool(this HttpContext httpContext) {
 		var connectionItemsFeature = httpContext.Features.Get<IConnectionItemsFeature>();
 
 		if (connectionItemsFeature is null ||
-			!connectionItemsFeature.Items.TryGetValue(nameof(DuckDBConnectionPool), out var item))
+		    !connectionItemsFeature.Items.TryGetValue(nameof(Lazy<DuckDBConnectionPool>), out var item) ||
+		    item is not Lazy<DuckDBConnectionPool> lazyPool)
 			return null;
 
-		return item as DuckDBConnectionPool;
+		return lazyPool.Value;
 	}

Suggestion importance[1-10]: 8


Why: This suggestion is a necessary follow-up to reintroducing lazy initialization for the connection pool, ensuring the retrieval logic correctly handles the Lazy wrapper to access the pool instance.



                     PR 5409 (2025-12-16)                    
[possible issue] Fix incorrect configuration option skipping

✅ Fix incorrect configuration option skipping

Fix a bug that incorrectly skips processing scalar configuration options when their top-level value is missing in a provider.

src/KurrentDB.Core/Configuration/ClusterVNodeOptions.Framework.cs [104-125]

 				if (!provider.TryGet(option.Value.Key, out var value) && !isDefault) {
 					// Handle options that have been configured as arrays (GossipSeed is currently the only one
 					// where this is possible)
 					if (option.Value.OptionSchema.Value<string>("type") is "array") {
 						var parentPath = option.Value.Key;
 						var childValues = new List<string>();
 
 						foreach (var childKey in provider.GetChildKeys([], parentPath)) {
 							var absoluteChildKey = parentPath + ":" + childKey;
 							if (provider.TryGet(absoluteChildKey, out var childValue) && childValue is not null) {
 								childValues.Add(childValue);
 								sourceDisplayName = GetSourceDisplayName(absoluteChildKey, provider);
 							}
 						}
 
-						value = string.Join(", ", childValues);
 						if (childValues.Count is 0)
 							continue; // no child values. skip
+
+						value = string.Join(", ", childValues);
 					} else {
-						continue; // no value and it is an array so don't check for children. skip.
+						continue; // no value for this option in this provider.
 					}
 				}

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a bug where scalar options would be skipped if their top-level key is missing, preventing them from being loaded from other providers.
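
The provider semantics this fix relies on can be illustrated with Microsoft.Extensions.Configuration's in-memory provider. This is a sketch only; GossipSeed is borrowed from the description as the example array option:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration.Memory;

internal static class ChildKeysSketch {
    public static void Main() {
        // Array-style options surface as child keys, not as a top-level value
        var provider = new MemoryConfigurationProvider(new MemoryConfigurationSource {
            InitialData = new Dictionary<string, string?> {
                ["GossipSeed:0"] = "node1:2113",
                ["GossipSeed:1"] = "node2:2113",
            }
        });

        // The top-level key itself carries no value...
        bool hasScalar = provider.TryGet("GossipSeed", out _);
        Console.WriteLine(hasScalar); // False

        // ...so the array branch must walk the children and join them,
        // skipping the option only when no child values exist at all
        var childValues = new List<string>();
        foreach (var childKey in provider.GetChildKeys(Array.Empty<string>(), "GossipSeed")) {
            if (provider.TryGet("GossipSeed:" + childKey, out var v) && v is not null)
                childValues.Add(v);
        }
        Console.WriteLine(childValues.Count == 0 ? "(skip)" : string.Join(", ", childValues));
    }
}
```

A scalar option present in a *different* provider has neither a value nor children here, which is why that case must `continue` to let later providers supply it rather than being dropped outright.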



                     PR 5405 (2025-12-12)                    
[general] Simplify null or whitespace string handling

✅ Simplify null or whitespace string handling

In TryPrettifyJson, simplify the initial check to return string.Empty for any null or whitespace input string, ensuring consistent behavior.

src/KurrentDB/Components/Query/Query.razor [176-186]

 static string TryPrettifyJson(string json) {
-	if (string.IsNullOrWhiteSpace(json)) return json ?? string.Empty;
+	if (string.IsNullOrWhiteSpace(json)) return string.Empty;
 	try {
 		using var doc = JsonDocument.Parse(json);
 		return JsonSerializer.Serialize(doc.RootElement, Indent);
 	}
 	catch (JsonException) {
 		// Not valid JSON; return original (or decide to throw)
 		return json;
 	}
 }

Suggestion importance[1-10]: 4


Why: The suggestion improves code clarity and consistency by simplifying the handling of null or whitespace strings, which is a good practice for maintainability.
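
For reference, the simplified helper behaves as follows on the edge cases. This sketch assumes `Indent` is a `JsonSerializerOptions` with `WriteIndented` enabled, standing in for the options referenced in Query.razor:

```csharp
using System;
using System.Text.Json;

internal static class PrettifySketch {
    // Assumed equivalent of the Indent options used in the component
    private static readonly JsonSerializerOptions Indent = new() { WriteIndented = true };

    internal static string TryPrettifyJson(string json) {
        // Null, empty, and whitespace all collapse to string.Empty now
        if (string.IsNullOrWhiteSpace(json)) return string.Empty;
        try {
            using var doc = JsonDocument.Parse(json);
            return JsonSerializer.Serialize(doc.RootElement, Indent);
        } catch (JsonException) {
            return json; // not valid JSON; hand it back unchanged
        }
    }

    public static void Main() {
        Console.WriteLine(TryPrettifyJson(null).Length);  // 0 (previously returned null)
        Console.WriteLine(TryPrettifyJson("   ").Length); // 0
        Console.WriteLine(TryPrettifyJson("not json"));   // not json
        Console.WriteLine(TryPrettifyJson("{\"a\":1}"));  // indented JSON object
    }
}
```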



                     PR 5391 (2025-12-07)                    
[high-level] System memory calculation logic is flawed

✅ System memory calculation logic is flawed

The current method for calculating available system memory is flawed because it incorrectly uses process-specific memory metrics (Process.WorkingSet64) instead of system-wide ones. It is recommended to either make the memory limit a configurable setting or to rely on DuckDB's own default memory management.

Examples:

src/KurrentDB.Core/DuckDB/DuckDBConnectionPoolLifetime.cs [56-63]

		(double Total, double Used) CalculateRam() {
			var process = Process.GetCurrentProcess();
			var processRam = process.WorkingSet64;
			var totalRam = GC.GetGCMemoryInfo().TotalAvailableMemoryBytes;
			var totalGb = (double)totalRam / 1024 / 1024 / 1024;
			var processGb = (double)processRam / 1024 / 1024 / 1024;
			return (totalGb, processGb);
		}

Solution Walkthrough:

Before:

(double Total, double Used) CalculateRam() {
    var process = Process.GetCurrentProcess();
    var processRam = process.WorkingSet64; // This is only this process's memory
    var totalRam = GC.GetGCMemoryInfo().TotalAvailableMemoryBytes;
    // ...
    return (totalGb, processGb);
}

// in constructor...
var (total, used) = CalculateRam();
var availableRam = total - used; // Incorrectly calculates "available" RAM
var duckDbRam = (int)Math.Round(availableRam / 2);
_pool = new ConnectionPoolWithFunctions($"Data Source={path};memory_limit={duckDbRam}GB", ...);

After:

// Suggested Alternative: Make the memory limit configurable
// (This requires adding a property to TFChunkDbConfig)

public DuckDBConnectionPoolLifetime(TFChunkDbConfig config, ...) {
    var path = ...;
    var connectionString = $"Data Source={path}";

    // Check for a configured memory limit
    if (config.MemoryLimitGB.HasValue) {
        connectionString += $";memory_limit={config.MemoryLimitGB.Value}GB";
    }
    // If not configured, DuckDB will use its own default (typically 80% of system RAM)

    _pool = new ConnectionPoolWithFunctions(connectionString, ...);
}

Suggestion importance[1-10]: 10


Why: The suggestion correctly identifies a critical flaw where process-specific memory (Process.WorkingSet64) is incorrectly used to calculate system-wide available RAM, leading to a dangerously high and incorrect memory limit for DuckDB.
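
A minimal sketch of why the metric is wrong and what the suggested direction looks like. The `memoryLimitGb` config value and connection-string shape are hypothetical, not the actual KurrentDB options:

```csharp
using System;
using System.Diagnostics;

internal static class MemorySketch {
    public static void Main() {
        // Container-aware total memory visible to this process
        long totalBytes = GC.GetGCMemoryInfo().TotalAvailableMemoryBytes;

        // WorkingSet64 is only this process's resident set. Subtracting it from
        // the total says nothing about memory held by other processes, so
        // "total - workingSet" overestimates free RAM on any shared host.
        long workingSet = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine($"total: {totalBytes >> 30} GiB, this process: {workingSet >> 30} GiB");

        // Suggested direction: take an explicit limit from configuration and
        // otherwise let DuckDB apply its own default memory management
        int? memoryLimitGb = null; // hypothetical configured value
        var connectionString = "Data Source=index.db"
            + (memoryLimitGb is int gb ? $";memory_limit={gb}GB" : string.Empty);
        Console.WriteLine(connectionString); // Data Source=index.db
    }
}
```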



                     PR 5387 (2025-12-01)                    
[possible issue] Fix a potential race condition

✅ Fix a potential race condition

To prevent a potential race condition in the Subscribe method, assign _cts to a local variable at the start and use that variable for the null check and subsequent accesses.

src/KurrentDB.SecondaryIndexing/Subscriptions/SecondaryIndexSubscription.cs [29-50]

 public void Subscribe() {
-	if (_cts == null) {
+	var cts = _cts;
+	if (cts == null) {
 		log.LogWarning("Subscription already terminated");
 		return;
 	}
 	var position = indexProcessor.GetLastPosition();
 	var startFrom = position == TFPos.Invalid ? Position.Start : Position.FromInt64(position.CommitPosition, position.PreparePosition);
 	log.LogInformation("Starting indexing subscription from {StartFrom}", startFrom);
 
 	_subscription = new(
 		bus: publisher,
 		expiryStrategy: DefaultExpiryStrategy.Instance,
 		checkpoint: startFrom,
 		resolveLinks: false,
 		user: SystemAccounts.System,
 		requiresLeader: false,
 		catchUpBufferSize: options.CommitBatchSize * 2,
-		cancellationToken: _cts!.Token
+		cancellationToken: cts.Token
 	);
 
-	_processingTask = ProcessEvents(_cts.Token);
+	_processingTask = ProcessEvents(cts.Token);
 }

Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies a potential race condition where _cts could be nullified after the check, and proposes the standard thread-safe pattern to fix it.
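
The read-a-local-copy pattern generalizes beyond this subscription. The sketch below is a simplified stand-in for SecondaryIndexSubscription, with a hypothetical Terminate method showing the matching hand-off on the clearing side:

```csharp
using System;
using System.Threading;

internal sealed class SubscriptionSketch {
    private CancellationTokenSource? _cts = new();

    public void Subscribe() {
        // Capture the field once; re-reading _cts later could observe null if
        // Terminate() runs between the null check and the use
        var cts = _cts;
        if (cts == null) {
            Console.WriteLine("Subscription already terminated");
            return;
        }
        // Every subsequent access goes through the stable local copy
        StartProcessing(cts.Token);
    }

    public void Terminate() {
        // Exchange guarantees exactly one caller gets the live CTS to clean up
        var cts = Interlocked.Exchange(ref _cts, null);
        cts?.Cancel();
        cts?.Dispose();
    }

    private static void StartProcessing(CancellationToken token) { /* ... */ }
}
```

The local copy removes the null-dereference window, though it does not by itself stop Terminate from disposing the CTS while Subscribe still holds it, so disposal ordering between the two still needs care.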



                     PR 5373 (2025-11-19)                    
[general] Improve exception message with context

✅ Improve exception message with context

Improve the exception message when an index is not found by including the indexName to provide better context for debugging.

src/KurrentDB/Components/Query/QueryService.cs [36-43]

 case "index":
 	var indexName = tokens[1];
 	var exists = tryGetUserIndexTableDetails(indexName, out var tableName, out var tableFunctionName, out var hasFields);
 	if (!exists)
-		throw new("Index does not exist");
+		throw new($"Index '{indexName}' does not exist.");
 
 	cte = string.Format(UserIndexCteTemplate, cteName, $"\"{tableName}\"", $"\"{tableFunctionName}\"", hasFields ? ", field" : string.Empty);
 	break;

Suggestion importance[1-10]: 4


Why: The suggestion improves the exception message by including the indexName, which enhances debuggability, but it is a minor quality improvement rather than a functional change.