2 changes: 1 addition & 1 deletion docs/.vuepress/configs/navbar.ts
@@ -27,7 +27,7 @@ export const navbarEn: NavbarOptions = [
{
text: "Clients",
children: [
{ text: "KurrentDB clients", link: "/clients/grpc/getting-started" },
{ text: "KurrentDB clients", link: "/clients/grpc/" },
],
},
{ text: "HTTP API", children: ver.linksFor("http-api", false) },
47 changes: 38 additions & 9 deletions docs/.vuepress/lib/samples.ts
@@ -1,27 +1,38 @@
import {logger, path} from 'vuepress/utils';
import {type ResolvedImport} from "../markdown/xode/types";
import version from "./version";
import * as fs from 'fs';

const base = "../../samples";

export function resolveMultiSamplesPath(src: string): ResolvedImport[] {
const split = src.split(':');
const cat = split.length < 2 ? undefined : split[0];
const paths = split.length === 1 ? src : split[1];
return paths.split(';').map(x => {
const r = resolveSamplesPath(x, cat);
return {label: r.label, importPath: r.path};
})
return paths.split(';')
.filter(x => x.trim() !== '') // Filter out empty strings
.map(x => {
const r = resolveSamplesPath(x, cat);
return {label: r.label, importPath: r.path};
})
}

export function resolveSamplesPath(src: string, srcCat: string | undefined) {
const def = (s: string) => {
return {label: "", path: s}
};

const ext = src.split('.').pop()!;
// Handle empty src
if (!src || src.trim() === '') {
console.warn(`Empty source path provided, srcCat: "${srcCat}"`);
return def(src);
}

const srcParts = src.split('.');
const ext = srcParts.length > 1 ? srcParts.pop()! : '';
const pseudo = src.split('/');
const includesCat = pseudo[0].startsWith('@');

if (!includesCat && srcCat === undefined) return def(src);

const cats: Record<string, Record<string, {path: string, version?: string, label?: string}>> = {
@@ -78,18 +89,36 @@ export function resolveSamplesPath(src: string, srcCat: string | undefined) {
}

let lang = cat[ext] ?? cat["default"];
if (lang === undefined && cat.path === undefined) {
logger.warn(`Unknown extension ${ext} in ${cat}`);
if (lang === undefined) {
// If no extension match and no default, try to find by partial match or return default
logger.warn(`Unknown extension "${ext}" in category "${catName}". Available extensions: ${Object.keys(cat).join(', ')}`);
return def(src);
}

// If we don't have an extension but we have a default, use it
if (ext === '' && cat["default"]) {
lang = cat["default"];
}

const samplesVersion = isVersion ? pseudo[1] : lang.version;
const langPath = samplesVersion !== undefined ? `${lang.path}/${samplesVersion}` : lang.path;
const toReplace = isVersion ? `${pseudo[0]}/${pseudo[1]}` : `${pseudo[0]}`;

const p = includesCat ? src.replace(toReplace, `${base}/${langPath}`) : `${base}/${langPath}/${src}`;
const resolvedPath = path.resolve(__dirname, p);

// Check if the resolved path is a directory, and if so, warn and return the original src
try {
const stat = fs.statSync(resolvedPath);
if (stat.isDirectory()) {
logger.warn(`Resolved path is a directory, not a file: ${resolvedPath}`);
return def(src);
}
} catch (error) {
// File doesn't exist, which is handled elsewhere
}

return {label: lang.label, path: path.resolve(__dirname, p)};
return {label: lang.label, path: resolvedPath};
}

export const projectionSamplesPath = "https://raw.githubusercontent.com/kurrent-io/KurrentDB/53f84e55ea56ccfb981aff0e432581d72c23fbf6/samples/http-api/data/";
11 changes: 2 additions & 9 deletions docs/clients/grpc/README.md
@@ -1,10 +1,3 @@
---
index: false
breadcrumbExclude: true
---
# KurrentDB Clients

# Clients

Learn how to use the KurrentDB client libraries to interact with the database.

<Catalog/>
Start working with KurrentDB using one of the official clients.
10 changes: 10 additions & 0 deletions docs/clients/grpc/c#/README.md
@@ -0,0 +1,10 @@
---
index: false
breadcrumbExclude: true
---

# C#

Learn how to use the KurrentDB C# client library to interact with the database.

<Catalog/>
83 changes: 83 additions & 0 deletions docs/clients/grpc/c#/appending-events.md
@@ -0,0 +1,83 @@
---
order: 2
---

# Appending events

When you start working with KurrentDB, it is empty. The first meaningful operation is to add one or more events to the database using one of the available client SDKs.

::: tip
Check the [Getting Started](getting-started.md) guide to learn how to configure and use the client SDK.
:::

## Append your first event

The simplest way to append an event to KurrentDB is to create an `EventData` object and call the `AppendToStream` method.

@[code{append-to-stream}](@grpc:appending-events/Program.cs)

`AppendToStream` takes a collection of `EventData`, which allows you to save more than one event in a single batch.
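To make the batching concrete, here is a minimal sketch of building a multi-event batch. Only the payload construction runs standalone; the commented append call assumes the `EventData`, `Uuid`, and `AppendToStreamAsync` shapes used in the snippets, and the stream and event type names are made up for illustration.

```csharp
using System;
using System.Linq;
using System.Text.Json;

// Serialize each domain event into a UTF-8 JSON payload.
var batch = new[] { new { Sku = "A-1", Qty = 2 }, new { Sku = "B-7", Qty = 1 } }
    .Select(e => JsonSerializer.SerializeToUtf8Bytes(e))
    .ToList();

// Each payload would then be wrapped in an EventData and the whole collection
// passed to a single append call, so all events are saved in one batch:
// await client.AppendToStreamAsync("order-123", StreamState.Any,
//     batch.Select(b => new EventData(Uuid.NewUuid(), "item-added", b)));
```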

Beyond the example above, other options exist for handling different scenarios.

::: tip
If you are new to Event Sourcing, please study the [Handling concurrency](#handling-concurrency) section below.
:::

## Working with EventData

Events appended to KurrentDB must be wrapped in an `EventData` object. This allows you to specify the event's content, the type of event, and whether it's in JSON format. In its simplest form, you need three arguments: **eventId**, **type**, and **data**.

### eventId

This takes the format of a `Uuid` and is used to uniquely identify the event you are trying to append. If two events with the same `Uuid` are appended to the same stream in quick succession, KurrentDB will only append one of the events to the stream.

For example, the following code will only append a single event:

@[code{append-duplicate-event}](@grpc:appending-events/Program.cs)

![Duplicate Event](../images/duplicate-event.png)
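One way to use this idempotency deliberately is to derive the event ID deterministically from a business fact, so a retried command produces the same identifier and the duplicate is dropped. The helper below is a hypothetical sketch using an MD5-derived `Guid`; for production, prefer a proper RFC 4122 name-based (version 5) UUID and wrap the result in the client's `Uuid` type.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical helper: the same business key always yields the same Guid,
// so a retried append carries the same event ID and is deduplicated.
static Guid DeterministicEventId(string businessKey)
{
    using var md5 = MD5.Create();
    // MD5 produces exactly 16 bytes, which map directly onto a Guid.
    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(businessKey));
    return new Guid(hash);
}
```

A retry of the same logical event, e.g. `DeterministicEventId("order-123/created")`, produces an identical ID, while a different key produces a different one.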

### type

Each event should be supplied with an event type. This unique string is used to identify the type of event you are saving.

It is common to see the explicit event class name used as the type, as it makes serialising and deserialising the event easy. However, we recommend against this because it couples the storage to the type and makes it more difficult to version the event at a later date.

### data

Representation of your event data. It is recommended that you store your events as JSON objects. This allows you to take advantage of all of KurrentDB's functionality, such as projections. That said, you can save events using whatever format suits your workflow. Eventually, the data will be stored as encoded bytes.

### metadata

It is standard practice to store additional information alongside your event that is not part of the event itself, such as correlation IDs, timestamps, or access information. KurrentDB lets you store this in a separate byte array, keeping it apart from the event data.
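As a sketch of the split between data and metadata, the snippet below serializes a hypothetical payload and a separate metadata object (correlation ID, timestamp, originating service) into the two byte arrays that `EventData` accepts; the field names are illustrative, not a required schema.

```csharp
using System;
using System.Text.Json;

// The event itself: only domain facts.
byte[] data = JsonSerializer.SerializeToUtf8Bytes(new { OrderId = "order-123", Amount = 42.5 });

// Everything that is *about* the event, rather than part of it, goes to metadata.
byte[] metadata = JsonSerializer.SerializeToUtf8Bytes(new
{
    CorrelationId = Guid.NewGuid(),
    Timestamp = DateTimeOffset.UtcNow,
    Source = "ordering-service"
});
```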

### isJson

A simple boolean field that tells KurrentDB whether the event is stored as JSON; it is `true` by default.

## Handling concurrency

When appending events to a stream, you can supply a *stream state* or *stream revision*. Your client uses this to inform KurrentDB of the state or version you expect the stream to be in when appending an event. If the stream isn't in that state, an exception will be thrown.

For example, if you try to append the same record twice, expecting both times that the stream doesn't exist, you will get an exception on the second:

@[code{append-with-no-stream}](@grpc:appending-events/Program.cs)

There are three available stream states:
- `Any`
- `NoStream`
- `StreamExists`

This check can be used to implement optimistic concurrency. When retrieving a stream from KurrentDB, note the current version number. When you save it back, you can determine if somebody else has modified the record in the meantime.

@[code{append-with-concurrency-check}](@grpc:appending-events/Program.cs)

<!-- ## Options TODO -->

## User credentials

You can provide user credentials to append the data as follows. This will override the default credentials set on the connection.

@[code{overriding-user-credentials}](@grpc:appending-events/Program.cs)

32 changes: 32 additions & 0 deletions docs/clients/grpc/c#/delete-stream.md
@@ -0,0 +1,32 @@
---
order: 9
---

# Deleting events

In KurrentDB, you can delete events and streams either partially or completely. Settings like `$maxAge` and `$maxCount` help control how long events are kept or how many events are stored in a stream, but they won't delete the entire stream.
When you need to fully remove a stream, KurrentDB offers two options: Soft Delete and Hard Delete.

## Soft delete

Soft delete in KurrentDB allows you to mark a stream for deletion without completely removing it, so you can still add new events later. While you can do this through the UI, using code is often better for automating the process, handling many streams at once, or applying custom rules. Code is especially helpful for large-scale deletions or when you need to integrate soft deletes into other workflows.

```csharp
await client.DeleteAsync(streamName, StreamState.Any);
```

::: note
Clicking the delete button in the UI performs a soft delete,
setting the TruncateBefore value to remove all events up to a certain point.
While this marks the events for deletion, actual removal occurs during the next scavenging process.
The stream can still be reopened by appending new events.
:::

## Hard delete

Hard delete in KurrentDB permanently removes a stream and its events. While you can use the HTTP API, code is often better for automating the process, managing multiple streams, and ensuring precise control. Code is especially useful when you need to integrate hard delete into larger workflows or apply specific conditions. Note that when a stream is hard deleted, you cannot reuse the stream name; appending to it again will raise an exception.

```csharp
await client.TombstoneAsync(streamName, StreamState.Any);
```
103 changes: 103 additions & 0 deletions docs/clients/grpc/c#/getting-started.md
@@ -0,0 +1,103 @@
---
order: 1
---

# Getting started

Get started by connecting your application to KurrentDB.

## Connecting to KurrentDB

To connect your application to KurrentDB, instantiate and configure the client.

::: tip Insecure clusters
All our gRPC clients are secure by default and must be explicitly configured to connect to an insecure server via [a connection string](#connection-string) or the client's configuration.
:::

### Required packages

Add the .NET `KurrentDB.Client` package to your project:

```bash
dotnet add package KurrentDB.Client
```


### Connection string

Each SDK has its own way of configuring the client, but you can always use a connection string. The KurrentDB connection string supports two schemas: `kurrentdb://` for connecting to a single-node server, and `kurrentdb+discover://` for connecting to a multi-node cluster. The difference is that with `kurrentdb://` the client connects directly to the specified node, while with `kurrentdb+discover://` it uses the gossip protocol to retrieve the cluster information and choose the right node to connect to. Since version 22.10, the server supports gossip on single-node deployments, so the `kurrentdb+discover://` schema can be used for any topology.

The connection string has the following format:

```
kurrentdb+discover://admin:changeit@cluster.dns.name:2113
```

Here, `cluster.dns.name` is the name of a DNS `A` record that points to all the cluster nodes. Alternatively, you can list the cluster nodes separated by commas instead of using the cluster DNS name:

```
kurrentdb+discover://admin:changeit@node1.dns.name:2113,node2.dns.name:2113,node3.dns.name:2113
```

There are a number of query parameters that can be used in the connection string to instruct the cluster how and where the connection should be established. All query parameters are optional.

| Parameter | Accepted values | Default | Description |
|-----------------------|---------------------------------------------------|----------|------------------------------------------------------------------------------------------------------------------------------------------------|
| `tls` | `true`, `false` | `true` | Use secure connection, set to `false` when connecting to a non-secure server or cluster. |
| `connectionName` | Any string | None | Connection name |
| `maxDiscoverAttempts` | Number | `10` | Number of attempts to discover the cluster. |
| `discoveryInterval` | Number | `100` | Cluster discovery polling interval in milliseconds. |
| `gossipTimeout`       | Number                                            | `5`      | Gossip timeout in seconds; when a gossip call times out, it is retried.                                                                        |
| `nodePreference` | `leader`, `follower`, `random`, `readOnlyReplica` | `leader` | Preferred node role. When creating a client for write operations, always use `leader`. |
| `tlsVerifyCert`       | `true`, `false`                                   | `true`   | In secure mode, set to `false` to skip certificate verification when connecting to a node with an untrusted certificate and no CA file available. Don't use in production. |
| `tlsCaFile` | String, file path | None | Path to the CA file when connecting to a secure cluster with a certificate that's not signed by a trusted CA. |
| `defaultDeadline` | Number | None | Default timeout for client operations, in milliseconds. Most clients allow overriding the deadline per operation. |
| `keepAliveInterval` | Number | `10` | Interval between keep-alive ping calls, in seconds. |
| `keepAliveTimeout` | Number | `10` | Keep-alive ping call timeout, in seconds. |
| `userCertFile` | String, file path | None | User certificate file for X.509 authentication. |
| `userKeyFile` | String, file path | None | Key file for the user certificate used for X.509 authentication. |

When connecting to an insecure instance, specify the `tls=false` parameter. For example, for a node running locally, use `kurrentdb://localhost:2113?tls=false`. Note that usernames and passwords aren't provided there because insecure deployments don't support authentication and authorisation.
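Putting the pieces together, a small helper (hypothetical, not part of the client library) can assemble a connection string from its parts; it only does string formatting, so the parameter names still have to match the table above.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: builds a KurrentDB connection string from its parts.
static string BuildConnectionString(
    string host, int port, string? user = null, string? password = null,
    IDictionary<string, string>? options = null, bool discover = true)
{
    var scheme = discover ? "kurrentdb+discover" : "kurrentdb";
    var credentials = user != null ? $"{user}:{password}@" : "";
    var query = options is { Count: > 0 }
        ? "?" + string.Join("&", options.Select(kv => $"{kv.Key}={kv.Value}"))
        : "";
    return $"{scheme}://{credentials}{host}:{port}{query}";
}
```

For example, `BuildConnectionString("localhost", 2113, options: new Dictionary<string, string> { ["tls"] = "false" }, discover: false)` yields the insecure local string shown above.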

### Creating a client

First, create a client and get it connected to the database.

@[code{createClient}](@grpc:quick-start/Program.cs)

The client instance can be used as a singleton across the whole application; you don't need to explicitly open or close the connection.

### Creating an event

You can write anything to KurrentDB as events. The client needs a byte array as the event payload. Normally, you'd use a serialized object, and it's up to you to choose the serialization method.

::: tip Server-side projections
User-defined server-side projections require events to be serialized in JSON format.

We use JSON for serialization in the documentation examples.
:::

The code snippet below creates an event object instance, serializes it, and adds it as a payload to the `EventData` structure, which the client can then write to the database.

@[code{createEvent}](@grpc:quick-start/Program.cs)

### Appending events

Each event in the database has its own unique identifier (UUID). The database uses it to ensure idempotent writes, but it only works if you specify the stream revision when appending events to the stream.

In the snippet below, we append the event to the stream `some-stream`.

@[code{appendEvents}](@grpc:quick-start/Program.cs)

Here we are appending events without checking if the stream exists or if the stream version matches the expected event version. See more advanced scenarios in the [appending events documentation](./appending-events.md).

### Reading events

Finally, we can read events back from the `some-stream` stream.

@[code{readStream}](@grpc:quick-start/Program.cs)

When you read events from the stream, you get a collection of `ResolvedEvent` structures. The event payload is returned as a byte array and needs to be deserialized. See more advanced scenarios in the [reading events documentation](./reading-events.md).
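As a sketch of that deserialization step (assuming, as in the snippets, that the resolved event exposes its payload as a UTF-8 JSON byte array, e.g. `resolved.Event.Data`), the payload is simulated locally below so the snippet runs without a server.

```csharp
using System.Text.Json;

// Simulate the byte[] payload a ResolvedEvent would carry; in real code
// you would take it from the read result instead.
byte[] payload = JsonSerializer.SerializeToUtf8Bytes(new { OrderId = "order-123", Amount = 42.5 });

// The payload is just UTF-8 JSON bytes, so it parses back directly.
using var doc = JsonDocument.Parse(payload);
string orderId = doc.RootElement.GetProperty("OrderId").GetString()!;
```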
