Catmint is a TypeScript library distributed under the @catmint-fs scope:
| Package | Purpose |
|---|---|
| `@catmint-fs/core` | Virtual filesystem layer over a backing filesystem |
| `@catmint-fs/sqlite-adapter` | SQLite-backed `FsAdapter` for `@catmint-fs/core` |
| `@catmint-fs/git` | Programmatic git operations built on `@catmint-fs/core` layers |
This document covers @catmint-fs/core, @catmint-fs/sqlite-adapter, and @catmint-fs/git.
@catmint-fs/core and @catmint-fs/git are designed to run in both Node.js and browser environments. The public API avoids Node.js-only APIs:
- `Uint8Array` is used instead of `Buffer` for all binary data in the public API and adapter interface. In Node.js, `Buffer` (which extends `Uint8Array`) can be passed anywhere `Uint8Array` is expected.
- Custom `StatResult` and `DirentEntry` types are used instead of `fs.Stats` and `fs.Dirent`. These are plain objects with the same fields and helper methods, not tied to the Node.js `fs` module.
- `ReadableStream` (Web Streams API) is used for `createReadStream` instead of Node.js `Readable`. Both Node.js (18+) and browsers support `ReadableStream` natively.
- `fetch()` (Web API) is used for the built-in HTTP transport. No dependency on Node.js `http`/`https`.
- No dependency on the Node.js `fs`, `path`, `crypto`, or `zlib` modules in the core or git packages themselves. The `LocalAdapter` (which wraps Node.js `fs`) is the only component that requires Node.js — it is a concrete adapter, not part of the portable API surface.
This means @catmint-fs/git works in the browser when paired with a browser-compatible adapter (e.g. a hypothetical catmint-fs-indexeddb package) — enabling use cases like in-browser git clients, web-based IDEs, and offline-capable dev tools.
The @catmint-fs/sqlite-adapter targets Node.js only (native SQLite bindings). The LocalAdapter in @catmint-fs/core also targets Node.js only. These are concrete adapter implementations, not part of the portable API.
| Component | Node.js | Browser |
|---|---|---|
| `@catmint-fs/core` (layer API + `FsAdapter` interface) | Yes | Yes |
| `@catmint-fs/core` (`LocalAdapter`) | Yes | No — wraps Node.js `fs` |
| `@catmint-fs/sqlite-adapter` | Yes | No — native SQLite bindings |
| `@catmint-fs/git` | Yes | Yes |
| `@catmint-fs/git` (HTTP transport) | Yes | Yes — uses `fetch()` |
| `@catmint-fs/git` (SSH transport) | Third-party | No — inherently server-side |
Performing filesystem operations directly against the host is destructive and immediate. There is no built-in way to stage a set of file changes, preview them, and then atomically apply or discard them. This makes it difficult to build tools that need to:
- Generate files speculatively and only commit the results on success.
- Preview a set of mutations before writing them to disk.
- Run transformations in a sandboxed context without risk to the host.
- Compose multiple file operations into a single transaction-like unit.
@catmint-fs/core provides a copy-on-write virtual filesystem layer that sits on top of a backing filesystem. The backing filesystem is accessed through an adapter — a pluggable interface that abstracts the underlying storage. A built-in LocalAdapter wraps the host filesystem via Node.js fs, but any storage backend (S3, SFTP, WebDAV, IndexedDB, in-memory, etc.) can be supported by implementing the adapter interface. The core layer API and adapter interface use only platform-neutral types (Uint8Array, ReadableStream, plain objects) so they work in both Node.js and browser environments.
The user creates a virtual layer, performs arbitrary read/write/delete operations against it, and then either applies the changes to the backing filesystem or disposes of them.
All operations go through the virtual layer first. Reads fall through to the backing adapter when no virtual override exists. Writes, renames, and deletes are captured in memory (or a temp-backed store) and never touch the backing filesystem until explicitly applied.
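The read-through/write-capture model can be sketched with a toy in-memory overlay. This is an illustration of the idea only, not @catmint-fs internals; `ToyLayer` and its string-based store are invented for the example.

```ts
// Toy copy-on-write overlay (illustrative only, not the library's implementation).
// Reads fall through to the backing store unless a virtual override exists;
// writes and deletes are captured until apply() flushes them.
type Override = { kind: "write"; content: string } | { kind: "delete" };

class ToyLayer {
  private ledger = new Map<string, Override>();
  constructor(private backing: Map<string, string>) {}

  read(path: string): string | undefined {
    const entry = this.ledger.get(path);
    if (entry === undefined) return this.backing.get(path); // fall through to backing
    return entry.kind === "write" ? entry.content : undefined; // virtually deleted
  }

  write(path: string, content: string): void {
    this.ledger.set(path, { kind: "write", content }); // captured; backing untouched
  }

  delete(path: string): void {
    this.ledger.set(path, { kind: "delete" });
  }

  apply(): void {
    // Flush all captured changes to the backing store, then reset the ledger.
    for (const [path, entry] of this.ledger) {
      if (entry.kind === "write") this.backing.set(path, entry.content);
      else this.backing.delete(path);
    }
    this.ledger.clear();
  }
}
```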
- Provide a programmatic TypeScript API to declare and operate on a virtual filesystem layer.
- Reads should transparently resolve from the virtual layer, falling back to the backing adapter.
- Writes, deletes, and renames are captured in the virtual layer without modifying the backing filesystem.
- The backing filesystem's permission system is respected — operations that would fail due to permissions should fail in the virtual layer too.
- Accumulated changes can be atomically applied to the backing filesystem.
- Accumulated changes can be disposed of with no side effects.
- The API surface should feel familiar to anyone who has used Node's `fs` module.
- The backing filesystem is pluggable via an adapter interface, enabling support for remote and custom storage backends.
- The core layer API, adapter interface, and git package must be browser-compatible — no Node.js-only APIs in the public surface.
- Implementing a full POSIX filesystem (fifos, sockets, etc.).
- Providing a FUSE mount or kernel-level virtual filesystem.
- Shipping remote adapter implementations in `@catmint-fs/core` itself (the core package provides the interface and `LocalAdapter`; other adapters are separate packages).
- Shipping adapters beyond `LocalAdapter` and `SqliteAdapter` under the `@catmint-fs` scope (community adapters like S3, SFTP, etc. are third-party).
- Providing `appendFile`, `copyFile`, `link` (hard links), `utimes`, or other less-common `fs` operations. These can be added in future versions if needed.
- The `@catmint-fs/git` package's non-goals (covered in its own section).
An FsAdapter is the interface between the layer and the underlying storage backend. It defines the low-level operations (read, write, delete, stat, readdir, symlink, chmod, chown, etc.) that the layer delegates to when it needs to interact with the backing filesystem. @catmint-fs/core ships with a LocalAdapter that wraps Node.js fs. Third-party adapters can implement the same interface to support remote or custom backends.
A Layer is the central abstraction. It represents a virtual filesystem overlay backed by an adapter. All operations performed through a layer are isolated from the backing filesystem until explicitly applied.
Internally, a layer maintains a ledger of changes (creates, updates, deletes, renames, symlink operations, permission/ownership changes). This ledger is the source of truth for what the virtual state looks like relative to the backing filesystem's baseline.
When reading a file or listing a directory that has not been modified in the virtual layer, the layer delegates to the backing adapter. This means the virtual layer always reflects the backing filesystem's state plus any local overrides.
Before capturing a virtual write or delete, the layer asks the adapter to check permissions for the target path. If the operation would not be permitted on the backing filesystem, the virtual operation is rejected with a permission error. This ensures that applying changes later will not fail due to permissions that were not validated upfront. Adapters that do not support permissions (e.g. an in-memory adapter) can opt out by always returning success from permission checks.
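As a rough illustration of the kind of check a POSIX-style adapter might perform — a sketch only, not `LocalAdapter`'s actual logic, which would also consider the group and other permission triads and the calling process's uid/gid:

```ts
type PermissionOp = "read" | "write" | "execute";

// Permission bit within a POSIX triad: r = 4, w = 2, x = 1.
const BIT: Record<PermissionOp, number> = { read: 4, write: 2, execute: 1 };

// Check only the owner triad (bits 6-8) of a mode. A real adapter would select
// the triad based on the caller's uid/gid before testing the bit.
function ownerPermits(mode: number, op: PermissionOp): boolean {
  return ((mode >> 6) & BIT[op]) !== 0;
}
```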
```ts
import { createLayer, LocalAdapter } from "@catmint-fs/core";

// Local filesystem (default) — no adapter needed
const layer = await createLayer({
  root: "/path/to/project",
});

// Explicit adapter
const layer = await createLayer({
  root: "/path/to/project",
  adapter: new LocalAdapter(),
});

// Remote / custom adapter
import { S3Adapter } from "catmint-fs-s3"; // third-party package

const layer = await createLayer({
  root: "/bucket/prefix",
  adapter: new S3Adapter({ bucket: "my-bucket", region: "us-east-1" }),
});

// SQLite adapter
import { SqliteAdapter } from "@catmint-fs/sqlite-adapter";

const layer = await createLayer({
  root: "/",
  adapter: new SqliteAdapter({ database: "fs.sqlite" }),
});
```

| Option | Type | Default | Description |
|---|---|---|---|
| `root` | `string` | (required) | Root path for the layer. Must be an absolute path — relative paths are rejected with `EINVAL`. For `LocalAdapter`, an absolute host path. For remote adapters, the meaning is adapter-defined (e.g. a bucket prefix). |
| `adapter` | `FsAdapter` | `new LocalAdapter()` | The backing filesystem adapter. Defaults to the local filesystem. |
The layer exposes an fs-like interface for common operations. All paths are resolved relative to the layer's root.
```ts
// Reading (falls through to host if unmodified)
const content = await layer.readFile("src/index.ts");
const stream = layer.createReadStream("src/large-file.bin");
const entries = await layer.readdir("src");

const info = await layer.stat("src/index.ts");
// stat returns host metadata merged with any virtual overrides
// (e.g. mode from chmod, uid/gid from chown)

const linkInfo = await layer.lstat("src/link.ts");
// lstat returns metadata about the symlink itself, not its target

const target = await layer.readlink("src/link.ts");
// readlink returns the symlink target path

// Writing (captured in virtual layer — accepts string or Uint8Array)
await layer.writeFile("src/new-file.ts", "export default 42;");
await layer.writeFile("src/binary.bin", new Uint8Array([0x00, 0x01]));
await layer.writeFile("src/script.sh", "#!/bin/sh", { mode: 0o755, uid, gid });
await layer.mkdir("src/utils");
await layer.mkdir("src/deep/nested/dir", { recursive: true });
await layer.mkdir("src/private", { mode: 0o700, uid, gid });

// Symlinks (captured in virtual layer)
await layer.symlink("src/index.ts", "src/link.ts");
// Creates a symlink at src/link.ts pointing to src/index.ts

// Deleting (captured in virtual layer)
await layer.rm("src/old-file.ts");
await layer.rm("src/deprecated", { recursive: true }); // removes directory and contents
await layer.rm("src/maybe.ts", { force: true }); // no error if path does not exist
await layer.rmdir("src/empty-dir"); // only removes empty directories

// Renaming / Moving (captured in virtual layer)
await layer.rename("src/old-name.ts", "src/new-name.ts");

// Permissions (captured in virtual layer)
await layer.chmod("src/script.sh", 0o755);
await layer.chown("src/script.sh", uid, gid);
await layer.lchown("src/link.ts", uid, gid); // changes ownership of symlink itself

// Ownership
const owner = await layer.getOwner("src/script.sh");
// owner: { uid: number; gid: number }

// Checking existence
const exists = await layer.exists("src/index.ts");
```

`getOwner` returns the effective owner — the virtual override if one exists via `chown`, otherwise the host value. This is a convenience method; the same information is available through `stat()`.
All file operation methods are async and return promises, except createReadStream which returns a ReadableStream synchronously. dispose() and reset() are also synchronous. There are no synchronous variants of the async methods.
Return types match their Node.js fs counterparts (with platform-neutral types for browser compatibility):
| Method | Return type | Node.js equivalent |
|---|---|---|
| `readFile` | `Promise<Uint8Array>` | `fs.promises.readFile` |
| `createReadStream` | `ReadableStream<Uint8Array>` | `fs.createReadStream` (Web Streams API equivalent) |
| `readdir` | `Promise<DirentEntry[]>` | `fs.promises.readdir` with `{ withFileTypes: true }` |
| `stat` | `Promise<StatResult>` | `fs.promises.stat` |
| `lstat` | `Promise<StatResult>` | `fs.promises.lstat` |
| `readlink` | `Promise<string>` | `fs.promises.readlink` |
| `exists` | `Promise<boolean>` | (no direct equivalent — `fs.existsSync` is sync-only) |
| `getOwner` | `Promise<{ uid: number; gid: number }>` | (convenience — subset of `StatResult`) |
| `writeFile` | `Promise<void>` | `fs.promises.writeFile` |
| `mkdir` | `Promise<void>` | `fs.promises.mkdir` |
| `rm` | `Promise<void>` | `fs.promises.rm` |
| `rmdir` | `Promise<void>` | `fs.promises.rmdir` |
| `rename` | `Promise<void>` | `fs.promises.rename` |
| `symlink` | `Promise<void>` | `fs.promises.symlink` |
| `chmod` | `Promise<void>` | `fs.promises.chmod` |
| `chown` | `Promise<void>` | `fs.promises.chown` |
| `lchown` | `Promise<void>` | `fs.promises.lchown` |
| `getChanges` | `ChangeEntry[]` | (no equivalent) |
| `getChangeDetail` | `Promise<ChangeDetail \| null>` | (no equivalent) |
| `apply` | `Promise<ApplyResult>` | (no equivalent) |
stat and lstat return StatResult objects — plain objects with numeric fields and type-check helper methods (isFile(), isDirectory(), isSymbolicLink()). See Supporting Types for the full definition. In Node.js environments, these are structurally compatible with fs.Stats.
- `readFile` always returns `Promise<Uint8Array>`. Callers that need a string should decode the result (e.g. `new TextDecoder().decode(data)` in browsers, or `Buffer.from(data).toString(encoding)` in Node.js). There is no encoding parameter — this keeps the API simple and avoids ambiguity.
- `writeFile` accepts `string | Uint8Array` at the layer level. When a string is passed, the layer converts it to a `Uint8Array` using UTF-8 encoding before storing. The adapter interface always receives `Uint8Array`.
- `createReadStream` returns a `ReadableStream<Uint8Array>` (Web Streams API). For files that exist only in the virtual layer (in memory), the stream is created from the in-memory buffer. This is not a true streaming read from disk in that case, but the interface is consistent. `ReadableStream` is supported natively in Node.js 18+ and all modern browsers.
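For example, decoding a `readFile`-style result to text uses only standard Web APIs (the `bytes` value here stands in for a layer result):

```ts
// readFile yields Uint8Array; decode explicitly when you need text.
// TextEncoder/TextDecoder are available in both Node.js 18+ and browsers.
const bytes: Uint8Array = new TextEncoder().encode("export default 42;");
const text: string = new TextDecoder().decode(bytes);
// text === "export default 42;"
```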
```ts
const changes: ChangeEntry[] = layer.getChanges();
// Returns an array of change entries:
// [
//   { type: "create", entryType: "file", path: "src/new-file.ts" },
//   { type: "create", entryType: "directory", path: "src/utils" },
//   { type: "update", path: "src/index.ts" },
//   { type: "delete", entryType: "file", path: "src/old-file.ts" },
//   { type: "delete", entryType: "symlink", path: "src/old-link.ts" },
//   { type: "rename", from: "src/old-name.ts", to: "src/new-name.ts" },
//   { type: "chmod", path: "src/script.sh", mode: 0o755 },
//   { type: "chown", path: "src/script.sh", uid: 1000, gid: 1000 },
//   { type: "symlink", path: "src/link.ts", target: "src/index.ts" },
// ]
```

`getChanges()` is a lightweight summary — it tells you what changed but not the actual content. To inspect the full content and metadata for a specific change, use `getChangeDetail()`:
```ts
const detail = await layer.getChangeDetail("src/new-file.ts");
// detail: ChangeDetail | null (null if the path has no pending change)
//
// For a file create or update:
// {
//   type: "create",
//   entryType: "file",
//   path: "src/new-file.ts",
//   content: Uint8Array,
//   mode: 0o644,
//   uid: 1000,
//   gid: 1000,
// }
//
// For a directory create:
// {
//   type: "create",
//   entryType: "directory",
//   path: "src/utils",
//   mode: 0o755,
//   uid: 1000,
//   gid: 1000,
// }
//
// For a delete:
// { type: "delete", entryType: "file", path: "src/old-file.ts" }
//
// For a rename:
// { type: "rename", from: "src/old-name.ts", to: "src/new-name.ts" }
//
// For a symlink:
// { type: "symlink", path: "src/link.ts", target: "src/index.ts" }
```

`getChangeDetail` returns the virtual content and metadata as they would be applied. For paths with multiple ledger entries (e.g. a file that was created and then chmod'd), the detail reflects the merged state — the content from the write and the mode from the chmod, combined. This avoids callers having to reconstruct the final state from individual ledger entries.
```ts
// Best-effort (default) — applies as many changes as possible,
// collects errors, and continues.
const result = await layer.apply();
// result: ApplyResult — { applied: number; errors: ApplyError[] }

// Transactional — all changes succeed or none do.
const result = await layer.apply({ transaction: true });
// On failure, all changes already written to the host during this
// apply call are rolled back and an error is thrown.
```

| Option | Type | Default | Description |
|---|---|---|---|
| `transaction` | `boolean` | `false` | When true, apply is atomic — any failure rolls back all changes made during this apply call. |
Applies each change sequentially. If an individual operation fails, the error is recorded in result.errors and the remaining operations continue. The caller inspects errors and decides how to proceed. The layer is reset on completion regardless of errors.
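A caller might surface best-effort failures with a small reporting helper. The `ApplyResult`/`ApplyError` shapes follow this spec (narrowed to the fields used); `summarizeApply` itself is a hypothetical helper, not part of the library:

```ts
// Narrowed shapes from the spec's Supporting Types, sufficient for reporting.
interface ApplyError { change: { path?: string; from?: string }; error: Error }
interface ApplyResult { applied: number; errors: ApplyError[] }

// Hypothetical helper: format a one-line summary of an apply() result.
function summarizeApply(result: ApplyResult): string {
  if (result.errors.length === 0) return `applied ${result.applied} change(s)`;
  const failed = result.errors
    .map((e) => e.change.path ?? e.change.from ?? "<unknown>")
    .join(", ");
  return `applied ${result.applied} change(s); failed: ${failed}`;
}
```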
Changes are applied in a deterministic order that respects dependencies:
- Directory creates (shallowest first) — ensures parent directories exist before files are written into them.
- File and symlink creates, and file updates — in ledger insertion order.
- Renames — in ledger insertion order.
- Permission and ownership changes (`chmod`, `chown`, `lchown`) — in ledger insertion order.
- File and symlink deletes — in ledger insertion order.
- Directory deletes (deepest first) — ensures children are removed before parents.
This ordering prevents failures like writing a file before its parent directory exists, or deleting a directory before its contents.
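The depth-sensitive portions of that ordering amount to a sort by path depth, shallowest-first for directory creates and deepest-first for directory deletes. A sketch of the idea (not the library's exact comparator):

```ts
// Sort directory paths by depth: shallowest-first for creates (parents before
// children), deepest-first for deletes (children before parents).
function byDepth(paths: string[], deepestFirst = false): string[] {
  const depth = (p: string) => p.split("/").filter(Boolean).length;
  const sorted = [...paths].sort((a, b) => depth(a) - depth(b));
  return deepestFirst ? sorted.reverse() : sorted;
}
```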
Before writing, the apply process captures the original state (content, metadata, or absence) of every host path that will be touched. Changes are then applied sequentially. If any operation fails:
- All previously applied changes in this call are reverted — created files are deleted, updated files are restored to their prior content, deleted files are restored, renames are reversed.
- The layer's pending changes are preserved (not reset), so the caller can fix the issue and retry.
- A `TransactionError` is thrown containing the root cause and any errors encountered during rollback.
This makes apply({ transaction: true }) an all-or-nothing operation from the host filesystem's perspective.
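The capture-then-rollback approach can be illustrated with a toy undo journal over a key-value store. This is a sketch of the idea at the smallest possible scale, not the adapter-level implementation; the simulated failure condition is invented for the example:

```ts
// Toy transactional apply over a Map-backed store (illustrative only).
// Before each write, record an undo closure; on any failure, undo in
// reverse order so earlier writes are reverted.
function applyTransactional(
  store: Map<string, string>,
  writes: Array<{ path: string; content: string }>,
): void {
  const journal: Array<() => void> = [];
  try {
    for (const { path, content } of writes) {
      const prev = store.get(path);
      journal.push(
        prev === undefined ? () => store.delete(path) : () => store.set(path, prev),
      );
      if (path.length === 0) throw new Error("invalid path"); // simulated failure
      store.set(path, content);
    }
  } catch (err) {
    for (const undo of journal.reverse()) undo(); // roll back applied changes
    throw err;
  }
}
```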
```ts
layer.dispose();
```

Synchronously discards all pending changes and releases any resources held by the layer (in-memory buffers, ledger state). After disposal, calling any method on the layer throws `DISPOSED`. This does not close the adapter — adapter lifecycle is owned by the caller.
```ts
layer.reset();
```

Synchronously discards all pending changes but keeps the layer alive for further use. The layer's connection to the adapter and root configuration are preserved.
| Scenario | Behavior |
|---|---|
| Write to a file that exists on host | Recorded as update. Read returns virtual content. |
| Write to a file that does not exist on host | Recorded as create. Read returns virtual content. |
| Delete a file that exists on host | Recorded as delete. Read throws `ENOENT`. `readdir` omits it. |
| Delete a file only in the virtual layer | Change entry removed. Falls back to host state. |
| Read a file not modified in layer | Delegated to host. |
| Read a file modified in layer | Returns virtual content. |
| `readdir` on a directory with mixed changes | Merges host listing with virtual creates/deletes. |
| Write to a path without host write permission | Throws `EACCES` immediately (does not defer to apply). |
| `chmod` on a file/directory that exists on host | Recorded as chmod. `stat` returns the virtual mode. |
| `chmod` on a file only in the virtual layer | Updates the virtual metadata. |
| `chown` on a file/directory that exists on host | Recorded as chown. `stat` returns the virtual uid/gid. |
| `chown` on a file only in the virtual layer | Updates the virtual metadata. |
| `chmod`/`chown` after the target is deleted in the layer | Throws `ENOENT`. |
A rename from A to B is recorded as a single rename entry in the ledger, but has implications for both paths:
| Scenario | Behavior |
|---|---|
| Read from the source path after rename | Throws `ENOENT`. The source is treated as deleted in the virtual layer. |
| Read from the destination path after rename | Returns the file content (the content that was at the source). |
| Write to the source path after rename | Treated as a new create — the rename and the new write are independent changes. |
| Write to the destination path after rename | Overwrites the renamed content. The rename entry is replaced with a delete of the source and a create/update of the destination. |
| Delete the destination after rename | Both the source and destination are effectively deleted. Recorded as a delete of the source (if it existed on host) and the rename is removed. |
| Rename to a path that already exists (host or virtual) | The existing file at the destination is overwritten, matching POSIX `rename(2)` behaviour. |
| `readdir` on the source's parent directory | Omits the source file. |
| `readdir` on the destination's parent directory | Includes the destination file. |
Symlinks are first-class entries in the virtual layer. The layer supports creating, reading, and deleting symlinks, and correctly distinguishes between operations on the symlink itself versus its target.
```ts
await layer.symlink(target, path);
```

Creates a virtual symlink at `path` pointing to `target`. The target is stored as-is (it may be relative or absolute) and is not validated at creation time — the target does not need to exist. This matches POSIX `symlink(2)` behaviour.
| Method | Behaviour |
|---|---|
| `readFile(path)` | If `path` is a symlink, follows it and reads the target's content. If the target does not exist, throws `ENOENT`. |
| `stat(path)` | Follows symlinks. Returns metadata about the target. |
| `lstat(path)` | Does not follow symlinks. Returns metadata about the symlink itself (type will be symlink). |
| `readlink(path)` | Returns the symlink's target path. Throws `EINVAL` if the path is not a symlink. |
| `exists(path)` | Follows symlinks. Returns true only if the target exists. |
| `readdir(path)` | Lists symlinks as entries. Does not follow them to list their target's contents. |
| Scenario | Behaviour |
|---|---|
| `writeFile` to a symlink path | Follows the symlink and writes to the target (the symlink is not replaced). |
| `rm` on a symlink path | Removes the symlink itself, not the target. |
| `rename` on a symlink path | Renames the symlink itself, not the target. The symlink's target value is preserved. |
| `chmod`/`chown` on a symlink path | Follows the symlink and modifies the target's permissions/ownership (matching default POSIX behaviour). Use `lchown` to change ownership of the symlink itself. |
| `symlink` to a path that already exists | Throws `EEXIST`. |
```ts
await layer.lchown("src/link.ts", uid, gid);
```

Changes ownership of the symlink itself, not its target. This is the only l-prefixed mutating operation supported (matching Node.js `fs.lchown`).
A symlink whose target does not exist (in the virtual layer or on the host) is a dangling symlink. The layer handles these as follows:
| Operation | Behaviour |
|---|---|
| `lstat` | Succeeds — returns symlink metadata. |
| `readlink` | Succeeds — returns the target path. |
| `stat` | Throws `ENOENT` (target does not exist). |
| `readFile` | Throws `ENOENT` (target does not exist). |
| `exists` | Returns false. |
| `rm` | Succeeds — removes the dangling symlink. |
The layer follows symlink chains (symlink pointing to another symlink) up to a maximum depth. If the chain exceeds the limit, ELOOP is thrown. The default limit matches the OS limit (typically 40 on Linux, 32 on macOS).
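Depth-limited chain resolution can be sketched over a simple map of symlink paths to targets (an illustration of the rule, not the layer's internal resolver):

```ts
// Toy symlink-chain resolution with a depth limit (illustrative only).
// `links` maps a symlink path to its target; paths absent from the map are
// treated as regular entries.
function resolveChain(links: Map<string, string>, start: string, maxDepth = 40): string {
  let current = start;
  for (let hops = 0; hops <= maxDepth; hops++) {
    const target = links.get(current);
    if (target === undefined) return current; // not a symlink: fully resolved
    current = target; // follow one hop of the chain
  }
  // Chain exceeded the limit: report ELOOP as an errno-style error.
  throw Object.assign(new Error(`too many levels of symbolic links: ${start}`), {
    code: "ELOOP",
  });
}
```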
The layer defers to the host filesystem for case sensitivity. It does not impose its own casing rules.
- On case-sensitive filesystems (e.g. Linux ext4), `File.txt` and `file.txt` are distinct paths. The layer treats them as separate entries.
- On case-insensitive filesystems (e.g. macOS APFS default, Windows NTFS), `File.txt` and `file.txt` refer to the same file. The layer respects this — a write to `File.txt` followed by a read of `file.txt` returns the written content, because the host considers them the same path.
At layer creation time, @catmint-fs/core detects whether the host filesystem at root is case-sensitive or case-insensitive and normalises internal path lookups accordingly. This ensures that the virtual layer's behaviour is consistent with what apply() will encounter on the host.
| Scenario | Case-sensitive FS | Case-insensitive FS |
|---|---|---|
| Write `Foo.txt`, read `foo.txt` | `ENOENT` (different files) | Returns written content (same file) |
| Write `Foo.txt`, write `foo.txt` | Two separate files in ledger | Second write overwrites the first |
| Rename `Foo.txt` to `foo.txt` | Standard rename (two distinct paths) | Case-only rename (updates the directory entry's casing) |
| `readdir` after writing `Foo.txt` | Lists `Foo.txt` | Lists `Foo.txt` (preserves the casing used at write time) |
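One plausible way to normalise lookups while preserving write-time casing is a map keyed on a folded form of the path. This is a toy sketch of that idea, not the ledger's actual data structure (real case folding is also more involved than `toLowerCase`, e.g. Unicode normalisation on APFS):

```ts
// Toy casing-aware lookup table (illustrative only): on case-insensitive
// backends, keys are folded for lookups while the first-written casing is
// preserved for listings.
class CasingMap<V> {
  private entries = new Map<string, { name: string; value: V }>();
  constructor(private caseSensitive: boolean) {}

  private key(path: string): string {
    return this.caseSensitive ? path : path.toLowerCase();
  }

  set(path: string, value: V): void {
    const k = this.key(path);
    const existing = this.entries.get(k);
    // Preserve the casing used at first write time for readdir-style listings.
    this.entries.set(k, { name: existing?.name ?? path, value });
  }

  get(path: string): V | undefined {
    return this.entries.get(this.key(path))?.value;
  }

  names(): string[] {
    return [...this.entries.values()].map((e) => e.name);
  }
}
```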
The adapter system decouples the layer from any specific storage backend. Every interaction between the layer and the backing filesystem goes through the FsAdapter interface.
```ts
interface FsAdapter {
  // Reading
  readFile(path: string): Promise<Uint8Array>;
  createReadStream(path: string): ReadableStream<Uint8Array>;
  readdir(path: string): Promise<DirentEntry[]>;
  stat(path: string): Promise<StatResult>;
  lstat(path: string): Promise<StatResult>;
  readlink(path: string): Promise<string>;
  exists(path: string): Promise<boolean>;

  // Writing
  writeFile(path: string, content: Uint8Array, options?: WriteOptions): Promise<void>;
  mkdir(path: string, options?: MkdirOptions): Promise<void>;

  // Deleting
  rm(path: string, options?: RmOptions): Promise<void>;
  rmdir(path: string): Promise<void>;

  // Renaming
  rename(from: string, to: string): Promise<void>;

  // Symlinks
  symlink(target: string, path: string): Promise<void>;

  // Permissions & ownership
  chmod(path: string, mode: number): Promise<void>;
  chown(path: string, uid: number, gid: number): Promise<void>;
  lchown(path: string, uid: number, gid: number): Promise<void>;
  checkPermission(path: string, operation: PermissionOp): Promise<void>;

  // Lifecycle
  initialize?(root: string): Promise<void>;

  // Capabilities
  capabilities(): AdapterCapabilities;
}
```

All paths passed to adapter methods are resolved relative to `root` by the layer before delegation. The adapter receives fully-qualified paths appropriate to its backend.
```ts
// Options for writeFile
interface WriteOptions {
  mode?: number; // POSIX permission bits (e.g. 0o644). Default: 0o666.
  uid?: number; // Owner user ID.
  gid?: number; // Owner group ID.
}

// Options for mkdir
interface MkdirOptions {
  recursive?: boolean; // Create parent directories as needed. Default: false.
  mode?: number; // POSIX permission bits (e.g. 0o755). Default: 0o777.
  uid?: number; // Owner user ID.
  gid?: number; // Owner group ID.
}

// Options for rm
interface RmOptions {
  recursive?: boolean; // Remove directories and their contents. Default: false.
  force?: boolean; // Suppress ENOENT errors. Default: false.
}

// Operation types for permission checking
type PermissionOp = "read" | "write" | "execute";

// Directory entry — platform-neutral equivalent of Node.js fs.Dirent
interface DirentEntry {
  name: string; // Entry name (file/directory/symlink name within its parent)
  isFile(): boolean; // Whether this entry is a regular file
  isDirectory(): boolean; // Whether this entry is a directory
  isSymbolicLink(): boolean; // Whether this entry is a symbolic link
}

// Stat result — platform-neutral equivalent of Node.js fs.Stats
// In Node.js, the LocalAdapter returns real fs.Stats objects (which satisfy this interface).
// Custom adapters return plain objects conforming to this interface.
interface StatResult {
  mode: number; // POSIX permission bits and file type
  uid: number; // Owner user ID
  gid: number; // Owner group ID
  size: number; // File size in bytes
  atimeMs: number; // Last access time (Unix ms)
  mtimeMs: number; // Last modification time (Unix ms)
  ctimeMs: number; // Last status change time (Unix ms)
  birthtimeMs: number; // Creation time (Unix ms)
  isFile(): boolean; // Whether this is a regular file
  isDirectory(): boolean; // Whether this is a directory
  isSymbolicLink(): boolean; // Whether this is a symbolic link
}

// Entry type — what kind of filesystem entry a change applies to
type EntryType = "file" | "directory" | "symlink";

// A single entry in the change ledger
type ChangeEntry =
  | { type: "create"; entryType: "file" | "directory"; path: string }
  | { type: "update"; path: string }
  | { type: "delete"; entryType: EntryType; path: string }
  | { type: "rename"; from: string; to: string }
  | { type: "chmod"; path: string; mode: number }
  | { type: "chown"; path: string; uid: number; gid: number }
  | { type: "symlink"; path: string; target: string };

// Full detail for a pending change, including content and metadata.
// Returned by getChangeDetail(). Represents the merged final state
// for a path (e.g. a create + chmod = a single detail with content and mode).
type ChangeDetail =
  | { type: "create"; entryType: "file"; path: string; content: Uint8Array; mode: number; uid: number; gid: number }
  | { type: "create"; entryType: "directory"; path: string; mode: number; uid: number; gid: number }
  | { type: "update"; path: string; content: Uint8Array; mode: number; uid: number; gid: number }
  | { type: "delete"; entryType: EntryType; path: string }
  | { type: "rename"; from: string; to: string }
  | { type: "chmod"; path: string; mode: number }
  | { type: "chown"; path: string; uid: number; gid: number }
  | { type: "symlink"; path: string; target: string };

// Error reported during best-effort apply
interface ApplyError {
  change: ChangeEntry; // The change that failed
  error: Error; // The underlying error
}

// Result of a best-effort apply
interface ApplyResult {
  applied: number; // Number of changes successfully applied
  errors: ApplyError[]; // Changes that failed (empty on full success)
}

// Error thrown during transactional apply
class TransactionError extends Error {
  /** The change that triggered the failure. */
  cause: ChangeEntry;
  /** The underlying error from the failed operation. */
  sourceError: Error;
  /** Errors encountered while rolling back already-applied changes (empty if rollback succeeded). */
  rollbackErrors: Array<{ change: ChangeEntry; error: Error }>;
}
```

Adapters declare what they support so the layer can adapt its behaviour:
```ts
interface AdapterCapabilities {
  permissions: boolean; // Whether the backend has a permission system
  symlinks: boolean; // Whether the backend supports symlinks
  caseSensitive: boolean; // Whether the backend is case-sensitive
}
```

| Capability | When false |
|---|---|
| `permissions` | The layer skips permission checks (all operations are permitted). `chmod`, `chown`, and `lchown` still record changes in the ledger but no pre-validation occurs. |
| `symlinks` | `symlink`, `readlink`, `lstat`, and `lchown` throw `ENOSYS` (operation not supported). |
| `caseSensitive` | The layer normalises paths for internal lookups to match case-insensitive semantics. |
The default adapter. Wraps Node.js fs/promises and operates on the host filesystem. This adapter is Node.js-only — it cannot be used in browser environments. Most capabilities are static (permissions: true, symlinks: true), but caseSensitive is detected at createLayer() time by probing the filesystem at root. The layer calls an internal initialize(root) hook on the adapter after construction, allowing the adapter to perform root-specific detection before capabilities() is read.
```ts
import { LocalAdapter } from "@catmint-fs/core";

const adapter = new LocalAdapter();
// After createLayer({ root: "/some/path", adapter }):
// capabilities: { permissions: true, symlinks: true, caseSensitive: <detected from root> }
```

A custom adapter implements the `FsAdapter` interface. Example skeleton for an S3-backed adapter:
```ts
import type { FsAdapter, AdapterCapabilities } from "@catmint-fs/core";

class S3Adapter implements FsAdapter {
  constructor(private config: { bucket: string; region: string }) {}

  capabilities(): AdapterCapabilities {
    return {
      permissions: false, // S3 uses IAM, not POSIX permissions
      symlinks: false, // S3 has no symlink concept
      caseSensitive: true, // S3 keys are case-sensitive
    };
  }

  async readFile(path: string): Promise<Uint8Array> {
    // GetObject from S3
  }

  async writeFile(path: string, content: Uint8Array): Promise<void> {
    // PutObject to S3
  }

  // ... remaining methods
}
```

Adapters that do not support a particular operation (e.g. `symlink` on S3) should throw an error with the code `ENOSYS`. The layer will also enforce this based on declared capabilities, but the adapter should be defensive.
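A defensive adapter might construct such errors with a small helper. The helper below is hypothetical, following the Node.js errno convention of an `Error` carrying a `code` property:

```ts
// Hypothetical helper: build an errno-style error with code "ENOSYS",
// mirroring how Node.js fs errors expose a string code.
function enosys(operation: string): Error & { code: string } {
  return Object.assign(
    new Error(`${operation} is not supported by this adapter`),
    { code: "ENOSYS" },
  );
}

// e.g. inside an S3-backed adapter:
// async symlink(): Promise<void> { throw enosys("symlink"); }
```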
| Event | Adapter responsibility |
|---|---|
| `createLayer()` | The layer calls `initialize(root)` if the method exists, then calls `capabilities()` once. Both results are cached for the layer's lifetime. `initialize` is optional — adapters that don't need root-specific setup can omit it. |
| `layer.apply()` | The layer calls the adapter's write/delete/rename/chmod/chown methods to apply changes. |
| `layer.dispose()` | The layer releases its reference to the adapter. If the adapter holds resources (connections, file handles), it should expose its own `close()` or `dispose()` method for the caller to invoke separately. The layer does not call this automatically — adapter lifecycle is owned by the caller. |
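Since the caller owns the adapter's lifecycle, a `try`/`finally` wrapper is a natural pattern for adapters that expose a `close()`. This is a generic sketch; the `Closeable` interface and `withAdapter` helper are invented for the example:

```ts
// Hypothetical helper: run work against a closeable adapter and guarantee
// that close() is called afterwards, even if the work throws.
interface Closeable {
  close(): Promise<void>;
}

async function withAdapter<A extends Closeable, T>(
  adapter: A,
  fn: (adapter: A) => Promise<T>,
): Promise<T> {
  try {
    return await fn(adapter);
  } finally {
    await adapter.close(); // caller-owned cleanup, not done by the layer
  }
}
```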
@catmint-fs/sqlite-adapter is a first-party FsAdapter implementation that stores filesystem content and metadata in a SQLite database. It is published as a separate package (@catmint-fs/sqlite-adapter) with a peer dependency on @catmint-fs/core.
| Use case | Benefit |
|---|---|
| Portable snapshots | An entire filesystem tree can be captured in a single .sqlite file — easy to copy, archive, or embed. |
| Offline / embedded tooling | No network dependency. Works anywhere Node.js runs. |
| Testing | Deterministic, isolated filesystem state without touching disk. Can be reset by deleting the database or using an in-memory SQLite instance. |
| Persistence across sessions | Unlike the in-memory virtual layer, a SQLite-backed layer survives process restarts. The database is the source of truth. |
| Transactional writes | SQLite's own transaction support makes apply() in transaction mode reliable — the adapter can wrap the entire apply in a single SQLite transaction. |
import { createLayer } from "@catmint-fs/core";
import { SqliteAdapter } from "@catmint-fs/sqlite-adapter";
// File-backed database
const adapter = new SqliteAdapter({
database: "/path/to/fs.sqlite",
});
// In-memory database (useful for tests):
// const adapter = new SqliteAdapter({ database: ":memory:" });
const layer = await createLayer({
root: "/",
adapter,
});
// Use the layer as normal
await layer.writeFile("hello.txt", "world");
await layer.apply();
// When done, close the adapter to release the database connection
await adapter.close();

| Option | Type | Default | Description |
|---|---|---|---|
| database | string | (required) | Path to the SQLite database file, or ":memory:" for an in-memory instance. |
| caseSensitive | boolean | true | Whether path lookups are case-sensitive. |
The adapter manages two tables. The schema is created automatically on first use (or when opening a new database).
Stores file and directory entries.
| Column | Type | Description |
|---|---|---|
| path | TEXT PRIMARY KEY | Absolute path within the virtual filesystem. |
| type | TEXT | Entry type: file, directory, or symlink. |
| content | BLOB | File content. NULL for directories and symlinks. |
| target | TEXT | Symlink target path. NULL for files and directories. |
| mode | INTEGER | POSIX permission bits (e.g. 0o755). |
| uid | INTEGER | Owner user ID. |
| gid | INTEGER | Owner group ID. |
| size | INTEGER | File size in bytes. Computed from content. |
| created_at | INTEGER | Creation timestamp (Unix ms). |
| modified_at | INTEGER | Last modification timestamp (Unix ms). |
Stores adapter-level metadata.
| Column | Type | Description |
|---|---|---|
| key | TEXT PRIMARY KEY | Metadata key. |
| value | TEXT | Metadata value. |
Reserved keys:
| Key | Purpose |
|---|---|
| schema_version | Tracks the schema version for future migrations. |
| case_sensitive | Records whether the database was created with case-sensitive or case-insensitive path semantics. |
adapter.capabilities();
// {
// permissions: true, // Mode, uid, gid are stored and checked
// symlinks: true, // Symlink entries are supported
// caseSensitive: <config> // Matches the caseSensitive option
// }

The SQLite adapter supports the full permission model. Permission checks are performed against the stored mode, uid, and gid columns — there is no underlying OS filesystem to defer to, so the adapter is the authority.
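A check against stored mode/uid/gid follows the usual POSIX rule. A minimal sketch (not the adapter's actual code; function and parameter names are illustrative):

```typescript
// POSIX-style access check against stored metadata (illustrative sketch).
// want: bitmask of requested access: 4 = read, 2 = write, 1 = execute.
function canAccess(
  entry: { mode: number; uid: number; gid: number },
  actor: { uid: number; gid: number },
  want: number,
): boolean {
  // Pick the owner, group, or other permission triplet from the mode bits.
  const shift = actor.uid === entry.uid ? 6 : actor.gid === entry.gid ? 3 : 0;
  const bits = (entry.mode >> shift) & 0o7;
  return (bits & want) === want;
}
```

For a file with mode 0o640, the owner may read and write, group members may only read, and everyone else is denied.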
Symlinks are stored as rows with type = 'symlink' and the target column set. The adapter resolves symlink chains when readFile or stat is called, and respects the same ELOOP limit as the layer.
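Chain resolution with a hop limit can be sketched against an in-memory map of entries. The hop limit of 40 and the helper names are assumptions for illustration; the adapter's real resolver also handles relative targets and directory components:

```typescript
// Sketch of symlink-chain resolution with a hop limit (illustrative only).
type Entry = { type: "file" | "symlink"; target?: string };

function resolveChain(db: Map<string, Entry>, path: string, maxHops = 40): string {
  let current = path;
  for (let hops = 0; hops <= maxHops; hops++) {
    const entry = db.get(current);
    if (!entry) throw Object.assign(new Error(`ENOENT: ${current}`), { code: "ENOENT" });
    if (entry.type !== "symlink") return current; // reached a real entry
    current = entry.target!; // follow the link
  }
  throw Object.assign(new Error(`ELOOP: too many symlinks: ${path}`), { code: "ELOOP" });
}
```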
To pre-populate a SQLite filesystem from a host directory:
import { SqliteAdapter } from "@catmint-fs/sqlite-adapter";
const adapter = new SqliteAdapter({ database: "snapshot.sqlite" });
await adapter.importFrom("/path/to/source");
// Recursively copies all files, directories, symlinks, and metadata
// from the host path into the database.
await adapter.close();

importFrom is a convenience method on the adapter, not part of the FsAdapter interface.
await adapter.exportTo("/path/to/destination");
// Recursively writes all entries from the database to the host path.

When layer.apply({ transaction: true }) is used with the SQLite adapter, the adapter wraps all write operations in a single SQLite transaction (BEGIN / COMMIT / ROLLBACK). This means rollback is handled natively by SQLite rather than by the layer's manual revert logic, making it both faster and more reliable.
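The wrapping the adapter performs can be sketched with a stand-in database handle. The names here (DbHandle, applyInTransaction) are illustrative, not the adapter's internals:

```typescript
// Illustrative sketch of transactional apply: run every write inside one
// BEGIN/COMMIT, and ROLLBACK if any operation throws.
interface DbHandle {
  exec(sql: string): void; // stand-in for a SQLite driver's exec
}

function applyInTransaction(db: DbHandle, ops: Array<() => void>): void {
  db.exec("BEGIN");
  try {
    for (const op of ops) op();
    db.exec("COMMIT");
  } catch (err) {
    db.exec("ROLLBACK"); // SQLite reverts every write at once
    throw err;
  }
}
```

Either every write lands or none does, with no per-operation revert bookkeeping in the layer.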
@catmint-fs/git is a pure-TypeScript git implementation that performs all git operations through a @catmint-fs/core layer. It reads and writes git data structures (loose objects, packfiles, the index, refs, config) directly via the layer's filesystem API — there is no dependency on the git CLI binary, libgit2, or any other native git implementation.
Because all I/O goes through the layer, @catmint-fs/git works with any adapter that @catmint-fs/core supports. A git repository can live on the local filesystem, in a SQLite database, or on any custom backend — the git operations are identical.
| Reason | Detail |
|---|---|
| Portability | No native dependencies. Runs anywhere Node.js runs — including serverless, containers, and CI environments without git installed. |
| Browser compatibility | Uses only platform-neutral APIs (Uint8Array, ReadableStream, fetch()). Works in browsers when paired with a browser-compatible adapter (e.g. IndexedDB-backed), enabling in-browser git clients and web-based IDEs. |
| Virtual filesystem integration | Git operations go through the catmint layer, meaning changes can be staged, previewed, and applied just like any other filesystem operation. |
| Adapter-agnostic | A git repository backed by SQLite, S3, IndexedDB, or an in-memory adapter works without modification. |
| Standards-compatible | Reads and writes the same on-disk format as canonical git. Repositories created or modified by @catmint-fs/git are fully compatible with git, GitHub, GitLab, and any other standard git tooling. |
@catmint-fs/git implements the standard git on-disk format:
- Object storage: Loose objects (zlib-compressed, SHA-1 addressed) and packfiles (v2 format with OFS_DELTA and REF_DELTA). Objects are written as loose objects by default. Packfile reading is supported (required for clone/fetch), but packing/repacking of local objects is out of scope for the initial version.
- References: Loose refs (refs/heads/*, refs/tags/*, refs/remotes/*) and the packed-refs file. Symbolic refs (HEAD).
- Index: Git index format v2 (the most widely supported version). Supports file mode, SHA-1, flags, and extended flags.
- Config: Standard .git/config INI-like format.
- Protocol: Git smart HTTP protocol for remote operations. Other transports (SSH, local) via a pluggable transport interface.
Git operations require SHA-1 hashing and zlib compression/decompression. Rather than depending on Node.js crypto and zlib modules, @catmint-fs/git uses platform-neutral implementations:
- SHA-1: Web Crypto API (crypto.subtle.digest("SHA-1", ...)) is available in both Node.js and browsers. A pure-JS fallback may be included for environments without Web Crypto.
- zlib: A portable zlib implementation (e.g. pako or fflate) is used for deflate/inflate. These are pure JavaScript and work in both environments.
┌────────────────────────────────────────────┐
│ User Code │
│ const repo = await openRepository(layer) │
│ await repo.commit(...) │
└─────────────────┬──────────────────────────┘
│
┌─────────────────▼──────────────────────────┐
│ @catmint-fs/git │
│ Repository, Index, ObjectDB, RefStore, │
│ MergeEngine, DiffEngine, Transport │
└─────────────────┬──────────────────────────┘
│ reads/writes via layer API
┌─────────────────▼──────────────────────────┐
│ @catmint-fs/core (Layer) │
│ Virtual filesystem overlay │
└─────────────────┬──────────────────────────┘
│ falls through to adapter
┌─────────────────▼──────────────────────────┐
│ FsAdapter │
│ LocalAdapter / SqliteAdapter / custom │
└────────────────────────────────────────────┘
The Repository object is the primary interface. It holds a reference to a layer and provides all git operations as async methods. Internally it delegates to specialised subsystems:
| Subsystem | Responsibility |
|---|---|
| ObjectDB | Reading and writing git objects (blobs, trees, commits, tags). Handles loose objects and packfile decoding/encoding. |
| RefStore | Managing refs (branches, tags, HEAD, symbolic refs). Reads/writes loose refs and packed-refs. |
| Index | Reading and writing the git index (staging area). Binary format v2 parsing and serialisation. |
| MergeEngine | Three-way merge of trees and blobs. Conflict detection and marker generation. |
| DiffEngine | Computing diffs between trees, commits, index, and working directory. |
| Transport | Network communication for fetch/push/clone via the smart HTTP protocol (or pluggable transports). |
import { createLayer, LocalAdapter } from "@catmint-fs/core";
import { initRepository, openRepository, cloneRepository } from "@catmint-fs/git";

// --- Initialize a new repository ---
const newLayer = await createLayer({ root: "/path/to/project" });
const newRepo = await initRepository(newLayer, {
  defaultBranch: "main", // optional, defaults to "main"
  bare: false,           // optional, defaults to false
});

// --- Open an existing repository ---
const existingLayer = await createLayer({ root: "/path/to/existing-repo" });
const existingRepo = await openRepository(existingLayer);
// Throws if no .git directory (or bare repo structure) is found.

// --- Clone a remote repository ---
const cloneLayer = await createLayer({ root: "/path/to/destination" });
const clonedRepo = await cloneRepository(cloneLayer, {
  url: "https://github.com/user/repo.git",
  depth: 1,       // optional — shallow clone
  branch: "main", // optional — checkout this branch (default: remote HEAD)
  bare: false,    // optional — bare clone
});

| Function | Description |
|---|---|
| initRepository(layer, options?) | Creates a new git repository at the layer root. Writes .git/ structure (or bare structure). Throws REPO_NOT_EMPTY if a .git directory already exists. Returns a Repository. |
| openRepository(layer) | Opens an existing git repository at the layer root. Validates that the git directory structure exists. Returns a Repository. |
| cloneRepository(layer, options) | Clones a remote repository into the layer root. Throws REPO_NOT_EMPTY if the destination is not empty. Fetches objects, creates refs, and checks out the default branch (unless bare). Returns a Repository. |
| Option | Type | Default | Description |
|---|---|---|---|
| defaultBranch | string | "main" | Name of the initial branch. |
| bare | boolean | false | When true, creates a bare repository (no working tree, git data at root). |
| Option | Type | Default | Description |
|---|---|---|---|
| url | string | (required) | Remote repository URL. |
| branch | string | remote HEAD | Branch to check out after clone. |
| depth | number | undefined | Shallow clone depth. undefined means full clone. |
| bare | boolean | false | When true, creates a bare clone. |
| transport | GitTransport | httpTransport() | Transport to use for network communication. |
Once a Repository is obtained, all git operations are available as methods. The repository holds a reference to the layer and manages the .git directory internally.
// Create a new branch (does not switch to it)
await repo.createBranch("feature/login");
// Create from a specific commit
await repo.createBranch("feature/login", { startPoint: "abc1234" });
// Delete a branch
await repo.deleteBranch("feature/login");
// Force-delete an unmerged branch
await repo.deleteBranch("feature/login", { force: true });
// List branches
const branches = await repo.listBranches();
// branches: BranchInfo[]
// [
// { name: "main", current: true, commit: "abc1234...", upstream: "origin/main" },
// { name: "feature/login", current: false, commit: "def5678...", upstream: null },
// ]
const remoteBranches = await repo.listBranches({ remote: true });
// Lists remote-tracking branches (refs/remotes/*)
// Get current branch
const current = await repo.currentBranch();
// current: string | null (null if in detached HEAD state)
// Rename a branch
await repo.renameBranch("old-name", "new-name");
// Checkout a branch or commit
await repo.checkout("feature/login");
// Checkout and create if it doesn't exist
await repo.checkout("feature/new", { create: true });
// Checkout a specific commit (detached HEAD)
await repo.checkout("abc1234");
// Force checkout (discard uncommitted changes)
await repo.checkout("main", { force: true });

| Method | Return type | Description |
|---|---|---|
| createBranch(name, options?) | Promise<void> | Creates a new branch pointing at startPoint (default: HEAD). Throws if the branch already exists (unless force: true). |
| deleteBranch(name, options?) | Promise<void> | Deletes a branch. Throws CURRENT_BRANCH if the branch is currently checked out. Throws BRANCH_NOT_FULLY_MERGED if unmerged (unless force: true). |
| listBranches(options?) | Promise<BranchInfo[]> | Lists local branches (or remote-tracking branches if remote: true). |
| currentBranch() | Promise<string \| null> | Returns the current branch name, or null if HEAD is detached. |
| renameBranch(oldName, newName) | Promise<void> | Renames a branch. Updates the ref and any tracking configuration. |
| checkout(ref, options?) | Promise<void> | Updates HEAD and the working tree to match the target ref. |
| Option | Type | Default | Description |
|---|---|---|---|
| startPoint | string | "HEAD" | Ref or OID to point the new branch at. |
| force | boolean | false | Overwrite the branch if it already exists. |
| Option | Type | Default | Description |
|---|---|---|---|
| force | boolean | false | Delete the branch even if it has unmerged commits. |
| Option | Type | Default | Description |
|---|---|---|---|
| remote | boolean | false | When true, lists remote-tracking branches (refs/remotes/*) instead of local branches. |
| Option | Type | Default | Description |
|---|---|---|---|
| create | boolean | false | Create the branch if it doesn't exist (equivalent to git checkout -b). |
| force | boolean | false | Discard uncommitted changes in the working tree. Without force, throws if there are uncommitted changes that would be overwritten. |
// Stage a file for commit
await repo.add("src/index.ts");
// Stage multiple files
await repo.add(["src/index.ts", "src/utils.ts"]);
// Stage all changes
await repo.add(".");
// Unstage a file (reset to HEAD state in index)
await repo.unstage("src/index.ts");
await repo.unstage(["src/index.ts", "src/utils.ts"]);
// Remove a file from the working tree and the index
await repo.remove("src/old-file.ts");
// Remove from index only (keep the working tree file)
await repo.remove("src/old-file.ts", { cached: true });
// Get repository status
const status = await repo.status();
// status: StatusEntry[]
// [
// { path: "src/index.ts", index: "modified", workingTree: "unmodified" },
// { path: "src/new-file.ts", index: "untracked", workingTree: "untracked" },
// { path: "src/deleted.ts", index: "deleted", workingTree: "deleted" },
// ]
// Status of a single file
const fileStatus = await repo.status("src/index.ts");
// fileStatus: StatusEntry | null
// List files tracked in the index
const tracked = await repo.listFiles();
// tracked: string[] — paths of all files in the index
// Check if a path is ignored by .gitignore
const ignored = await repo.isIgnored("node_modules/foo.js");
// ignored: boolean

| Method | Return type | Description |
|---|---|---|
| add(pathOrPaths) | Promise<void> | Stages files for the next commit. "." stages all changes. |
| unstage(pathOrPaths) | Promise<void> | Removes files from the index, resetting them to their HEAD state. Does not modify the working tree. |
| remove(pathOrPaths, options?) | Promise<void> | Removes files from the working tree and the index (or index only with { cached: true }). |
| status(path?) | Promise<StatusEntry[] \| StatusEntry \| null> | Without arguments: returns status for all files. With a path: returns status for that file (or null if the path is not known to git). |
| listFiles() | Promise<string[]> | Lists all files tracked in the index. |
| isIgnored(path) | Promise<boolean> | Checks whether a path matches any .gitignore rule. |
| Option | Type | Default | Description |
|---|---|---|---|
| cached | boolean | false | When true, removes the file from the index only. The working tree file is preserved. |
The index and workingTree fields on StatusEntry use these values:
| Value | Meaning |
|---|---|
| "unmodified" | No changes relative to the comparison base (HEAD for index, index for workingTree). |
| "modified" | Content or mode has changed. |
| "added" | New file (exists in index/workingTree but not in the comparison base). |
| "deleted" | File removed. |
| "renamed" | File has been renamed (detected by content similarity). |
| "untracked" | File exists in the working tree but is not tracked by git. Only appears in workingTree. |
| "ignored" | File matches a .gitignore rule. Only appears in workingTree when explicitly requested. |
// Create a commit
const oid = await repo.commit({
  message: "feat: add login page",
  author: { name: "Jane Doe", email: "jane@example.com" },
});
// oid: string — the SHA-1 hash of the new commit

// Author/committer default to git config user.name / user.email if omitted
const typoFixOid = await repo.commit({ message: "fix: typo" });

// Allow empty commits (no staged changes)
const emptyOid = await repo.commit({ message: "chore: empty", allowEmpty: true });
// View commit history
const commits = await repo.log();
// commits: CommitInfo[]
// [
// {
// oid: "abc1234...",
// message: "feat: add login page",
// author: { name: "Jane Doe", email: "jane@example.com", timestamp: 1700000000, timezoneOffset: -300 },
// committer: { name: "Jane Doe", email: "jane@example.com", timestamp: 1700000000, timezoneOffset: -300 },
// parents: ["def5678..."],
// tree: "789abcd...",
// },
// ...
// ]
// Limit log depth
const recent = await repo.log({ maxCount: 10 });
// Log from a specific ref
const branchLog = await repo.log({ ref: "feature/login" });
// Log for a specific file
const fileLog = await repo.log({ path: "src/index.ts" });
// Read a single commit by OID
const commit = await repo.readCommit("abc1234");
// commit: CommitInfo

| Method | Return type | Description |
|---|---|---|
| commit(options) | Promise<string> | Creates a new commit from the current index. Returns the commit OID. Throws if nothing is staged (unless allowEmpty: true). |
| log(options?) | Promise<CommitInfo[]> | Returns commit history starting from ref (default: HEAD). |
| readCommit(oid) | Promise<CommitInfo> | Reads a single commit object by its OID. |
| Option | Type | Default | Description |
|---|---|---|---|
| message | string | (required) | Commit message. |
| author | GitIdentity | from config | Author identity. Falls back to user.name / user.email in git config. |
| committer | GitIdentity | same as author | Committer identity. Defaults to author if omitted. |
| allowEmpty | boolean | false | Allow creating a commit with no changes. |
| amend | boolean | false | Replace the current HEAD commit instead of creating a new one. |
| Option | Type | Default | Description |
|---|---|---|---|
| ref | string | "HEAD" | Starting ref or OID. |
| maxCount | number | undefined | Maximum number of commits to return. |
| path | string | undefined | Only include commits that modified this path. |
| since | Date | undefined | Only include commits after this date. |
| until | Date | undefined | Only include commits before this date. |
// Add a remote
await repo.addRemote("origin", "https://github.com/user/repo.git");
// List remotes
const remotes = await repo.listRemotes();
// remotes: RemoteInfo[]
// [{ name: "origin", url: "https://github.com/user/repo.git" }]
// Delete a remote
await repo.deleteRemote("origin");
// Fetch from a remote
await repo.fetch("origin");
// Fetch a specific branch
await repo.fetch("origin", { branch: "main" });
// Pull (fetch + merge)
await repo.pull("origin", {
branch: "main",
author: { name: "Jane Doe", email: "jane@example.com" },
});
// Push to a remote
await repo.push("origin");
// Push a specific branch
await repo.push("origin", { branch: "main" });
// Force push
await repo.push("origin", { branch: "main", force: true });
// Push and set upstream
await repo.push("origin", { branch: "main", setUpstream: true });

| Method | Return type | Description |
|---|---|---|
| addRemote(name, url) | Promise<void> | Adds a named remote. Throws if the name already exists. |
| listRemotes() | Promise<RemoteInfo[]> | Lists all configured remotes. |
| deleteRemote(name) | Promise<void> | Removes a named remote and its tracking refs. |
| fetch(remote, options?) | Promise<FetchResult> | Downloads objects and refs from the remote. |
| pull(remote, options?) | Promise<MergeResult> | Fetches from the remote and merges into the current branch. If the fetch phase fails (e.g. TRANSPORT_ERROR), the error is thrown directly — no merge is attempted. |
| push(remote, options?) | Promise<PushResult> | Uploads local refs and objects to the remote. |
| Option | Type | Default | Description |
|---|---|---|---|
| branch | string | all branches | Fetch only this branch. |
| depth | number | undefined | Deepen or create a shallow clone to this depth. |
| tags | boolean | true | Also fetch tags. |
| transport | GitTransport | repo default | Override the transport for this operation. |
Inherits all fetch options, plus:
| Option | Type | Default | Description |
|---|---|---|---|
| branch | string | upstream of current branch | Remote branch to merge from. |
| fastForwardOnly | boolean | false | Only allow fast-forward merges. Throws if a merge commit would be required. |
| author | GitIdentity | from config | Author/committer for the merge commit (if one is created). |
| Option | Type | Default | Description |
|---|---|---|---|
| branch | string | current branch | Branch to push. |
| force | boolean | false | Force push (overwrite remote ref even if not a fast-forward). |
| setUpstream | boolean | false | Set the remote branch as the upstream for the local branch. |
| tags | boolean | false | Also push tags. |
| transport | GitTransport | repo default | Override the transport for this operation. |
// Merge a branch into the current branch
const result = await repo.merge("feature/login");
// result: MergeResult

// Merge with a custom commit message
const messagedResult = await repo.merge("feature/login", {
  message: "Merge feature/login into main",
  author: { name: "Jane Doe", email: "jane@example.com" },
});

// Fast-forward only
const ffResult = await repo.merge("feature/login", { fastForwardOnly: true });

// No fast-forward (always create a merge commit)
const noFfResult = await repo.merge("feature/login", { noFastForward: true });
// Abort a merge in progress (conflicts)
await repo.abortMerge();

| Method | Return type | Description |
|---|---|---|
| merge(ref, options?) | Promise<MergeResult> | Merges the target ref into the current branch. |
| abortMerge() | Promise<void> | Aborts an in-progress merge and restores the pre-merge state. Throws NOT_MERGING if no merge is in progress. |
interface MergeResult {
/** How the merge was resolved. */
type: "fast-forward" | "merge-commit" | "already-up-to-date";
/** The resulting commit OID (absent for "already-up-to-date"). */
oid?: string;
/** Conflicted file paths (non-empty only if merge has conflicts). */
conflicts: string[];
}

When conflicts are detected, the merge does not create a commit. Instead:

- Conflicted files are written to the working tree with standard conflict markers (<<<<<<<, =======, >>>>>>>).
- The index is updated with the three-way entries (stage 1 = base, stage 2 = ours, stage 3 = theirs).
- MergeResult.conflicts lists the conflicted paths.
- The repository enters a MERGING state (.git/MERGE_HEAD exists).
- The user resolves conflicts, stages the resolved files with repo.add(), and calls repo.commit() to complete the merge.
- Alternatively, the user calls repo.abortMerge() to abandon the merge.
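What a conflicted region looks like on disk can be sketched as a small marker writer. This is illustrative only; the engine's real output labels the markers with the actual refs involved:

```typescript
// Render a conflicted region with standard git conflict markers.
// Labels are illustrative; real output uses the branch/ref names involved.
function conflictMarkers(
  ours: string[],
  theirs: string[],
  oursLabel = "HEAD",
  theirsLabel = "feature",
): string {
  return [
    `<<<<<<< ${oursLabel}`,
    ...ours,
    "=======",
    ...theirs,
    `>>>>>>> ${theirsLabel}`,
  ].join("\n");
}
```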
The merge engine performs a recursive three-way merge:
- Find the merge base (common ancestor) of the two branches.
- If multiple merge bases exist, recursively merge them to produce a virtual base.
- Compare base → ours and base → theirs for every path.
- For paths modified on only one side: take that side's version.
- For paths modified on both sides: attempt a text-level three-way merge. If the changes overlap, mark the path as conflicted.
- Binary files (detected by null bytes in content) that are modified on both sides are always marked as conflicted — no content-level merge is attempted.
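The binary-detection rule in the last step can be sketched directly. The 8 KiB scan window is an assumption borrowed from canonical git's heuristic, not a documented @catmint-fs value:

```typescript
// Detect binary content by scanning for a NUL byte.
// The 8 KiB scan limit mirrors canonical git's heuristic (an assumption here).
function isBinary(content: Uint8Array, scanLimit = 8192): boolean {
  const end = Math.min(content.length, scanLimit);
  for (let i = 0; i < end; i++) {
    if (content[i] === 0) return true;
  }
  return false;
}
```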
| Option | Type | Default | Description |
|---|---|---|---|
| message | string | auto-generated | Commit message for the merge commit. |
| author | GitIdentity | from config | Author/committer identity. |
| fastForwardOnly | boolean | false | Only allow fast-forward merges. |
| noFastForward | boolean | false | Always create a merge commit, even if fast-forward is possible. |
// Create a lightweight tag
await repo.createTag("v1.0.0");
// Create pointing at a specific commit
await repo.createTag("v1.0.0", { target: "abc1234" });
// Create an annotated tag
await repo.createTag("v1.0.0", {
message: "Release 1.0.0",
tagger: { name: "Jane Doe", email: "jane@example.com" },
});
// Delete a tag
await repo.deleteTag("v1.0.0");
// List tags
const tags = await repo.listTags();
// tags: TagInfo[]
// [
// { name: "v1.0.0", oid: "abc1234...", type: "annotated", message: "Release 1.0.0" },
// { name: "v0.9.0", oid: "def5678...", type: "lightweight" },
// ]

| Method | Return type | Description |
|---|---|---|
| createTag(name, options?) | Promise<void> | Creates a tag. If message is provided, creates an annotated tag object; otherwise a lightweight tag (direct ref). |
| deleteTag(name) | Promise<void> | Deletes a tag. |
| listTags() | Promise<TagInfo[]> | Lists all tags with their metadata. |
// Diff working tree against index (unstaged changes)
const unstagedDiff = await repo.diff();
// unstagedDiff: DiffResult

// Diff index against HEAD (staged changes)
const stagedDiff = await repo.diff({ staged: true });

// Diff between two refs
const refDiff = await repo.diff({ from: "main", to: "feature/login" });

// Diff for a specific path
const pathDiff = await repo.diff({ path: "src/index.ts" });

| Method | Return type | Description |
|---|---|---|
| diff(options?) | Promise<DiffResult> | Computes differences between working tree, index, and/or commits. |
interface DiffResult {
files: DiffFile[];
}
interface DiffFile {
/** Path of the file. */
path: string;
/** Previous path (if renamed). */
oldPath?: string;
/** Type of change. */
status: "added" | "modified" | "deleted" | "renamed";
/** Whether the file is binary. */
binary: boolean;
/** Hunks (empty for binary files). */
hunks: DiffHunk[];
}
interface DiffHunk {
/** Header line (e.g. "@@ -1,5 +1,7 @@"). */
header: string;
/** Lines in the hunk. */
lines: DiffLine[];
}
interface DiffLine {
/** Line origin: "+" (added), "-" (removed), " " (context). */
origin: "+" | "-" | " ";
/** Line content (without the origin marker). */
content: string;
/** Line number in the old file (undefined for added lines). */
oldLineNumber?: number;
/** Line number in the new file (undefined for deleted lines). */
newLineNumber?: number;
}

| Option | Type | Default | Description |
|---|---|---|---|
| staged | boolean | false | Compare index against HEAD instead of working tree against index. |
| from | string | undefined | Source ref or OID. When set, to is also required. |
| to | string | undefined | Target ref or OID. |
| path | string | undefined | Limit diff to a specific path. |
| contextLines | number | 3 | Number of context lines around each change. |
// Stash working directory and index changes
await repo.stash();
// With a message
await repo.stash({ message: "WIP: login feature" });
// Include untracked files
await repo.stash({ includeUntracked: true });
// List stash entries
const stashes = await repo.listStashes();
// stashes: StashEntry[]
// [
// { index: 0, message: "WIP: login feature", oid: "abc1234..." },
// ]
// Apply the top stash entry (keep it in the stash)
await repo.applyStash();
// Apply a specific stash entry
await repo.applyStash({ index: 1 });
// Pop the top stash entry (apply and remove)
await repo.popStash();
await repo.popStash({ index: 1 });
// Drop a stash entry without applying
await repo.dropStash(0);

| Method | Return type | Description |
|---|---|---|
| stash(options?) | Promise<void> | Saves the current working tree and index state onto the stash stack. Resets the working tree to HEAD. Throws NOTHING_TO_STASH if there are no changes to stash. |
| listStashes() | Promise<StashEntry[]> | Lists all stash entries (newest first). |
| applyStash(options?) | Promise<void> | Applies a stash entry to the working tree without removing it from the stash. |
| popStash(options?) | Promise<void> | Applies a stash entry and removes it from the stash. |
| dropStash(index) | Promise<void> | Removes a stash entry without applying it. |
| Option | Type | Default | Description |
|---|---|---|---|
| message | string | auto-generated | Message describing the stash entry. |
| includeUntracked | boolean | false | When true, also stashes untracked files. |
| Option | Type | Default | Description |
|---|---|---|---|
| index | number | 0 | Index of the stash entry to apply/pop. 0 is the most recent. |
// Read a config value
const name = await repo.getConfig("user.name");
// name: string | null
// Set a config value
await repo.setConfig("user.name", "Jane Doe");
await repo.setConfig("user.email", "jane@example.com");
// Delete a config entry
await repo.deleteConfig("user.name");

| Method | Return type | Description |
|---|---|---|
| getConfig(key) | Promise<string \| null> | Reads a config value from .git/config. Returns null if not set. |
| setConfig(key, value) | Promise<void> | Writes a config value to .git/config. |
| deleteConfig(key) | Promise<void> | Removes a config entry from .git/config. |
Config is read from and written to the repository-level .git/config only. Global and system config are not read — this is a deliberate choice because the filesystem may be virtual (e.g. SQLite-backed) with no access to the user's home directory.
// Resolve a ref to its OID
const headOid = await repo.resolveRef("HEAD");
// headOid: string — full SHA-1

// Resolve a branch name
const mainOid = await repo.resolveRef("refs/heads/main");

// Short names are expanded: "main" → "refs/heads/main"
const shortOid = await repo.resolveRef("main");

// List refs matching a prefix
const refs = await repo.listRefs("refs/heads/");
// refs: RefEntry[] — [{ name: "refs/heads/main", oid: "abc1234..." }, ...]

// Set upstream tracking
await repo.setUpstream("main", "origin/main");

| Method | Return type | Description |
|---|---|---|
| resolveRef(ref) | Promise<string> | Resolves a ref (branch name, tag, HEAD, full refname, or OID prefix) to a full SHA-1 OID. Follows symbolic refs. Throws if the ref cannot be resolved. |
| listRefs(prefix?) | Promise<RefEntry[]> | Lists refs matching the given prefix. If no prefix is given, lists all refs. |
| setUpstream(branch, upstream) | Promise<void> | Configures the upstream (tracking) branch for a local branch. |
| setTransport(transport) | void | Sets the default transport used for all remote operations on this repository. Can be overridden per-operation. |
// Soft reset — move HEAD to a commit, keep index and working tree
await repo.reset("abc1234", { mode: "soft" });
// Mixed reset (default) — move HEAD, reset index, keep working tree
await repo.reset("HEAD~1");
// Hard reset — move HEAD, reset index and working tree
await repo.reset("HEAD~1", { mode: "hard" });
// Reset specific paths in the index (unstage)
await repo.reset("HEAD", { paths: ["src/index.ts"] });

| Method | Return type | Description |
|---|---|---|
| reset(ref, options?) | Promise<void> | Resets the current branch to the given ref. |

| Option | Type | Default | Description |
|---|---|---|---|
| mode | "soft" \| "mixed" \| "hard" | "mixed" | What to reset: soft = HEAD only, mixed = HEAD + index, hard = HEAD + index + working tree. |
| paths | string[] | undefined | When specified, only resets these paths in the index (mode is ignored). Equivalent to git reset -- <paths>. |
Remote operations (clone, fetch, push) require network communication. @catmint-fs/git defines a pluggable GitTransport interface and ships a default HTTP transport.
interface GitTransport {
/**
* Discover remote refs and capabilities.
* Implements the reference discovery phase of the git protocol.
*/
discover(url: string, service: "git-upload-pack" | "git-receive-pack"): Promise<TransportDiscoveryResult>;
/**
* Fetch objects from the remote.
* Sends "want" and "have" lines, receives a packfile.
*/
fetch(url: string, request: TransportFetchRequest): Promise<TransportFetchResponse>;
/**
* Push objects to the remote.
* Sends a packfile and ref update commands.
*/
push(url: string, request: TransportPushRequest): Promise<TransportPushResponse>;
}

The default transport implements the git smart HTTP protocol (v2 where supported, falling back to v1). It uses fetch() (the Web API) for HTTP requests, requiring no additional dependencies. This makes it fully browser-compatible — remote operations (clone, fetch, push) work in both Node.js and browser environments without modification.
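On the wire, the smart HTTP protocol frames every message as a pkt-line: four hex digits giving the total length (payload plus the 4 header bytes), followed by the payload, with `0000` as a flush packet delimiting sections. A minimal encoder sketch, not the shipped implementation, assuming ASCII payloads (real code would count UTF-8 bytes):

```typescript
// Encode one pkt-line: 4 hex digits of (payload length + 4), then the payload.
function pktLine(payload: string): string {
  const length = payload.length + 4; // ASCII only; use byte length in real code
  if (length > 0xffff) throw new Error("pkt-line payload too long");
  return length.toString(16).padStart(4, "0") + payload;
}

// The flush-pkt is written literally, with no payload.
const FLUSH = "0000";

// A protocol-v2 ls-refs request body is a sequence of pkt-lines:
const request = [pktLine("command=ls-refs\n"), FLUSH].join("");
// pktLine("command=ls-refs\n") → "0014command=ls-refs\n"
```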
import { httpTransport } from "@catmint-fs/git";

// Default — no auth
const transport = httpTransport();

// With authentication
const authTransport = httpTransport({
  auth: { username: "user", password: "token" },
});

// With custom headers
const headerTransport = httpTransport({
  headers: { "Authorization": "Bearer <token>" },
});

const repo = await cloneRepository(layer, {
  url: "https://github.com/user/repo.git",
  transport,
});

| Option | Type | Description |
|---|---|---|
| `auth` | `{ username: string; password: string }` | Basic authentication credentials. |
| `headers` | `Record<string, string>` | Custom HTTP headers added to every request. |
Third-party packages can implement the GitTransport interface to support other protocols (SSH, local filesystem, custom). The transport is passed at clone/fetch/push time or set as a default on the repository.
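For example, a test double satisfying the interface might look like the following. The types are abbreviated from the definitions elsewhere in this document, and the canned responses are placeholders; a real SSH or local-filesystem transport would implement the same three methods:

```typescript
// Abbreviated transport result type (see the full definitions in this document).
interface TransportDiscoveryResult {
  refs: Array<{ name: string; oid: string }>;
  capabilities: string[];
}

// A transport that answers from canned data — useful in tests.
class InMemoryTransport {
  constructor(private remoteRefs: Array<{ name: string; oid: string }>) {}

  async discover(
    url: string,
    service: "git-upload-pack" | "git-receive-pack",
  ): Promise<TransportDiscoveryResult> {
    // A real transport would issue a discovery request to `url` here.
    return { refs: this.remoteRefs, capabilities: ["shallow", "ofs-delta"] };
  }

  async fetch(url: string, request: { wants: string[]; haves: string[] }) {
    // A real transport would negotiate and return a packfile.
    return { packfile: new Uint8Array(), acks: request.haves };
  }

  async push(url: string, request: { updates: Array<{ ref: string }> }) {
    // Accept every ref update unconditionally.
    return {
      ok: true,
      refs: request.updates.map((u) => ({ ref: u.ref, status: "ok" as const })),
    };
  }
}
```

An instance can then be set as the repository default via `setTransport` or passed per operation.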
// Set a default transport for the repository
repo.setTransport(customTransport);
// Override per-operation
await repo.fetch("origin", { transport: sshTransport({ key: "..." }) });

The following are explicitly out of scope for the initial version:
- Rebase — complex multi-commit rewriting with conflict resolution at each step. Can be added later.
- Cherry-pick / revert — single-commit operations that can be built on the merge engine. Future addition.
- Blame — line-by-line commit attribution. Requires walking history per-line. Future addition.
- Submodules — adds significant complexity around nested repositories. Future addition.
- Worktrees — multiple working trees for a single repository. Future addition.
- Hooks — git hooks (pre-commit, post-merge, etc.) are not executed. The API is programmatic; callers can implement their own hook-like behaviour.
- Global/system git config — only repository-level `.git/config` is read. The filesystem may be virtual.
- Reflog — reference log tracking. Can be added later.
- `git gc` / repacking — automatic garbage collection and object repacking. Objects are written as loose objects. Manual pack/repack may be added later.
- Partial clone / sparse checkout — advanced clone filtering. Future addition.
- Signed commits/tags — GPG or SSH signature creation and verification. Future addition.
- SSH transport — only the HTTP transport is shipped. SSH can be added via the pluggable transport interface, but it is inherently server-side: it requires raw TCP socket access, which browsers do not provide.
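Since hooks are never executed by the library, hook-like behaviour lives in the caller. A sketch of a caller-side "pre-commit" guard built over the documented `status()`/`commit()` surface — the `GuardedRepo` interface here is a minimal stand-in, not the real `Repository` class:

```typescript
interface StatusEntry {
  path: string;
  index: string;       // status in the index relative to HEAD
  workingTree: string; // status in the working tree relative to the index
}

// Minimal stand-in for the parts of Repository this sketch needs.
interface GuardedRepo {
  status(): Promise<StatusEntry[]>;
  commit(message: string): Promise<string>; // returns the new commit OID
}

// Run a caller-supplied check over the staged paths before committing.
async function commitWithPreCommitCheck(
  repo: GuardedRepo,
  message: string,
  check: (stagedPaths: string[]) => string | null, // return an error string to block
): Promise<string> {
  const staged = (await repo.status())
    .filter((e) => e.index !== "unmodified" && e.index !== "untracked")
    .map((e) => e.path);
  const problem = check(staged);
  if (problem !== null) throw new Error(`pre-commit check failed: ${problem}`);
  return repo.commit(message);
}
```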
// Identity for author/committer/tagger (input — when creating commits/tags)
interface GitIdentity {
name: string;
email: string;
timestamp?: number; // Unix timestamp (seconds). Defaults to current time.
timezoneOffset?: number; // Minutes offset from UTC (e.g. -300 for EST). Defaults to local timezone.
}
// Identity as stored in git objects (output — when reading commits/tags).
// Same as GitIdentity but timestamp and timezoneOffset are always present.
interface GitIdentityFull {
name: string;
email: string;
timestamp: number; // Unix timestamp (seconds).
timezoneOffset: number; // Minutes offset from UTC.
}
// Branch information
interface BranchInfo {
name: string; // Branch name (e.g. "main", "feature/login")
current: boolean; // Whether this is the current (checked out) branch
commit: string; // OID of the commit the branch points to
upstream: string | null; // Upstream tracking branch (e.g. "origin/main") or null
}
// Remote information
interface RemoteInfo {
name: string; // Remote name (e.g. "origin")
url: string; // Remote URL
}
// Status of a single file
interface StatusEntry {
path: string;
index: StatusCode; // Status in the index relative to HEAD
workingTree: StatusCode; // Status in the working tree relative to the index
}
type StatusCode = "unmodified" | "modified" | "added" | "deleted" | "renamed" | "untracked" | "ignored";
// Commit information
interface CommitInfo {
oid: string; // SHA-1 hash of the commit object
message: string; // Full commit message
author: GitIdentityFull; // Author identity and timestamp
committer: GitIdentityFull; // Committer identity and timestamp
parents: string[]; // OIDs of parent commits
tree: string; // OID of the root tree object
}
// Tag information
interface TagInfo {
name: string;
oid: string; // OID the tag points to (commit OID for lightweight, tag object OID for annotated)
type: "lightweight" | "annotated";
message?: string; // Tag message (annotated tags only)
tagger?: GitIdentityFull; // Tagger identity (annotated tags only)
target?: string; // Target commit OID (annotated tags only — the tag object points here)
}
// Ref entry
interface RefEntry {
name: string; // Full ref name (e.g. "refs/heads/main")
oid: string; // OID the ref points to
}
// Stash entry
interface StashEntry {
index: number; // Stash index (0 = most recent)
message: string; // Stash message
oid: string; // OID of the stash commit
}
// Fetch result
interface FetchResult {
/** Refs that were updated. */
updated: Array<{ ref: string; oldOid: string | null; newOid: string }>;
}
// Push result (derived from TransportPushResponse — same shape, distinct type for the public API)
interface PushResult {
/** Whether the push was accepted. */
ok: boolean;
/** Per-ref results. */
refs: Array<{ ref: string; status: "ok" | "rejected"; reason?: string }>;
}
// Transport types
interface TransportDiscoveryResult {
refs: Array<{ name: string; oid: string }>;
capabilities: string[];
}
interface TransportFetchRequest {
wants: string[]; // OIDs to fetch
haves: string[]; // OIDs already present locally
depth?: number; // Shallow clone depth
}
interface TransportFetchResponse {
packfile: Uint8Array; // The received packfile
acks: string[]; // Acknowledged OIDs
}
interface TransportPushRequest {
updates: Array<{ ref: string; oldOid: string; newOid: string }>;
packfile: Uint8Array; // Packfile containing objects to push
}
interface TransportPushResponse {
ok: boolean;
refs: Array<{ ref: string; status: "ok" | "rejected"; reason?: string }>;
}

Git-specific errors extend the error conventions from @catmint-fs/core:
| Code | When |
|---|---|
| `NOT_A_GIT_REPO` | `openRepository` called on a layer that does not contain a git repository. |
| `ALREADY_EXISTS` | Creating a branch, tag, or remote that already exists. |
| `NOT_FOUND` | Resolving a ref, branch, tag, or OID that does not exist. Also thrown by `checkout` when the target ref does not exist (and `create` is false). |
| `DIRTY_WORKING_TREE` | Checkout or merge would overwrite uncommitted changes and `force` was not specified. |
| `NOTHING_TO_COMMIT` | `commit()` called with no staged changes and `allowEmpty` is false. |
| `NOTHING_TO_STASH` | `stash()` called when there are no changes to stash. |
| `DETACHED_HEAD` | Operation requires a current branch but HEAD is detached (e.g. push without specifying a branch, `setUpstream` when HEAD is detached). |
| `CURRENT_BRANCH` | `deleteBranch` called on the branch that is currently checked out. |
| `NON_FAST_FORWARD` | Push rejected because the remote has commits not present locally. Use `force: true` to override. |
| `TRANSPORT_ERROR` | Network or protocol error during a remote operation. Wraps the underlying error. |
| `INVALID_OBJECT` | Corrupt or malformed git object encountered during read. |
| `UNRESOLVED_CONFLICT` | `commit()` called while merge conflicts exist in the index. |
| `BRANCH_NOT_FULLY_MERGED` | `deleteBranch` on a branch that has commits not reachable from HEAD (use `force: true` to override). |
| `NOT_MERGING` | `abortMerge()` called when no merge is in progress. |
| `REPO_NOT_EMPTY` | `initRepository` called on a layer root that already contains a `.git` directory, or `cloneRepository` into a non-empty directory. |
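Callers typically branch on an error's `code` rather than parsing messages. A sketch, assuming the core convention is that errors carry a string `code` property (the `GitError` class here is illustrative, not the real export):

```typescript
// Illustrative error shape; the real classes live in the @catmint-fs packages.
// The only convention assumed here is a string `code` on the error.
class GitError extends Error {
  constructor(public code: string, message: string) {
    super(message);
  }
}

// Decide how to react to a failed push based on the error code.
function describePushFailure(err: unknown): string {
  if (err instanceof GitError) {
    switch (err.code) {
      case "NON_FAST_FORWARD":
        return "remote has new commits: fetch and merge, or push with force: true";
      case "DETACHED_HEAD":
        return "no current branch: specify a branch to push";
      case "TRANSPORT_ERROR":
        return `network/protocol failure: ${err.message}`;
      default:
        return `git error ${err.code}`;
    }
  }
  throw err; // not a git error: rethrow
}
```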
The layer enforces permissions at operation time, not at apply time. Permission checks are delegated to the adapter via checkPermission(). This is a deliberate design choice:
- `writeFile` checks that the parent directory is writable.
- `rm` checks that the file is writable and the parent directory is writable.
- `mkdir` checks that the parent directory is writable.
- `readFile` checks that the file is readable (for fall-through reads).
- `chmod` checks that the current process owns the file, or is running as root (matching POSIX semantics).
- `chown` checks that the current process is running as root (matching POSIX semantics). On non-root processes, throws `EPERM`.
- `symlink` checks that the parent directory of the link path is writable.
- `lchown` checks that the current process is running as root (matching POSIX semantics). On non-root processes, throws `EPERM`.
If the adapter declares capabilities.permissions === false, all permission checks are skipped. Operations are always permitted, and chmod/chown/lchown record changes in the ledger without pre-validation. This is appropriate for backends like S3 where permissions are managed externally (e.g. via IAM policies).
For files that only exist in the virtual layer (created within the session), permission checks are based on the virtual metadata assigned at creation time.
Once a chmod is recorded in the virtual layer, subsequent permission checks for that path use the virtual mode rather than the backing filesystem's mode. This means you can chmod a file to be writable and then writeFile to it, even if the backing file is read-only — the permission change and the write will both be applied together.
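That precedence (virtual metadata first, then the backing filesystem) can be sketched as follows; the ledger map and the adapter callback are simplifications of the real structures:

```typescript
// Virtual chmods recorded this session, keyed by path.
const virtualModes = new Map<string, number>();

// Resolve the effective mode: a recorded virtual chmod shadows the backing
// filesystem's mode for the rest of the session.
function effectiveMode(
  path: string,
  backingMode: (path: string) => number | undefined, // adapter stat, simplified
): number | undefined {
  return virtualModes.get(path) ?? backingMode(path);
}

function isWritable(mode: number): boolean {
  return (mode & 0o200) !== 0; // owner-write bit, simplified
}

// A backing file that is read-only on disk...
const backing = (p: string) => (p === "/etc/app.conf" ? 0o444 : undefined);
isWritable(effectiveMode("/etc/app.conf", backing)!); // false: writeFile would throw

// ...becomes writable for this session once a virtual chmod is recorded:
virtualModes.set("/etc/app.conf", 0o644);
isWritable(effectiveMode("/etc/app.conf", backing)!); // true: writeFile now permitted
```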
Errors mirror Node.js fs error conventions:
| Code | When |
|---|---|
| `ENOENT` | Path does not exist in virtual layer or host. |
| `EACCES` | Permission denied based on host permission check. |
| `EPERM` | Operation not permitted (e.g. `chown` by non-root process). |
| `EEXIST` | `mkdir` on an existing directory (when `recursive` is false), or `symlink` to an existing path. |
| `ENOTDIR` | Operation expects a directory but path is a file. |
| `EISDIR` | Operation expects a file but path is a directory (e.g. `rm` without `{ recursive: true }` on a directory, or `readFile` on a directory). |
| `ELOOP` | Too many levels of symbolic links encountered when resolving a path. |
| `EINVAL` | `readlink` called on a path that is not a symlink, or `root` is not an absolute path. |
| `ENOTEMPTY` | `rmdir` on a non-empty directory. |
| `DISPOSED` | Any operation on a disposed layer. |
| `ENOSYS` | Operation not supported by the adapter (e.g. `symlink` on a backend that does not support symlinks). |
| `TRANSACTION_FAILED` | Transactional apply failed. Wraps the root cause and any rollback errors. |
- Concurrent host changes: The layer does not watch the backing filesystem. If the backing filesystem changes after the layer is created, reads will reflect the updated state for paths not overridden in the layer. `apply()` operates on the backing filesystem as-is at apply time and does not attempt conflict detection. Conflict detection may be added in a future version.
- Nested layers: Out of scope for v1. A single layer per root is the supported model.
- Large files: The virtual layer stores file contents in memory. For very large files, this may be a concern. A streaming or temp-file-backed strategy can be considered in a later version.
- Symlinks pointing outside the root: A symlink target may resolve to a path outside the layer's `root`. The layer allows this — it does not sandbox symlink targets. The resolved path is read from / written to the backing filesystem as normal. This matches real filesystem behaviour, but callers should be aware of it if building sandboxed tools.
- Host symlinks: If a path on the backing filesystem is already a symlink, the layer follows it transparently for `readFile`, `stat`, `writeFile`, etc. The layer does not intercept or virtualise existing symlinks unless the user explicitly operates on them (e.g. `rm` on the symlink path, or `readlink`).
- Transaction rollback failures: If a rollback during transactional apply itself encounters errors (e.g. another process deleted a file mid-rollback), the `TransactionError` includes both the original cause and the rollback errors. The backing filesystem may be in a partially reverted state. The layer's pending changes are still preserved so the caller can inspect and recover.
- Remote adapter latency: The layer does not batch or pipeline adapter calls. Each fall-through read is a separate adapter call. For high-latency backends, callers should consider pre-loading content or implementing caching within their adapter.
- Adapter errors: If the adapter throws an unexpected error (network failure, auth expiry, etc.), the layer surfaces it as-is without wrapping. The layer only wraps errors that it generates itself (e.g. `ENOENT` for a path deleted in the virtual layer). Adapter authors are responsible for providing meaningful error messages.
- Adapter consistency: The layer assumes the adapter provides a consistent view of the backing filesystem for the duration of the session. If the adapter's backend is eventually consistent (e.g. S3), the caller should be aware that fall-through reads may return stale data.
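One way to address the latency point above is a memoising wrapper around the adapter's read path. A sketch, where the `readFile`-only `ReadAdapter` shape is a simplification of the full `FsAdapter` interface:

```typescript
// Simplified read side of an adapter: just readFile.
interface ReadAdapter {
  readFile(path: string): Promise<Uint8Array>;
}

// Memoise fall-through reads so repeated reads of the same path cost a single
// round-trip. This is safe only because the layer already assumes a consistent
// backing view for the session (see "Adapter consistency" above).
function withReadCache(inner: ReadAdapter): ReadAdapter {
  const cache = new Map<string, Promise<Uint8Array>>();
  return {
    readFile(path: string) {
      let hit = cache.get(path);
      if (!hit) {
        hit = inner.readFile(path); // caching the promise also dedupes in-flight reads
        cache.set(path, hit);
      }
      return hit;
    },
  };
}
```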
The monorepo uses pnpm workspaces. The root pnpm-workspace.yaml defines the workspace packages:
# pnpm-workspace.yaml
packages:
- "packages/*"

pnpm-workspace.yaml
package.json # Root — workspace scripts, shared devDependencies
tsconfig.json # Root — shared TypeScript config
packages/
core/
src/
index.ts # Public API entry point
layer.ts # Layer implementation
ledger.ts # Change tracking
permissions.ts # Permission checking logic
adapter.ts # FsAdapter interface and types
adapters/
local.ts # LocalAdapter (Node.js fs)
types.ts # Shared types and interfaces
tests/
package.json # name: @catmint-fs/core
tsconfig.json
sqlite-adapter/
src/
index.ts # Public API entry point
sqlite-adapter.ts # SqliteAdapter implementation
schema.ts # Table creation and migrations
import-export.ts # importFrom / exportTo helpers
tests/
package.json # name: @catmint-fs/sqlite-adapter, peer dep on @catmint-fs/core
tsconfig.json
git/
src/
index.ts # Public API entry point (initRepository, openRepository, cloneRepository, httpTransport)
repository.ts # Repository class — all git operations
object-db.ts # ObjectDB — loose objects and packfile read/write
ref-store.ts # RefStore — branches, tags, HEAD, packed-refs
index-file.ts # Index — git index (staging area) binary format v2
merge-engine.ts # MergeEngine — three-way merge, conflict detection
diff-engine.ts # DiffEngine — tree/commit/index/workdir diff
stash.ts # Stash save/apply/pop/drop logic
transport/
types.ts # GitTransport interface and transport types
http.ts # Built-in smart HTTP transport
pack/
read.ts # Packfile reader (v2 format, OFS_DELTA, REF_DELTA)
write.ts # Packfile writer (for push)
config.ts # Git config (.git/config) parser and writer
ignore.ts # .gitignore pattern matching
types.ts # Git-specific types (GitIdentity, CommitInfo, etc.)
errors.ts # Git-specific error codes
tests/
package.json # name: @catmint-fs/git, peer dep on @catmint-fs/core
tsconfig.json
All packages are published under the @catmint-fs scope. Inter-package dependencies use pnpm's workspace: protocol (e.g. "@catmint-fs/core": "workspace:*") during development, which resolves to the actual version at publish time.
All packages use Vitest as the test runner. Tests are co-located with their packages under packages/*/tests/. Coverage is collected with Vitest's built-in coverage provider (@vitest/coverage-v8).
90%+ line and branch coverage across @catmint-fs/core, @catmint-fs/sqlite-adapter, and @catmint-fs/git. Every error code in the Error Handling tables (both core and git) must have at least one test that triggers it.
Unit tests validate individual components in isolation. The backing adapter is replaced with a minimal in-memory stub where needed.
| Area | What to test |
|---|---|
| Ledger | Inserting, querying, and merging change entries. Verify that overwrites, deletes-after-create, chmod-after-create, etc. produce the correct merged state. |
| Fall-through reads | readFile, readdir, stat, lstat, readlink, exists delegate to the adapter when no virtual override exists. |
| Virtual overrides | After writeFile, mkdir, rm, rename, symlink, chmod, chown, lchown — reads return virtual state, not adapter state. |
| Permission enforcement | Operations that should check permissions (writeFile, rm, mkdir, readFile, chmod, chown, symlink, lchown) throw the correct error code when denied. Verify that capabilities.permissions === false skips all checks. |
| Change semantics | Every row in the Change Semantics table is a test case. Includes: write-to-existing, write-to-new, delete-existing, delete-virtual-only, readdir-with-mixed-changes, chmod/chown on deleted paths. |
| Rename semantics | Every row in the Rename Semantics table. Includes: read source after rename, read dest after rename, write to source after rename, write to dest after rename, delete dest after rename, rename to existing path. |
| Symlink semantics | Creating symlinks, read-through behavior (readFile, stat, lstat, readlink, exists, readdir), modifying symlinks (writeFile through symlink, rm, rename, chmod/chown follow-through, lchown on symlink itself), dangling symlinks, symlink chains up to and exceeding ELOOP limit. |
| Case sensitivity | With a case-insensitive stub adapter: write Foo.txt then read foo.txt returns content. With a case-sensitive stub: same operation throws ENOENT. |
| `getChanges()` / `getChangeDetail()` | Correct `ChangeEntry` and `ChangeDetail` shapes for all operation types. Verify merged state (e.g. create + chmod = single detail with content and mode). |
| `apply()` — best-effort | Changes are applied in the documented order. Failed operations produce `ApplyError` entries. Layer is reset after apply. |
| `apply()` — transaction | On failure: all prior changes are rolled back, layer is not reset. `TransactionError` contains root cause and rollback errors. On success: layer is reset. |
| Apply ordering | Directory creates are shallowest-first, directory deletes are deepest-first, other operations follow ledger insertion order. |
| `dispose()` / `reset()` | After `dispose()`, all operations throw `DISPOSED`. After `reset()`, the layer is empty but usable. |
| Error codes | Each error code (`ENOENT`, `EACCES`, `EPERM`, `EEXIST`, `ENOTDIR`, `EISDIR`, `ELOOP`, `EINVAL`, `ENOTEMPTY`, `DISPOSED`, `ENOSYS`, `TRANSACTION_FAILED`) is triggered by at least one test. |
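The apply-ordering rule tested above (directory creates shallowest-first, directory deletes deepest-first) can be sketched as a pair of depth sorts; the helper names are illustrative:

```typescript
// Depth of a path = number of segments ("/a/b/c" → 3).
const depth = (p: string) => p.split("/").filter(Boolean).length;

// Order directory creates shallowest-first so parents exist before children,
// and directory deletes deepest-first so directories are empty when removed.
function orderDirOps(creates: string[], deletes: string[]) {
  return {
    creates: [...creates].sort((a, b) => depth(a) - depth(b)),
    deletes: [...deletes].sort((a, b) => depth(b) - depth(a)),
  };
}

orderDirOps(["/a/b/c", "/a", "/a/b"], ["/x", "/x/y"]);
// → creates: ["/a", "/a/b", "/a/b/c"], deletes: ["/x/y", "/x"]
```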
Integration tests exercise the full lifecycle against real adapters. Each test creates a layer, performs operations, applies or disposes, and verifies the resulting state on the backing store.
These tests run against a real temporary directory on the host filesystem.
| Area | What to test |
|---|---|
| Full lifecycle | createLayer → file operations → apply() → verify files on disk → dispose(). |
| Apply correctness | After apply, the host directory matches the expected state: new files exist, deleted files are gone, renames are reflected, permissions are set, symlinks point to correct targets. |
| Transaction rollback | Simulate a failure mid-apply (e.g. write to a read-only path) and verify the host directory is restored to its original state. |
| Fall-through after apply | After apply and reset, reads fall through to the host and reflect the applied state. |
| Case sensitivity detection | Verify that LocalAdapter correctly detects the host filesystem's case sensitivity via initialize(). |
These tests run against both file-backed and in-memory (:memory:) SQLite databases.
| Area | What to test |
|---|---|
| Full lifecycle | Same as LocalAdapter but with SQLite as the backing store. |
| Schema creation | Opening a new database creates the files and metadata tables with the correct schema. |
| `importFrom` / `exportTo` | Round-trip: import a host directory, export to a different path, verify the exported tree matches the original. Includes files, directories, symlinks, permissions, and ownership. |
| SQLite transactions | apply({ transaction: true }) wraps operations in a SQLite transaction. On failure, the database is unchanged (verified by querying the files table). |
| Persistence | Write to a file-backed database, close the adapter, reopen it, and verify the data survives. |
| Case sensitivity config | A database created with caseSensitive: false treats Foo.txt and foo.txt as the same path. |
These tests verify behavior under unusual or adversarial conditions.
| Area | What to test |
|---|---|
| Symlink `ELOOP` | Create a symlink chain that exceeds the OS limit. Verify `ELOOP` is thrown on `readFile`, `stat`, and `exists`. |
| Deeply nested directories | Create and apply a deeply nested directory tree (e.g. 100 levels). Verify apply ordering handles it correctly. |
| Large file in virtual layer | Write a large file (e.g. 100 MB) to the virtual layer. Verify readFile, createReadStream, and apply handle it without corruption. |
| Many changes | Accumulate a large number of changes (e.g. 10,000 file creates) and verify apply() completes and getChanges() returns the correct count. |
| Concurrent host modification | Modify a host file after layer creation but before apply. Verify fall-through reads see the updated host content. Verify apply overwrites the host file with the virtual content. |
| Apply to disappeared path | Delete a host directory after layer creation. Verify that apply to that directory fails with the expected error and transaction mode rolls back. |
| Disposed layer | Call every method on a disposed layer and verify each throws DISPOSED. |
| Double apply | Call apply() twice in a row — second apply should be a no-op (no changes to apply after reset). |
| Reset then apply | Call reset() then apply() — should be a no-op. |
Unit tests validate git subsystems in isolation. An in-memory layer (using a minimal adapter) is used as the backing filesystem so tests run fast and deterministically.
| Area | What to test |
|---|---|
| ObjectDB — loose objects | Write and read blobs, trees, commits, and annotated tag objects. Verify SHA-1 hash correctness. Verify zlib compression. Verify correct header format ("blob <size>\0", etc.). |
| ObjectDB — packfiles | Read packfiles with OFS_DELTA and REF_DELTA entries. Resolve delta chains. Verify that objects resolved from packs match their loose equivalents. Write packfiles for push. |
| RefStore — loose refs | Create, read, update, and delete refs under refs/heads/, refs/tags/, refs/remotes/. Verify file content format (<oid>\n). |
| RefStore — symbolic refs | HEAD as a symbolic ref (ref: refs/heads/main\n). Detached HEAD (direct OID). Resolve through symbolic ref chains. |
| RefStore — packed-refs | Read and write the packed-refs file. Verify that loose refs take precedence over packed refs. |
| Index — binary format | Parse and serialise git index v2 format. Verify entry fields (mode, OID, flags, path). Round-trip: write then read produces identical entries. |
| Index — staging | add stages a file (computes blob OID, writes object, updates index entry). unstage resets an entry to HEAD state. remove deletes an entry. |
| Status | All StatusCode values are produced for the correct scenarios. Verify index-vs-HEAD and workingTree-vs-index comparisons. Untracked files. Ignored files (.gitignore matching). |
| Commit | Creates a tree from the index, creates a commit object with correct parent(s), updates the branch ref. Verify author/committer fields and timestamp. Verify allowEmpty and amend options. |
| Log | Walks the commit graph from a ref. Respects maxCount, since, until, path filters. Handles merge commits (multiple parents). |
| Branches | createBranch writes a ref. deleteBranch removes it. renameBranch moves the ref and updates config. listBranches enumerates refs/heads/ and refs/remotes/. currentBranch reads HEAD. |
| Checkout | Updates HEAD (symbolic for branch, detached for commit). Updates working tree files to match the target tree. Detects dirty working tree and throws DIRTY_WORKING_TREE unless force. create option creates a new branch. |
| Tags | Lightweight tags create a direct ref. Annotated tags create a tag object and a ref pointing to it. deleteTag removes the ref. listTags enumerates refs/tags/ and distinguishes lightweight from annotated. |
| Merge — fast-forward | When the current branch is an ancestor of the target, HEAD is advanced without a merge commit. |
| Merge — merge commit | Three-way merge of diverged branches. Correct tree content. Two parents. noFastForward forces a merge commit even when fast-forward is possible. |
| Merge — conflicts | Overlapping changes produce conflict markers in the working tree. Index has stage 1/2/3 entries. MergeResult.conflicts lists paths. abortMerge restores pre-merge state. |
| Merge — binary conflicts | Binary files modified on both sides are always conflicted (no content merge attempted). |
| Diff | Working tree vs index, index vs HEAD, commit vs commit. Correct DiffFile status, hunks, and line numbers. Binary file detection. Rename detection. Context lines option. |
| Stash | stash saves working tree + index state, resets to HEAD. applyStash restores. popStash applies and removes. dropStash removes without applying. listStashes returns entries in order. |
| Config | Parse .git/config INI format. getConfig reads values. setConfig writes values (creates section if needed). deleteConfig removes entries. Multi-level keys (e.g. remote.origin.url). |
| Ignore | .gitignore pattern matching: globs, negation (!), directory-only patterns (dir/), nested .gitignore files, comments, blank lines. |
| Reset | soft: moves HEAD only. mixed: moves HEAD and resets index. hard: moves HEAD, resets index and working tree. Path-specific reset unstages files without moving HEAD. |
| Ref resolution | Short names expand to full refs (main → refs/heads/main). Ambiguous refs follow git precedence order. OID prefixes expand to full OIDs. listRefs filters by prefix. setUpstream writes tracking config. |
| Remotes | addRemote writes to config. deleteRemote removes config section and tracking refs. listRemotes reads config. Throws ALREADY_EXISTS for duplicate remote names. |
| Transport — HTTP | Smart HTTP v1/v2 request/response formatting. Reference discovery parsing. Packfile negotiation (want/have). Authentication header injection. Error handling for non-200 responses. |
| Error codes | Each git error code (NOT_A_GIT_REPO, ALREADY_EXISTS, NOT_FOUND, DIRTY_WORKING_TREE, NOTHING_TO_COMMIT, NOTHING_TO_STASH, DETACHED_HEAD, CURRENT_BRANCH, NON_FAST_FORWARD, TRANSPORT_ERROR, INVALID_OBJECT, UNRESOLVED_CONFLICT, BRANCH_NOT_FULLY_MERGED, NOT_MERGING, REPO_NOT_EMPTY) is triggered by at least one test. |
Integration tests verify end-to-end git workflows against a real LocalAdapter (temp directory) to ensure compatibility with canonical git.
| Area | What to test |
|---|---|
| Init + commit + log | initRepository → write files → add → commit → log. Verify the resulting .git directory is a valid git repository readable by git log. |
| Canonical git compatibility | Repositories created by @catmint-fs/git can be read by the git CLI (git log, git status, git diff). Repositories created by the git CLI can be opened and operated on by @catmint-fs/git. |
| Branch lifecycle | Create branch → checkout → commit → checkout main → merge → delete branch. Verify refs and working tree at each step. |
| Merge with conflicts | Create diverged branches → merge → resolve conflicts → commit. Verify the merge commit has two parents and correct content. |
| Clone + fetch + push | Clone a repository (using a local transport or mock HTTP server), make changes, push back. Verify the remote has the new commits. Verify fetch retrieves new remote commits. |
| Shallow clone | Clone with depth: 1. Verify only one commit is present. Fetch to deepen. |
| Tags | Create lightweight and annotated tags. Push tags. Verify tags are readable by git tag -l and git show. |
| Stash round-trip | Modify files → stash → verify clean working tree → pop → verify modifications restored. |
| Diff accuracy | Generate diffs and compare output against git diff on the same repository. |
| SQLite-backed git | Run the same lifecycle tests with a SqliteAdapter layer. Verify the git operations produce the same results regardless of adapter. |
| Large repository | Clone or create a repository with 1000+ files and 100+ commits. Verify all operations complete correctly and performantly. |
packages/
core/
tests/
unit/
ledger.test.ts
layer-read.test.ts
layer-write.test.ts
layer-delete.test.ts
layer-rename.test.ts
layer-symlink.test.ts
layer-permissions.test.ts
layer-case-sensitivity.test.ts
changes.test.ts # getChanges / getChangeDetail
apply.test.ts # apply ordering, best-effort, transaction
dispose-reset.test.ts
integration/
local-adapter.test.ts
stress/
eloop.test.ts
large-files.test.ts
many-changes.test.ts
concurrent.test.ts
sqlite-adapter/
tests/
integration/
sqlite-adapter.test.ts
import-export.test.ts
sqlite-transactions.test.ts
persistence.test.ts
git/
tests/
unit/
object-db.test.ts # Loose objects, SHA-1 hashing, zlib
packfile.test.ts # Packfile read/write, delta resolution
ref-store.test.ts # Loose refs, symbolic refs, packed-refs
index-file.test.ts # Index binary format v2 parse/serialise
staging.test.ts # add, unstage, remove, status
commit.test.ts # Commit creation, log walking
branch.test.ts # Branch CRUD, checkout
tag.test.ts # Lightweight and annotated tags
merge.test.ts # Fast-forward, merge commit, conflicts
diff.test.ts # Diff engine — hunks, renames, binary
stash.test.ts # Stash push/pop/apply/drop/list
config.test.ts # .git/config parse and write
ignore.test.ts # .gitignore pattern matching
reset.test.ts # Soft, mixed, hard, path-specific
refs.test.ts # Ref resolution, expansion, listing
remote.test.ts # addRemote, deleteRemote, listRemotes
transport-http.test.ts # Smart HTTP protocol formatting and parsing
integration/
local-git.test.ts # Full lifecycle against LocalAdapter
compat.test.ts # Canonical git CLI compatibility
clone-fetch-push.test.ts # Remote operations with mock server
sqlite-git.test.ts # Git operations on SqliteAdapter layer
stress/
large-repo.test.ts # 1000+ files, 100+ commits
- A user can create a layer, perform file operations, inspect changes, and apply or dispose — all without unexpected side effects on the backing filesystem.
- Permission errors surface at operation time, not at apply time (when the adapter supports permissions).
- Fall-through reads are transparent — the user does not need to know whether a file is virtual or adapter-backed.
- The API is fully typed with no `any` escapes in the public surface.
- Unit tests cover all change semantics and edge cases listed above.
- A custom `FsAdapter` implementation can be plugged in and used without modifying any `@catmint-fs/core` internals.
- The `LocalAdapter` passes all tests. Third-party adapters can reuse the same test suite by providing their adapter instance.
- The `SqliteAdapter` passes the shared adapter test suite and correctly handles file-backed and in-memory databases.
- `importFrom` and `exportTo` on `SqliteAdapter` produce a faithful round-trip of a host directory tree.
- A user can initialise, clone, and open git repositories through `@catmint-fs/git` using any adapter supported by `@catmint-fs/core`.
- Repositories created or modified by `@catmint-fs/git` are fully compatible with the canonical `git` CLI — `git log`, `git status`, `git diff`, `git push`, `git pull` all work correctly against them.
- Repositories created by the canonical `git` CLI can be opened and operated on by `@catmint-fs/git` without data loss or corruption.
- The full git lifecycle (init → add → commit → branch → merge → push/fetch) works end-to-end on both `LocalAdapter` and `SqliteAdapter`.
- The `GitTransport` interface allows third-party transports to be plugged in without modifying `@catmint-fs/git` internals.
- All git error codes have at least one test that triggers them.
- `@catmint-fs/core` (layer API and `FsAdapter` interface) and `@catmint-fs/git` produce no errors when bundled for browser environments — no Node.js-only imports (`fs`, `path`, `crypto`, `zlib`, `stream`) appear in the portable API surface.
- A full git lifecycle (init → add → commit → branch → merge → clone → push/fetch) works in a browser environment with a browser-compatible adapter.