This repository was archived by the owner on Jan 28, 2019. It is now read-only.

Transaction/Pipeline API #19

@arrdem

Description


Right now, all the Shelving stores more or less assume they have some in-memory state backed by a file that we may want to flush to eventually; hence the flush operation. For networked stores, it would be a good idea to have a transaction API that lets users batch writes for efficiency, especially since content hashing means we aren't dependent on a central system to dictate identifiers.

This would also open the door to extending Shelving's tuple model to record, as Datomic does, the transaction in which each tuple was created. Not something I have a use case for yet, but a possibly neat addition.
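As a rough illustration only: the shape below is hypothetical, and the :tx key is not part of the current tuple model.

```clojure
;; Hypothetical tuple shape — the :tx key does not exist today.
;; Each write would carry the id of the transaction that created it,
;; in the spirit of Datomic's fourth datom element.
{:spec  :demo/user
 :id    "content hash of the value"
 :value {:name "arrdem"}
 :tx    42}
```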

This suggests exposing some sort of write-batching endpoint for implementations to provide; a sketch follows below.
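As a starting point for discussion, here is a minimal sketch of what that endpoint could look like as a Clojure protocol. All the names here (TransactionalStore, open-tx, put-tx!, commit!, abort!, with-tx*) are made up for illustration and are not part of the existing API.

```clojure
(ns shelving.tx-sketch
  "Hypothetical write-batching API for Shelving stores; illustrative only.")

(defprotocol TransactionalStore
  "Stores that can stage writes and apply them as a single batch."
  (open-tx [store] "Return a transaction handle that buffers writes.")
  (put-tx! [tx spec val] "Stage a write in the transaction; nothing hits the store yet.")
  (commit! [tx] "Apply all staged writes to the backing store as one batch.")
  (abort!  [tx] "Discard all staged writes."))

(defn with-tx*
  "Run `f` with an open transaction on `store`, committing on success
  and aborting if `f` throws."
  [store f]
  (let [tx (open-tx store)]
    (try
      (let [result (f tx)]
        (commit! tx)
        result)
      (catch Throwable t
        (abort! tx)
        (throw t)))))
```

For a networked backend, commit! is where the batch would be shipped as a single pipeline/transaction instead of paying one round trip per write.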

Being able to choose abort vs. clobber when records conflict would be interesting.
Most of Shelving is really just the set monoid under addition, so transaction conflicts should be rare.
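If the sketch above grew a conflict policy, it could be as simple as an options map on commit!; this arity is likewise hypothetical.

```clojure
;; Hypothetical extra arity of commit! from the sketch above.
(commit! tx {:on-conflict :abort})    ; roll back the whole batch on any collision
(commit! tx {:on-conflict :clobber})  ; last write wins
```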

Labels: enhancement (New feature or request)
