Replies: 9 comments 16 replies
One possibility:
I think it is stored in DETS. The use of eleveldb is as the backend for the hashtree, and currently metadata reconciliation depends on the hashtree.
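To make that split concrete, here is a hedged sketch of the arrangement described above; the module, table, and file names are invented for illustration, this is not the actual riak_core code:

```erlang
%% Hypothetical sketch of the storage split (names are illustrative).
-module(metadata_storage_sketch).
-export([store/3]).

store(FullPrefix, Key, Object) ->
    %% Metadata objects: a per-node DETS file keyed on {Prefix, Key}.
    {ok, Tab} = dets:open_file(cluster_meta,
                               [{file, "cluster_meta.dets"}]),
    ok = dets:insert(Tab, {{FullPrefix, Key}, Object}),
    %% The hashtree over these keys is maintained separately, with
    %% eleveldb as its segment store - that dependency, not the
    %% metadata storage itself, is what ties reconciliation to eleveldb.
    ok = dets:close(Tab).
```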
I assume there's a good reason not to simply swap eleveldb for leveled?
Linking with previous discussion - #2
Not only will the cluster's bucket-type have an

All this to say, do we want to keep the behavior the same? As you know, my preference is that we just support bucket-types in configs, so they're ready at start-up, with the ability to reload config during runtime. I do not like the opaqueness of having bucket-types configured at run-time. But I can create another discussion for that, since that's a much larger change in terms of implementation and product feature.
I've looked a bit again at riak_core_broadcast. The behaviour it expects of a handler is defined by the following callbacks:

```erlang
%% Return a two-tuple of message id and payload from a given broadcast
-callback broadcast_data(any()) -> {any(), any()}.

%% Given the message id and payload, merge the message in the local state.
%% If the message has already been received return `false', otherwise return `true'
-callback merge(any(), any()) -> boolean().

%% Return true if the message (given the message id) has already been received.
%% `false' otherwise
-callback is_stale(any()) -> boolean().

%% Return the message associated with the given message id. In some cases a message
%% has already been sent with information that subsumes the message associated with the given
%% message id. In this case, `stale' is returned.
-callback graft(any()) -> stale | {ok, any()} | {error, any()}.

%% Trigger an exchange between the local handler and the handler on the given node.
%% How the exchange is performed is not defined but it should be performed as a background
%% process and ensure that it delivers any messages missing on either the local or remote node.
%% The exchange does not need to account for messages in-flight when it is started or broadcast
%% during its operation. These can be taken care of in future exchanges.
-callback exchange(node()) -> {ok, pid()} | {error, term()}.
```

There is nothing inherent within

There is then also the question of do we want to keep using

For the ring broadcasting/gossipping the ring,

For the purpose of metadata exchange is the relatively simple approach of
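To make the shape of a handler concrete, here is a minimal, hypothetical skeleton that satisfies those callbacks. The state model (an ETS table keyed on `Key`, with a per-key monotonically increasing clock) is invented for illustration; it is not how riak_core_metadata_manager actually implements them:

```erlang
%% Hypothetical handler sketch: broadcasts are {Key, Clock, Value},
%% the message id is {Key, Clock}, and seen state lives in an ETS table.
-module(example_broadcast_handler).
-export([broadcast_data/1, merge/2, is_stale/1, graft/1, exchange/1]).

-define(TAB, example_broadcast_state).

broadcast_data({Key, Clock, Value}) ->
    {{Key, Clock}, Value}.

merge({_Key, _Clock} = MsgId, Value) ->
    case is_stale(MsgId) of
        true  -> false;
        false ->
            {Key, Clock} = MsgId,
            true = ets:insert(?TAB, {Key, Clock, Value}),
            true
    end.

is_stale({Key, Clock}) ->
    case ets:lookup(?TAB, Key) of
        [{Key, Stored, _V}] -> Stored >= Clock;
        []                  -> false
    end.

graft({Key, Clock}) ->
    case ets:lookup(?TAB, Key) of
        [{Key, Clock, Value}]               -> {ok, Value};
        [{Key, Newer, _V}] when Newer > Clock -> stale;
        _                                   -> {error, not_found}
    end.

exchange(_Peer) ->
    %% A real implementation would reconcile missing messages with the
    %% peer in the background; here it is a no-op stub.
    Pid = spawn(fun() -> ok end),
    {ok, Pid}.
```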
Following the virtual meet-up on 9th April 2025, the rough plan is:
Some points that were raised:
Can we make sure that the logs contain lines for when new metadata is loaded, showing conflicts with the locally stored metadata, the source of the winning metadata, and what the winning metadata was? Specifically, I am thinking of conflicting config files on different nodes and being able to track down and synchronise them.

- For no changes, having a single line confirming that:
- For new metadata, having a single line per metadata item with the new item's details:
- For updated metadata, having a single [warn] line per metadata item with the updated item's changes:
- On the node that actually has the changed config, making it clear whether it came from the config file or from the CLI:
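The example lines that originally followed each colon did not survive extraction. Purely as an invented illustration of the kind of output being asked for (formats, prefixes, and property names are all made up, not an agreed design), the four cases might look like:

```
[info] metadata exchange with node2@host: no changes
[info] metadata new: {core, bucket_types}/mytype (source: node2@host)
[warn] metadata updated: {core, bucket_types}/mytype n_val 3 -> 5 (winner: node2@host)
[info] metadata local change: {core, bucket_types}/mytype (source: riak.conf)
```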
Updated following the online discussion on 16/04/25.

### Scope

The scope of the change was discussed. In essence, two options:

The assumption is we will do (1), but there is still an opportunity to offer feedback in the next couple of days. The main drivers for choosing (1) were ease of transition, and potential "Chesterton's Fence" issues with a bigger rewrite (particularly needing to know more about the security metadata). Halfway-house options are complicated, and not desirable: e.g. it is hard to remove dvvset without impacting the use of the dvv context in the messageID definition for

### Requirements for exchange v2

Some requirements were stated for a v2 exchange:

### Transition of bucket metadata

There was discussion about whether the existence of

Nothing was agreed about this, or whether there needs to be some action on documenting/simplifying how to use

### Resolving sibling metadata

There is no action required on the use of LWW rather than the

### Replicating bucket properties

Acknowledgement that not replicating has operational issues, but replicating also has an impact (in particular with regard to reconciling conflicting values, and perhaps also deliberately running different values for backup clusters). No action to be taken for now.

### Simplifying properties

The main candidate put forward for a bucket property to be removed due to hidden complexity was

### Setting properties via configuration

A non-trivial decision, and so discussion postponed for another day.
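On last-write-wins sibling resolution, the general shape can be sketched as below. This is an invented illustration under assumptions (siblings represented as `{Timestamp, Value}` pairs, highest timestamp wins); it is not the riak_core_metadata resolver:

```erlang
%% Hypothetical LWW resolver: pick the sibling with the greatest timestamp.
-module(lww_sketch).
-export([resolve/1]).

resolve([First | Rest]) ->
    {_Ts, Winner} =
        lists:foldl(
          fun({Ts, _V} = Sib, {AccTs, _AccV} = Acc) ->
                  case Ts > AccTs of
                      true  -> Sib;
                      false -> Acc
                  end
          end,
          First,
          Rest),
    Winner.
```

For example, `resolve([{1, a}, {3, c}, {2, b}])` returns `c`. Note this silently discards the losing values, which is exactly the trade-off with a dvvset-based merge.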
As of OpenRiak 3.2.5, cluster metadata is stored in eleveldb, an increasingly untenable situation as the code ages. So, how do we migrate to a new storage mechanism, and what should that mechanism be?
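One possible migration shape, sketched under assumptions: an offline, one-shot fold of every key/value pair out of the old eleveldb instance into a DETS table (or whatever new backend ends up behind the same interface). `eleveldb:open/2`, `eleveldb:fold/4`, and `eleveldb:close/1` are the real eleveldb calls; the module name, paths, table name, and the offline-copy approach itself are illustrative assumptions, not a proposed design:

```erlang
%% Hedged sketch: copy all K/V pairs from an eleveldb store into DETS.
-module(migrate_sketch).
-export([run/2]).

run(LevelDbPath, DetsFile) ->
    {ok, Db} = eleveldb:open(LevelDbPath, [{create_if_missing, false}]),
    {ok, Tab} = dets:open_file(migrated_meta, [{file, DetsFile}]),
    %% Fold every {Key, Value} pair into the DETS table.
    ok = eleveldb:fold(Db,
                       fun({K, V}, Acc) ->
                               ok = dets:insert(Tab, {K, V}),
                               Acc
                       end,
                       ok,
                       []),
    ok = dets:close(Tab),
    eleveldb:close(Db).
```

A real migration would also have to carry over (or rebuild) the hashtree state, and decide whether the copy happens offline or via a read-through during normal operation.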