
@UkoeHB UkoeHB commented Dec 3, 2021

No description provided.


cbeck88 commented Dec 3, 2021

In Stellar, instead of Tombstone blocks, they have "time bounds" for every submitted transaction:

> Time Bounds
>
> The optional UNIX timestamp (in seconds), determined by ledger time, of a lower and upper bound of when this transaction will be valid. If a transaction is submitted too early or too late, it will fail to make it into the transaction set. maxTime equal 0 means that it’s not set. We highly advise for all transactions to use time bounds, and many SDKs enforce their usage. If a transaction doesn’t make it into the transaction set, it is kept around in memory in order to be added to the next transaction set on a best-effort basis. Because of this behavior, we highly advise that all transactions are created with time bounds in order to invalidate transactions after a certain amount of time, especially if you plan to resubmit your transaction at a later time.

And this is checked using the concept of ledger time.

More about this here: https://stellar.stackexchange.com/questions/1852/transaction-created-at-and-ledger-close-time

And here: https://stellar.stackexchange.com/questions/632/how-are-timestamps-deemed-invalid
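The quoted behavior can be sketched as a simple validity predicate. This is only a reading of the documentation above; the function and parameter names are mine, not stellar-core's:

```python
def tx_time_bounds_valid(min_time: int, max_time: int,
                         ledger_close_time: int) -> bool:
    """A transaction is valid only while the ledger close time falls
    inside its [min_time, max_time] window; max_time == 0 means
    'no upper bound'."""
    if ledger_close_time < min_time:
        return False  # submitted too early
    if max_time != 0 and ledger_close_time > max_time:
        return False  # submitted too late
    return True
```

Note that validity is judged against ledger time (the agreed close time), not any single node's local clock.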

Your approach involves something more complicated, where nodes propose and vote on block timestamps.

Can you compare your approach with how timestamps actually work in Stellar Consensus Protocol (per the Stellar foundation)? Why should we not just adopt their approach?


cbeck88 commented Dec 4, 2021

Adding additional rounds of voting may have nontrivial consequences for the performance of the network and transaction finality times. If there is a simpler way that avoids that somehow, or folds the agreement on timestamps into the messages that the protocol is already sending, then it seems like that would be preferable.


UkoeHB commented Dec 4, 2021

So I believe a timestamp is deemed invalid if it is less than the previous close time, or more than a minute (a hard-coded value) ahead of the node's local clock.
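That rule, as I understand it, would look roughly like this (the constant and names are hypothetical, not taken from stellar-core):

```python
ONE_MINUTE = 60  # seconds; hypothetical hard-coded drift allowance

def close_time_plausible(candidate: int, prev_close_time: int,
                         local_now: int) -> bool:
    """A candidate close time must not go backwards relative to the
    previous ledger, and must not run more than a minute ahead of
    this node's local clock."""
    if candidate < prev_close_time:
        return False  # older than the previous close time
    if candidate > local_now + ONE_MINUTE:
        return False  # too far ahead of local time
    return True
```

The second branch is the problematic one: it makes validity depend on `local_now`, which differs from node to node.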

Using a local judgment of validity (local time) violates the eventual liveness guarantee of SCP (different opinions about 'current time' can leave parts of the network on different sides of a valid/invalid question, with no way to progress without manual intervention). My proposal does not violate that guarantee. If there is a simpler way to satisfy that guarantee, I would be happy to update the proposal.
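A minimal, self-contained illustration of the divergence being described (the drift constant and clock values are hypothetical):

```python
DRIFT = 60  # seconds; hypothetical hard-coded allowance

def plausible(candidate: int, local_now: int) -> bool:
    """Local judgment: a candidate timestamp is acceptable only if it
    is at most DRIFT seconds ahead of this node's own clock."""
    return candidate <= local_now + DRIFT

candidate = 1_000_100
node_a_clock = 1_000_050  # candidate is 50s ahead: node A accepts
node_b_clock = 1_000_030  # candidate is 70s ahead: node B rejects
```

Both nodes apply the rule honestly, yet they land on opposite sides of the valid/invalid question, which is exactly the situation that can stall agreement.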

> Adding additional rounds of voting may have nontrivial consequences for the performance of the network and transaction finality times. If there is a simpler way that avoids that somehow, or folds the agreement on timestamps into the messages that the protocol is already sending, then it seems like that would be preferable.

Timestamps would travel alongside transaction statements, so a large part of the 'consensing on timestamps' occurs while 'consensing on transactions'. However, you are correct that on average there will be more rounds (exactly how many is very context-dependent).
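As a rough sketch of what "timestamps traveling alongside transaction statements" might look like, assuming a nomination value pairs a transaction set with a proposed close time (all types and the combine rule here are hypothetical, not taken from the proposal):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NominationValue:
    """A candidate value voted on in one round: the transaction set
    and a proposed close time travel together."""
    tx_hashes: frozenset  # hashes of proposed transactions
    close_time: int       # proposed block timestamp (UNIX seconds)

def combine(values):
    """One plausible combine rule: union the transaction sets and take
    the maximum proposed close time, so settling on the transactions
    also settles the timestamp in the same messages."""
    tx = frozenset().union(*(v.tx_hashes for v in values))
    ct = max(v.close_time for v in values)
    return NominationValue(tx, ct)
```

Under a rule like this, most of the timestamp agreement piggybacks on rounds the protocol already runs; extra rounds are only needed when candidate close times themselves fail to converge.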


cbeck88 commented Dec 4, 2021

> Using a local judgment of validity (local time) violates the eventual liveness guarantee of SCP (different opinions about 'current time' can leave parts of the network on different sides of a valid/invalid question, with no way to progress without manual intervention). My proposal does not violate that guarantee. If there is a simpler way to satisfy that guarantee, I would be happy to update the proposal.

I wonder how Stellar's code around this actually works -- I doubt they require manual intervention when local timestamps get out of sync? I am still trying to find more precise documentation around this.

We should have the answers to questions like that before we seriously consider implementing an MCIP like this, IMO.


UkoeHB commented Dec 10, 2021

> I wonder how Stellar's code around this actually works -- I doubt they require manual intervention when local timestamps get out of sync? I am still trying to find more precise documentation around this.

IIRC their hard fork protocol also violates the eventual liveness guarantee, and their solution is 'if you don't upgrade in lockstep, don't blame us if there is a problem' (i.e. you need manual intervention to recover). So, it would not surprise me if manual intervention is their solution to timestamp issues (which they probably 'assume' will never happen in practice - an assumption I will not accept for my analysis).

@nick-mobilecoin

Closing this in favor of #67
Some text was taken from this for #67, but the overall approach is different.
