To answer the immediate question as directly as possible: each mutable data item has a 'version' value that's incremented when it's changed. This is used (as neo says) like a flag for committing changes and for preventing duplicate simultaneous changes.
See routing mutable_data L370 for where entry_version is incremented when updates happen.

This is actually referred to in a comment below on L429:

"For updates and deletes, the mutation is performed only if the entry version of the action is higher than the current version of the entry."
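The version check described in that comment can be sketched roughly as follows. This is a minimal illustration, not the actual routing API; the names Value and apply_update are made up for the example.

```rust
// Illustrative stand-in for a mutable data entry with its version counter.
struct Value {
    content: Vec<u8>,
    entry_version: u64,
}

// The mutation is performed only if the action's entry version is higher
// than the current version of the entry; otherwise it's refused.
fn apply_update(current: &mut Value, new_content: Vec<u8>, new_version: u64) -> Result<(), &'static str> {
    if new_version > current.entry_version {
        current.content = new_content;
        current.entry_version = new_version;
        Ok(())
    } else {
        Err("entry version not higher than current version")
    }
}
```

Note how a duplicate of an already-applied update fails the check, which is what makes the version usable as a guard against duplicate simultaneous changes.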
Owner is changed by calling clear() then insert(new_owner) - see L571 fn change_owner.
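That clear()-then-insert pattern looks something like the sketch below. The key type here is a stand-in, not the real type used by the code.

```rust
use std::collections::BTreeSet;

// Stand-in for the real public key type that identifies an owner.
type PublicKey = [u8; 32];

// Hypothetical sketch of fn change_owner: the existing owner set is
// cleared, then the single new owner is inserted.
fn change_owner(owners: &mut BTreeSet<PublicKey>, new_owner: PublicKey) {
    owners.clear();          // drop the current owner(s)
    owners.insert(new_owner); // install the new owner
}
```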
Can the atomicity of this mechanism be broken? A detailed read of the code is required (though that read is not actually performed in this post, despite the title of the next section).
Explaining By Reading The Code
Now, there’s also the broader issue of how this actually works and why a single node can’t just change the value.
The key word when discussing updates to mutable data is accumulation.
The transaction starts life being signed by the client then broadcast to the network. This is a very ‘raw’ state, not having accumulated any network ‘credibility’ besides that of the client signature. It’s routed to the appropriate section of the network, where it gradually accumulates (or doesn’t accumulate) credibility from nodes in that section. That credibility comes in the form of a signature from each vault in the section saying the transaction is valid according to that vault.
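To make the accumulation idea concrete, here's a toy model - assumed names, not the real routing types: each vault in the section that judges the transaction valid adds its signature, and the transaction counts as accumulated once a quorum of distinct vaults have signed.

```rust
use std::collections::HashSet;

// Toy model of accumulation: vault ids stand in for real signatures.
struct Transaction {
    signatures: HashSet<String>,
}

impl Transaction {
    fn new() -> Self {
        Transaction { signatures: HashSet::new() }
    }

    // Each vault that judges the transaction valid adds its signature.
    fn add_signature(&mut self, vault_id: &str) {
        self.signatures.insert(vault_id.to_string());
    }

    // The transaction has accumulated enough credibility once a quorum
    // of distinct vaults in the section have signed it.
    fn has_accumulated(&self, quorum: usize) -> bool {
        self.signatures.len() >= quorum
    }
}
```

The HashSet means a single vault signing twice gains nothing - credibility only grows with distinct signers.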
The place in the code where this happens is the vault's mutable_data_cache; the whole file is short and easy to read, so there are no particular methods to highlight.
The decision by each individual vault whether or not to sign off on the credibility of a particular mutation happens in fn validate_concurrent_mutations.
This decision starts life in each vault as a ‘pending write’ - see fn insert_pending_write.
"Inserts the given mutation as a pending write. If the mutation doesn't conflict with any existing pending mutations and is accepted (rejected is false), returns MutationVote to send to the other members of the group. Otherwise, returns None."
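The quoted behaviour can be sketched roughly like this. These are simplified stand-ins for the vault types; the real insert_pending_write differs in detail.

```rust
use std::collections::HashMap;

// Simplified stand-in for the vote sent to other section members.
struct MutationVote {
    data_name: String,
    version: u64,
}

// Simplified pending-write cache: data name -> pending version.
struct Cache {
    pending: HashMap<String, u64>,
}

impl Cache {
    // Inserts the mutation as a pending write. Returns Some(vote) only if
    // it doesn't conflict with an existing pending mutation and isn't
    // rejected; otherwise returns None and no vote is broadcast.
    fn insert_pending_write(&mut self, name: &str, version: u64, rejected: bool) -> Option<MutationVote> {
        if rejected || self.pending.contains_key(name) {
            return None;
        }
        self.pending.insert(name.to_string(), version);
        Some(MutationVote { data_name: name.to_string(), version })
    }
}
```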
The caching of mutable data by each vault in the section is what prevents any single vault from changing it - all the other vaults in the section would reject any future mutation, since it wouldn't match their cached version. (Yes, a can of worms has just been opened.)
The whole file at vault/data_manager/cache.rs is worth reading to better understand the accumulation process.
The accumulation module is fairly stable and may also be worth reading.
Anyhow, these are some pointers to technical entry points if that's the desired approach. It's not an answer, more a finger pointing roughly in that direction.
Explaining By Analogy
Mutating data is a bit like rolling a big stone down a hill.
A transaction entering the network is considered ‘valid’ if it has the correct signature from the client. This is the first point of validation. It’s like getting the right stone and the right person to the top of the hill. Without that prerequisite, nothing further can happen.
The transaction is routed to the section responsible for it. This is like the person starting to push on the rock to roll it down the hill.
The first vault in the section to be handed the transaction checks whether it's valid both in itself and against the vault's own cached value of the mutable data object. This is like the rock slowly gathering momentum just off the top of the hill. Once the transaction reaches the section, it begins accumulating towards quorum (like the stone begins accumulating momentum).
If these checks pass, the vault sends the transaction with its signed approval to the other vaults in the section. Those vaults do the same checks and report their signed approval to all the other vaults in the section. The transaction incrementally accumulates credibility like the stone incrementally accumulates momentum rolling down the hill.
When the transaction reaches quorum, the mutation in cache is 'saved to disk' by the vaults responsible for it. Any future vault in the section not caching the new value will be treated as misbehaving. This is like the stone hitting a tree on the way down the hill - the inevitable conclusion of the journey.
A transaction is discrete when broadcast (it's a single data point), but the event of it being 'saved' is not - it's a gradual accumulation. Despite that, once the conclusion is reached it's conclusive.
There can be multiple transactions pending accumulation for the same mutable data at the same time (eg multiple stones simultaneously rolling down the hill aimed at the same tree). The one that reaches quorum first is the one that's committed.
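That race can be modelled as below - a toy sketch, assuming votes arrive one at a time: whichever candidate mutation first collects a quorum of votes is the one that commits.

```rust
use std::collections::HashMap;

// Given votes in arrival order (each vote names a candidate mutation),
// return the first candidate to reach quorum - the one that commits.
fn first_to_quorum(votes_in_arrival_order: &[&str], quorum: usize) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for &candidate in votes_in_arrival_order {
        let count = counts.entry(candidate).or_insert(0);
        *count += 1;
        if *count >= quorum {
            return Some(candidate.to_string());
        }
    }
    None // no candidate accumulated quorum
}
```

Competing mutations that lose the race are then rejected, since they no longer match the committed version in every vault's cache.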
Explaining By Changing Perspective
Consider some other questions that dig into the nuance of the transaction mechanism (these are equally applicable to understanding the nuances of blockchain transactions):
- What happens if multiple simultaneous transactions are competing to be committed (ie a race condition happens)?
- How irreversible are transactions once committed (eg in bitcoin, consider zero confirmations and orphan blocks)?
- How would someone other than the owner spend the coins?
- How can the consensus mechanism be broken, and what is the effect of that (eg in bitcoin, 51% and hardfork are two different attacks on consensus with different effects)?
- How are new coins created, and what prevents them being created illegally?
- Can coins be deleted or burned, and how would this happen?
There are answers to all these questions, but they're too long for this particular post. Use them as new ways of looking into the problem - they might shed similar light, from a different angle, on the question of 'how is data committed'.
Best of all (but technically challenging): try to break it. Run a private network and 'steal' some mutable data. It may well be possible; only testing will confirm it.