This sounds cool, but so far everything unexplained is explained by the word 'random'. I could argue all day that nothing is random. So if a vault is testing an upgrade at random periods, all you are testing is that the updated version (B) can talk the same language as the previous version (A). That's what interfaces and unit tests can already tell you. If (B)'s behaviour changes, (A) is never going to be able to agree with (B), so the network can't upgrade until enough people are randomly using (B), reinforcing that (B) has the correct behaviour (this will take longer than reaching 50% adoption because they are all only randomly trying the new behaviour). So the tipping point is still basically adoption. Unless the change has no impact on behaviour, in which case the upgrade is seamless anyway.
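To make the "same language" point concrete, here is a minimal sketch of what an interface plus a unit test already proves without any in-network random trials. The trait, struct and method names are hypothetical, not from the actual codebase:

```rust
// Hypothetical shared interface that both versions must implement.
trait VaultProtocol {
    fn respond(&self, request: &str) -> String;
}

struct VersionA;
struct VersionB;

impl VaultProtocol for VersionA {
    fn respond(&self, request: &str) -> String {
        format!("ack:{request}")
    }
}

impl VaultProtocol for VersionB {
    // If B's behaviour diverges here, A will never agree with it,
    // no matter how many randomly-timed trials the network runs.
    fn respond(&self, request: &str) -> String {
        format!("ack:{request}")
    }
}

#[test]
fn b_speaks_the_same_language_as_a() {
    assert_eq!(VersionA.respond("get_chunk"), VersionB.respond("get_chunk"));
}
```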
So at a technical level you are going to use dynamic linking to provide updates? A bit like plugins, I guess. Are you only going to keep the previous and the "update" version loaded?
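For reference, here is the kind of plugin-style dynamic loading I am asking about, sketched with the `libloading` crate. This is only my guess at the mechanism, and the library path and symbol name are hypothetical:

```rust
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the candidate update as a shared object alongside the running version.
    let update = unsafe { Library::new("./libvault_update.so")? };

    // Resolve the same entry point the current version already exposes.
    let handle_message: Symbol<unsafe extern "C" fn(u32) -> u32> =
        unsafe { update.get(b"handle_message")? };

    // Calls could then be routed to either the old code or the loaded update.
    let reply = unsafe { handle_message(42) };
    println!("update replied with {reply}");
    Ok(())
}
```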
@dirvine honestly I'm just trying to work out how you expect aspects of this network to actually work as intended. I feel the resources describing these things are limited. They really should be aimed at 'look how simple this is', not 'look how clever we are'.
Yes, but which nodes do you pick from the ones that are waiting? I know, I know: the answer is random.
Okay, agreed, any device with an output is a server. Yet the network cannot be expected to work efficiently with 100 home laptops, both consuming and serving data. You are going to need servers, so let's update that definition: an always-on computing device with dedicated resources for the sole purpose of serving others. You clearly expect these to be an important part of the network, because node ageing builds trust based on 'on time'.
You totally just reminded me of a lecture I attended at uni; it was based on the same argument with a different example.