So all this is happening under the stable set work, but is there still a stable set concept? I thought the stable set was composed mainly of trusted, stable Elders. So will nodes with consistent uptime in a close group, which would have become Elders under node aging, just be considered archive nodes?
Fabulous update! And @dirvine’s extensive explanations in the stream are really, really valuable. Looks like it’s starting to come together for real.
It’s no longer needed. It was a way to stabilise membership for consensus. Now we just use good old group consensus. We just repurposed the repo as a landing zone for all this work, but we will likely cut it all over to the main repo in just over a week’s time.
Okay, this was my initial assumption, but then I remembered the mention of archive nodes being easy to add now, and wondered if the stable set was the way to differentiate archive nodes.
What boggles my mind now is how much nodes need to be trusted and what will qualify a node to be an archive node. It seems the weighting of trust or good behavior is gone now that node aging has been stripped out.
If there is enough redundancy in the data, and the network can handle replication under heavy churn, then there is really no concern, correct? It’s that simple, if I’ve got it right.
Pretty amazing if true.
This is a neat thing. We don’t need to trust an archive node. We ask for the data, and the data is either self-validating (chunks) or client-signed and DBC-backed.
The archive node has it or does not have it, but it cannot create it (unless it buys it).
To get these on the libp2p network, we just have nodes advertise themselves as archive nodes; they get every chunk stored on them and can retrieve it for us.
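Roughly, the self-validating property looks like this; a minimal Rust sketch, assuming a SHA-256 content address (the real network’s hash and chunk types will differ):

```rust
// Minimal sketch: a chunk is self-validating because its address
// is just the hash of its content. The hash choice (SHA-256) and
// types here are illustrative, not the real network's.
use sha2::{Digest, Sha256};

struct Chunk {
    address: Vec<u8>, // claimed content address
    content: Vec<u8>,
}

/// Anyone can check a chunk locally; no trust in the serving node needed.
fn verify(chunk: &Chunk) -> bool {
    Sha256::digest(&chunk.content).to_vec() == chunk.address
}

fn main() {
    let content = b"some chunk data".to_vec();
    let chunk = Chunk {
        address: Sha256::digest(&content).to_vec(),
        content,
    };
    assert!(verify(&chunk)); // a forged or tampered chunk fails this check
}
```

Since the check is local, it doesn’t matter who served the chunk; a tampered copy simply fails verification.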
The same goes for DAG audit nodes, except here they would gossip any doublespend attempts to the whole network, thereby invalidating the DBC and helping to ensure it is unspendable. So it’s a backup, if you like; one we likely don’t need, but for the super-careful types it will be beneficial to know there are layers of security above plain group consensus.
i.e. we place our trust in the data, but not in individual nodes.
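For illustration, a doublespend detector along these lines could be as simple as remembering one signed spend per DBC id and flagging any conflicting second one. Everything below is hypothetical naming, not the real DBC API:

```rust
// Hypothetical sketch of the audit idea: two different signed spends
// for the same DBC id are proof of a doublespend attempt, and that
// pair is what gets gossiped. Types and names are made up for
// illustration only.
use std::collections::HashMap;

type DbcId = [u8; 32];
type TxHash = [u8; 32];

#[derive(Clone)]
struct SignedSpend {
    dbc_id: DbcId,
    tx_hash: TxHash, // the transaction this spend commits to
}

#[derive(Default)]
struct AuditNode {
    seen: HashMap<DbcId, SignedSpend>,
}

impl AuditNode {
    /// Returns the conflicting pair to gossip if this spend doublespends a DBC.
    fn observe(&mut self, spend: SignedSpend) -> Option<(SignedSpend, SignedSpend)> {
        match self.seen.get(&spend.dbc_id) {
            Some(prev) if prev.tx_hash != spend.tx_hash => {
                // Broadcast both spends; the conflicting evidence makes
                // the DBC unspendable everywhere.
                Some((prev.clone(), spend))
            }
            Some(_) => None, // same spend seen again, nothing to do
            None => {
                self.seen.insert(spend.dbc_id, spend);
                None
            }
        }
    }
}
```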
Any idea how much the API will change in the coming weeks? I was about to take @happybeing’s hints and attempt to redo gooey in Rust using the API instead. It is going to be a major learning curve for me, so I don’t want to be banging my head unnecessarily.
What’s the best place to start looking at the APIs currently available?
Well, almost anyone would be better equipped to answer that, as I have never tried to use an API. I guess clues can be taken from the CLI and here.
I love this update and the explanation. I feel sorry for you and the team that you had to go through so many twists and turns to get to where you are now, but I’m sure it was worth it. It was never going to be a linear process like building a wall or walking a known trail. More like a jungle, it seems!
I also love that there will be no more distinction between nodes that have been online for a long time and those that haven’t. I’d been thinking for a while that, as things stood, after a few years the vast majority of Elders would be running in datacentres or on a cloud service, operated by people very experienced in keeping infrastructure running. The centralisation would be extreme and would make the system vulnerable to attack, continental-scale internet disconnections and the failure of entire AWS Regions (other cloud services are available).
Power cuts, hardware failure and house moves would keep all the ‘normal’ home users down at Adult or below, to say nothing of the difficulty of perhaps not being able to do an OS upgrade, or even a quick router firmware update, without losing age. There would be much less incentive for most potential users to add storage and compute to the network than if they can start earning right away and it doesn’t matter if their setup is offline for a bit.
I’m excited for you, the team and everyone here!
If it was me, I would start with the CLI and get familiar with the things you want to use, then look at how the CLI code does this. The CLI uses the Rust API so you may be able to cut and paste things you want to implement.
@Josh I doubt much will change, at least not in ways that are difficult to keep your code in sync with. You will learn a lot regardless, even if you give up in disgust, because most of your learning won’t be with the API but with everything else you build on top of it.
So stop procrastinating and bite that bullet!
I think the biggest ‘changes’ will be in things built to make the API easier to use. Not necessarily by MaidSafe; anyone starting with the API is likely to have to do extra work for common operations, which can be put in a library and shared to make things easier for those who follow.
I think that building gooey could be an excellent way to identify those operations and a step towards creating such a helper library.
I feel it will get simpler in many ways, but the network itself will do very little really. It’s gonna be the data type APIs that everyone will care about, and those are mostly client side.
I would give it a couple of weeks and then we can see the direction much of this goes in. There should be a massive focus on the API when we get this up and running.
Awesome update and explanations!
Will this be exposed and user controllable?
Does this mean the network still needs to be single protocol (IPv4)? Or does the use of libp2p open doors for dual-stack in the future?
It does, but that requires a lot of thinking, as nodes need to be able to contact each other. So if one node is IPv6 and the close group is all IPv4, the network breaks as of now. Later, though, we may have some clever ideas; us or the libp2p community, that is.
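For context on why mixed groups break: libp2p encodes the IP version in each peer’s multiaddr, so a v4-only node has no address it can dial on a v6-only peer. A small sketch using libp2p’s real `Multiaddr` type (the addresses themselves are just documentation-range placeholders):

```rust
// libp2p addresses name the IP version explicitly, so an IPv4-only
// node has nothing it can dial on an IPv6-only peer.
use libp2p::Multiaddr;

fn main() {
    let v4: Multiaddr = "/ip4/192.0.2.1/tcp/12000".parse().unwrap();
    let v6: Multiaddr = "/ip6/2001:db8::1/tcp/12000".parse().unwrap();
    println!("{v4}");
    println!("{v6}"); // unreachable from a v4-only node today
}
```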
This is a fantastic update. Waking up indeed. Perhaps it’s time to follow more closely once again.
Thank you for your input!
Indeed - will @Cantankerous ever post?
I am so jealous of that username, I want it for myself
Will @NotOpinionated do?
Why would you want it?
So folk know what to effin expect, ya dobber!!!
Would @VeryOpinionated work for you? Close to Cantankerous