It has a lot to do with group consensus. Years back the routing team were adamant that group consensus was not enough. Then the network went down the path of sections. Sybil became an issue, so node age was created to prevent Sybil attacks. Then classical consensus was pushed, i.e. async binary consensus. I hated that stuff and passionately still do. It's complex and a very unnatural way to order the world. It caused massive tensions between me and some engineers in particular.
Then recently, looking much deeper at network security, the network's ability to create became a threat model fraught with danger. How do we secure that network, and how do we retain that security (hence the section chain)? More complexity.
DBCs, though, are interestingly almost a mirror of the early designs in Safe in 2006. We tried to patent that, not to protect it, but at the time patents were how we showed it was possible; however, you cannot patent a financial instrument in the UK. Some may remember when we said currency was just data, so secure data and you secure currency. Well, that's DBCs. Anyway, I digress. So DBCs with one small addition give us something quite powerful. It goes like this:
- Have a reason field in the DBC.
- Have a DBC identifier attached to data.
So the reason field is populated by the data name (it points to the payment, basically).
The data points back to the DBC.
That means we can have provably valid network data without a network signature. That was a big thing to discover a while back. @Anselme did that reason code a few weeks back.
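To make the mutual link concrete, here is a minimal sketch of the check. All type and field names are illustrative placeholders, not the real sn_dbc types:

```rust
// Sketch of the two-way link between a DBC and the data it paid for.
// Illustrative names only; not the actual Safe Network data structures.

#[derive(Debug, Clone, PartialEq)]
struct Dbc {
    id: String,     // unique DBC identifier
    reason: String, // populated with the name of the data paid for
}

#[derive(Debug, Clone, PartialEq)]
struct DataChunk {
    name: String,   // content address of the data
    dbc_id: String, // identifier of the DBC that paid for storage
}

// Data is provably paid for when the two references close the loop:
// the DBC's reason names the data, and the data names the DBC.
fn is_provably_paid(chunk: &DataChunk, dbc: &Dbc) -> bool {
    dbc.reason == chunk.name && chunk.dbc_id == dbc.id
}

fn main() {
    let dbc = Dbc { id: "dbc-123".into(), reason: "chunk-abc".into() };
    let chunk = DataChunk { name: "chunk-abc".into(), dbc_id: "dbc-123".into() };
    assert!(is_provably_paid(&chunk, &dbc));
}
```

The point is that validity falls out of the cross-reference itself, so no network signature is needed to establish it.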
Ok then: network-valid data not signed by the network. Therefore no need to keep chains of old decision makers (elders). In fact, elders' signatures cannot make data valid, so even old elder keys are useless, and in fact so are current elder keys.
This is good.
Onto a wee bit more.
So group consensus, the thing that could not be proven feasibly breakable and was pushed against all those years ago, needed some further work. I had been debating for a long time why it was good enough, why it works, and why it has never been proved incorrect. But that is a hard discussion. Hence my digressions into consensus assumptions a few weeks/months back (a 3f+1 node network with f Byzantine nodes etc. - those assumptions don't work).
So more researching and more digging. Then I spent time looking at Avalanche (the dude there called our network a scam at the ICO; I remember these things). Anyway, what does Avalanche use? "A Novel Metastable Consensus Protocol Family for Cryptocurrencies".
Let's break that down: "These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries."
That word probabilistic: so NOT classical total-order consensus, not 100% correct, but probabilistic. Further down, and what is it? Well, basically it's group consensus, but using the whole network as one group and then what they call subsampling (gossiping its way across the network).
Breaking that down further, it's group consensus. It works, it's running a network worth a ton of cash, and it's operational. So that means we can consider group consensus proven strong enough, but not only that: the way we do Kademlia means we can far outperform such networks.
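A toy simulation gives a feel for why subsampled voting converges. This is a deliberately simplified Snowball-style loop; the parameters, the synchronous rounds, and the tiny PRNG are all illustrative, not Avalanche's actual protocol:

```rust
// Toy metastable/subsampled consensus: each node polls a small random
// sample of peers every round and adopts the sample majority. A modest
// initial majority snowballs into (near-certain) unanimity.

// Minimal xorshift64 PRNG so the sketch has no external dependencies.
struct XorShift(u64);
impl XorShift {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

/// Returns how many of `n` nodes prefer YES after `rounds` of polling
/// `k` random peers each (sampling with replacement, synchronous rounds).
fn run_snowball(n: usize, k: usize, rounds: usize, initial_yes: usize, seed: u64) -> usize {
    let mut prefs: Vec<bool> = (0..n).map(|i| i < initial_yes).collect();
    let mut rng = XorShift(seed);
    for _ in 0..rounds {
        let mut next = prefs.clone();
        for node in 0..n {
            // Poll k peers chosen uniformly at random.
            let yes = (0..k)
                .filter(|_| prefs[(rng.next() % n as u64) as usize])
                .count();
            // Adopt the sample majority.
            next[node] = yes * 2 > k;
        }
        prefs = next;
    }
    prefs.iter().filter(|&&p| p).count()
}

fn main() {
    // Start with a 64% YES majority; repeated subsampling amplifies it.
    let yes = run_snowball(100, 5, 30, 64, 42);
    println!("nodes preferring YES after 30 rounds: {}", yes);
}
```

The guarantee is probabilistic, not absolute, which is exactly the trade the quoted paper makes: each poll can err, but the errors wash out as the majority compounds round over round.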
Deeper we go. So we have group consensus provably good enough
in the same way Bitcoin is good enough,
i.e. not 100% correct, but correct enough to be reliable for its users. This is key.
That was all forming as we looked through some quinn issues for QUIC; the issues we were looking at (IP-based PKI) had some interesting comments from the libp2p team. Looking at them, they have come a long way. They got QUIC as a first-class citizen last week. Doing that meant they had better hole punching and no longer needed stream muxing, the Noise protocol, or their TLS 1.2-on-TCP stuff. They still have those things, but we don't care. So they got what we wanted: QUIC plus DoS protection, hole punching, and a Kademlia implementation that works. Manna from heaven.
Put that all together and we have our data, our design, our registers, and our DBCs, plus a stable network layer.
i.e. We have the safe network.
Now, the jam.
Libp2p has focussed on multiformat addresses etc. so protocols, serialisation etc. and they have done this to allow upgrades in a p2p network (yip).
They also have service broadcasting and discovery, i.e. archive nodes can happen in days of work.
Plus it works with Ethereum, Avalanche, IPFS, Filecoin and more, so it's well tested. I believe we will find some improvements in kad as we deeply know that stuff, but it's good to give back. These folk have spent serious money on the network stack, and hats off to them; we always struggled there and had work to do, a lot of work. So this is a blessing.
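The multiformat-address idea is worth a quick illustration: the address itself names every protocol in the stack, so adopting QUIC is just a new path segment rather than a breaking change. The toy parser below only shows the shape of the format; real code would use the `multiaddr` crate that rust-libp2p uses, and the protocol subset here is an assumption for the sketch:

```rust
// Toy parser for multiaddr-style strings such as
// "/ip4/127.0.0.1/udp/4433/quic-v1". Illustrative only.

/// Whether a protocol segment carries a value (toy subset, not the
/// real multiaddr protocol table).
fn takes_value(proto: &str) -> Option<bool> {
    match proto {
        "ip4" | "ip6" | "tcp" | "udp" | "p2p" => Some(true),
        "quic-v1" | "ws" => Some(false),
        _ => None, // unknown to this toy subset
    }
}

/// Parse into (protocol, optional value) pairs; None on malformed input.
fn parse(addr: &str) -> Option<Vec<(String, Option<String>)>> {
    let mut parts = addr.split('/').skip(1); // skip the leading '/'
    let mut out = Vec::new();
    while let Some(proto) = parts.next() {
        if proto.is_empty() {
            return None;
        }
        if takes_value(proto)? {
            out.push((proto.to_string(), Some(parts.next()?.to_string())));
        } else {
            out.push((proto.to_string(), None));
        }
    }
    Some(out)
}

fn main() {
    // A QUIC address differs from a TCP one only in its segments.
    let quic = parse("/ip4/127.0.0.1/udp/4433/quic-v1").unwrap();
    assert_eq!(quic.len(), 3);
    println!("{:?}", quic);
}
```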
Summing it all up
The last few months have been hectic for the engineers. I have been lost in space with most of the above, frequently jumping in to say "hey, look at this" and so on. Last week it all came together.
This week we build the future, and the future it was always meant to be. We learned a lot; we did the Edison 10,000 ways it won't work, but there we are. Back in familiar territory with feckin' proof the ants work. So now the codebase gets super simple in terms of what we have to do. Assuming libp2p is as easy as it looks (and early results are very good), then this is going to be fast, very fast, and it's going to be stable, very stable.
I will soon sleep. To be honest, I have felt very strange in the last few days: excited beyond belief, then a bit bewildered about what we have done, then super excited again. I feel I could get back on the road and talk to folk about this again, the way I used to. I feel more than excited by the network and what we can do: the focus on API, apps, usability, integrations etc.
I am also massively excited by an important business function. Whereas Avalanche, Algorand, and other modern decentralised networks show what can be done and prove a lot (Algorand's verifiable random function is super cool), they focus on crypto: money, and integrating with other money, and so on. For us, though, it's data: what can we do now, what can we do with high-speed, secure, and private data?
I think there is no end to what we can do, and instead of focusing on lots of other projects, we can focus on data and our app devs and users, and app dev users, and so on. There is so much to be excited about that it is actually a bit overwhelming.
Sorry for the rant, but this has been the most painful 6 years of my life. The last 2 or 3, since we reduced the team, have given us an amazing team, but we tried to make those changes to Kademlia etc. work. It's the last few months that have shown us exactly how it all works, though, and last week it all became crystal clear. I feel for this current team: they pushed against a tide with little chance of overcoming the complexity, and now they don't need to. But what a tough, tough time they have had as we pushed so hard to get testnets working. Now we can, but I feel I did not do right by the team; we (I) should have realised this sooner.
Anyway, things are what they are. I have my strength back and the motivation of 100 to get this done right now. The whole team will be the very same. I have not even described the whole thing to everyone yet; it has been that recent.
This week is gonna be epic.