First of all, nice update as always. Keep up the good work!
What confuses me a little bit: when Parsec gets removed now, is it a big deal to get something new in place? Or is it not worth talking about because it does not really affect the timeline?
AIUI the something new is (nearly) already in place with CRDTs.
Onwards and upwards, you super devs. Look to the future, not the past.
There is a saying: “Blame Game is for losers, so suck it up or die like the useless piece of crud you are” and “Everything is everyone’s fault”.
I admire how you are always ready to “suck it up” and move forward. We have no losers here, for sure
Rather than the whole network, perhaps the neighbours, since when the network is complete it will have tens or hundreds of thousands of sections, maybe more when there are something like 200 million nodes (1 in 10 of the connected computers out there).
Has there been any recent thought on any ways that sections can validate other sections, so that the network is resilient to takeover of a single section?
The first line of defense here is that nodes get assigned to a random section, so the first step in an attack is that you must control enough nodes that random allocation gives you a possible majority somewhere. This threshold is a bit less than a majority share of the network. Your chances increase as you exceed this threshold.
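To put a rough number on that first step: assuming a fixed elder count per section (7 here, purely an illustrative figure, not taken from the post) and purely random allocation, the chance of landing an attacker majority in any one section is a simple binomial tail:

```python
from math import comb

def p_majority(frac_attacker, elders=7):
    """Chance that a section of `elders` randomly assigned nodes ends up
    with an attacker majority, when the attacker controls `frac_attacker`
    of all nodes (simple binomial model)."""
    need = elders // 2 + 1                      # smallest majority
    return sum(comb(elders, k)
               * frac_attacker ** k
               * (1 - frac_attacker) ** (elders - k)
               for k in range(need, elders + 1))

print(p_majority(1 / 3))   # ≈ 0.17 per section at a third of the network
print(p_majority(0.2))     # ≈ 0.03 at a fifth
```

The point is the sharp drop-off below the threshold — and even a lucky section placement still leaves the attacker needing every one of those nodes promoted all the way to elder.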
Then, you must be a good citizen for long enough to have your nodes promoted to adults, then elders. All of them, i.e. every one of the necessary nodes you control in the target section must reach elder status, which means outlasting the majority of existing elder nodes while faithfully serving data.
Then, once your nodes know that they are in the majority of a section, they can carry out actions for that section (mess with safe balances in that section?, corrupt encrypted data?, etc.) as long as they are not caught doing something against basic rules.
Validating other sections actually only marginally increases security in this scenario because if an attacker can take one section, they are likely near threshold in many other sections as well. The primary defense is that a large portion of the network must be taken, sustained with good behaviour, then is still limited to ‘allowed by consensus rules’ operation.
Managed to bring things back to life by trimming it down to a mere 225,000 layers.
I’ve been following this project for some time and I am a little confused by the latest announcement, even after reading all the comments. Please can someone help me out? If Parsec is being deprecated, does a new consensus algorithm need to be designed and proven to make the network work? If I recall, Parsec was the final piece that made everything ‘work’.
Not in detail, but the patterns in AT2 with gossip do open some doors there for network-wide consensus on certain issues. So there are likely ways to recover sections.
We have and always have had consensus. PARSEC is an ordering mechanism. It orders everything fully and we don’t need that level of total order for all things. In fact the data types we have already have the ability to cater for any order, same with section membership. So we don’t need the big guns here in terms of an order everything component. Where we need order we can have it baked into the type that needs order as opposed to having another layer that decides the order.
Hope that makes sense?
Thanks David,
So would it be fair to say essentially that the 1 consensus algorithm has been replaced by 3 methods of consensus:
CRDT : Consensus in maintaining integrity of Data
AT2 : Consensus in maintaining integrity of Safecoin
BLS : Consensus in validating trustworthiness of nodes
With Quic P2P replacing Crust last year, would this now mean that a much bigger chunk of the tech is based on stuff ‘off the shelf?’ I imagine that could certainly have benefits in and of itself, even if it’s a bit less glamorous.
That sounds pretty impressive!
That does make sense. Thank you
This is key and critical. As tech develops we must not stick to something we invented as the gold standard. I feel it’s always best to use what’s tested most. QUIC is a good example.
crdt - allows data updates to “merge” and stay consistent. So more consistency than consensus.
at2 - is touted as consensus == 1, but basically it offloads network consensus to the originator/client.
bls - allows key/signatures to be aggregated, so multisig.
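As a tiny illustration of the CRDT point — merging updates rather than ordering them — here is the textbook grow-only set, a generic sketch not tied to any Safe Network type:

```python
class GSet:
    """Grow-only set: the simplest state-based CRDT. Replicas can be
    updated independently and merged in any order; merge is set union,
    which is commutative, associative and idempotent, so every replica
    converges to the same state without any coordination."""

    def __init__(self):
        self.items = set()

    def add(self, item):
        self.items.add(item)

    def merge(self, other):
        # Union with the other replica's state; safe to repeat.
        self.items |= other.items

# Two replicas diverge, then merge in opposite orders — same result.
a, b = GSet(), GSet()
a.add("x")
b.add("y")
a.merge(b)
b.merge(a)
print(a.items == b.items == {"x", "y"})   # True: replicas converge
```

That merge property is why this buys "more consistency than consensus": no replica ever has to agree an order with the others to end up with the same data.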
Our consensus has actually always been group consensus. This is where a quorum of the group (elders) agrees an event. There is a lot of confusion around consensus, in all projects and communities I think. The word is overused. I break it down in my head like this.
There is an agreement required to “do stuff on a network” and that needs the agreement of the network participants. So let’s call this network consensus or agreement. How that is arrived at and, more importantly in my mind, verified is critical.
So we can have Parsec (or any total order algorithm, even Paxos etc.) say we agree this thing happened, then this, and use a quorum of members to agree that. I don’t know what it is, but it happened in that order. So we can agree as a group A then B then C, but we don’t know what they are. [Edit: Important here, you need the whole graph to validate that A was agreed to. We can get the elders to sign with a BLS key as well to keep cryptographic proof, but that slows Parsec down even more.]
Then we have CRDT or even AT2 (as they both do this check). Here, as A is requested, the elders check: is A valid? i.e. is the business logic sound? Then they check: does A have a direct dependency on the last item in the data? Then: is A signed by an identity authorised to change the data/make a payment etc.? If so then I as an elder will agree to this proposed change. You, the client who wishes to perform A, then receive all these agreements until you have a quorum of them. At that point you post A plus the quorum agreement that A is valid, and they will all apply that A change. [Edit: Note here each agreement is made with a BLS signature share, and aggregating them gives us a single correct signature. This is good as that signature is always provable, without a graph. We do it by referencing the section key chain, where each new set of members creates a new key that is signed by a quorum of the previous set, so the change carries provable cryptographic proof that there was agreement and it’s valid (i.e. it can be stored/republished or whatever we wish).]
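The flow just described can be sketched roughly as follows. This is only an illustration of the message pattern, not the real protocol: genuine BLS shares aggregate into one signature, whereas this stand-in (HMAC per elder) just collects individual shares, and the quorum size, field names and validation rules are invented for the example:

```python
import hashlib
import hmac

QUORUM = 5  # e.g. 5 of 7 elders -- an assumed figure for illustration

class Elder:
    def __init__(self, key: bytes):
        self.key = key

    def consider(self, proposal: dict, last_item: str, authorised: str):
        """Validate first, agree second: check the causal dependency and
        the signer, and only then return a signed agreement 'share'."""
        if proposal["depends_on"] != last_item:
            return None                      # stale / wrong dependency
        if proposal["signer"] != authorised:
            return None                      # not an authorised identity
        msg = repr(sorted(proposal.items())).encode()
        return hmac.new(self.key, msg, hashlib.sha256).hexdigest()

elders = [Elder(bytes([i]) * 16) for i in range(7)]
proposal = {"op": "append", "depends_on": "hash-of-prev", "signer": "client-pk"}

# The client gathers shares until it holds a quorum, then posts the
# proposal plus the shares as proof the section agreed.
shares = [s for e in elders
          if (s := e.consider(proposal, "hash-of-prev", "client-pk"))]
print(len(shares) >= QUORUM)   # True: enough agreement to apply A
```

Note the key inversion from a total-order log: the elders never talk to each other here — the client does the collecting, which is where the concurrency win in the next paragraphs comes from.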
This is the main difference, we validate the proposition then agree to it. That agreement includes any order (causal) requirements in the data itself (not overall in the network).
Where this buys us a lot is concurrency, we can be changing millions of bits of data at a time and not ordering all claims (even invalid claims) but instead passing the “effort” back to the client (where it belongs).
So we end up with a faster, more concurrent network that has way less to do, and any history/order is in each data item, not in a large graph representing all data and changes.
To be fair, CRDT types only really became formalised after 2011, even though the idea is pretty obvious in so many ways; the patents we wrote in 2006 had all these assumptions. Tech generally takes about 20-30 years to adopt change like that. We just push that a bit further, faster, but it can cause people hassle getting over the hill of thinking this way.
In this scenario how do you deal with bad elders or liars? IIRC, PARSEC had the ability to eject bad actors.
That was not switched on in Parsec; it made it way too slow. But when you think about it, you cannot say this change is OK when you don’t know what the change is. So with Parsec and opaque votes I can request to pay myself 1,000,000 safecoins from your wallet. As Parsec does not know what that means, it will say yep, we agree to this, it has Parsec consensus/order etc. Then later some other part of the system needs to find out: hey, this is invalid, we need to drop it — but the network said it was valid, so a downward cycle there.
However, with straightforward consensus as above, A is signed by the originator, so it cannot be tampered with. Then the question could be: what if the originator was colluding with some elders? That also fails, as you need quorum agreement; unless you get a quorum of elders to collaborate it fails, as should all algorithms in this field.
Bad Elders is a huge area we have been pushing now for a while, invalid messages can be detected and when signed (they need to be) they are irrefutable. So a bad elder saying anything provably bad is possible to not only prove but forward to other nodes to get agreement to penalise that elder.
This is an area where we will always look to handle better as we progress. It is a huge area to define what is bad and how bad is it. Initially dropping bad messages is OK, but we need to act on them quite aggressively I feel.
Thanks, that clears a few things up!
I like this idea of moving away from whole network consensus, it feels more like how we do things in real life.
In a broader sense, I think one of the problems with the internet is that we’re becoming conditioned to believe in this centralised ‘truth’, whilst at the same time having to deal with all these infinite ‘alternative’ truths, and that logical dissonance is more than most of us can deal with! It was better when we accepted that we have a limited sphere of knowledge and influence.
My one remaining question is the same as @jlpell asked, which I see you’ve now mostly answered. To ask perhaps a bit more about the most obvious element of it: does a node become an elder simply by consistently storing data for a length of time? What is the mechanism by which we know it is faithfully storing that data? I presume it has no responsibility for Safecoin or validating data changes.
A node becomes an elder by behaving over a long period of time. Initially mostly storing data and delivering that data on request, either to clients or to new joining nodes. The delivery needs to be provable so will initially go through the existing Elders to confirm it was delivered.
We also have a (not implemented in the current vault) mechanism that is quite cool. This is to confirm data is held and valid. There’s a load of fancy algorithms and verifiable delay functions and so on around, but it’s much simpler.
Elders agree to check A. So they take a bit of common “random” data (say the hash of the signed agree-to-check message) and send this to all holders. The holders prepend (to prevent hacks using the sponge property of modern hash algorithms) this to their copy of A and hash it. They return those hashes in an agreed timeframe and any who do not match can be penalised. We can also check by requesting the data ourselves and confirming its validity.
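A minimal sketch of that challenge-response check (the nonce source and the chunk are invented for illustration; in the design above the common random data would be the hash of the signed agree-to-check message):

```python
import hashlib
import os

def storage_proof(nonce: bytes, chunk: bytes) -> str:
    """A holder proves it still has `chunk` by hashing nonce + chunk.
    Because the nonce is fresh each round, a holder who only kept a
    precomputed hash of the chunk cannot answer -- it needs the data."""
    return hashlib.sha256(nonce + chunk).hexdigest()

chunk = b"some stored data"
nonce = os.urandom(32)                     # fresh per check round
expected = storage_proof(nonce, chunk)     # what honest holders compute

# An honest holder's response matches; corrupt or missing data fails
# and that holder can be penalised.
print(storage_proof(nonce, chunk) == expected)          # True
print(storage_proof(nonce, b"corrupted") == expected)   # False
```

The nice property is that the check costs each holder one hash over its copy, with no fancy verifiable-delay machinery.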
Adults don’t handle safecoin etc. as we don’t trust them enough yet.
This is directly related to @jlpell question about finding and penalising bad nodes as you point out.
Thanks David.
Obviously having even a single section taken over is quite a bad thing, but if other sections could do things like detect unauthorized safecoin transfers it would act as a large deterrent. Maybe that’s too much to ask, but it’s something I’ve hoped for.