Update 15 September, 2022

This week we look at the question of authority. Who can do what, and what credentials do they require to do so? The topic comes about as we are looking to simplify the code, removing message filtering patterns that we’re no longer using and ensuring the fundamentals are logical and easily understood.

General progress

@anselme has implemented a gossip layer to handle missed instances when the new DKG system fails to terminate and is now looking at cases where we have concurrent split DKGs that terminate together. DKG seems pretty stable otherwise though.

And @roland has pretty much completed the move from SectionChain (a secure linked list) to the new design of Section DAG. A bit more testing and we’re there.

@qi.ma has been looking for potential bugs in FilesContainers, as the comnets highlighted some suspect version clashes recently.

We’ve also been digging into some client connection issues from nodes, which appear to drop some responses. And removing unnecessary serialisation steps around OperationId, lightening the load on nodes further.

Authority

If you think of authority in terms of power, you might think elders, the wily old greybeards, would have the most authority. After all, they are the ones that control everything that goes on in their section, including having the power of life and death over other nodes. You’d be wrong. When it comes to making changes on the network, individual nodes have no authority whatsoever. Adults have no say, and elders only have authority as a collective, by means of a threshold vote.

In fact, the most powerful actor is the client - the customer is king if you like. That’s because the client can do things like editing mutable data (containers) that it owns and signing the reissue of a DBC. The nodes are really just messengers, passing information about operations and judgements to and fro, and mostly doing what they’re told.

Why does this matter? Well, it matters because we want clear dividing lines over who can do what. As far as possible we want authority to be implicit in data types and operations and for that to be controlled by cryptography, not complex hierarchies, and we want the simplest possible authorisation structures.

Data types

Permissions are built into the data types.

Immutable data

When it comes to mutation, chunks are impervious to authority. There’s no authority on the planet that will let you change the data. Therefore we don’t need to worry about designing an authorisation logic for them.
However, to store a chunk we do need authority. In this case it’s provided by the section in which it is to be stored, and once granted this authority applies forever. SectionAuth is a valid section key plus a signature.
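To make that concrete, here’s a minimal Rust sketch of the idea using the blsttc crate. The type names are illustrative rather than the actual codebase types, and a real section signature would be a BLS threshold signature rather than a single key’s.

```rust
// Sketch only: a chunk carries its SectionAuth (section key + signature over
// the chunk's address), and anyone can verify it without caring who delivered it.
use blsttc::{PublicKey, SecretKey, Signature};

struct SectionAuth {
    section_key: PublicKey, // the section key that granted storage
    signature: Signature,   // that key's signature over the chunk address
}

struct Chunk {
    address: [u8; 32], // content address (hash of the data)
    content: Vec<u8>,
    auth: SectionAuth,
}

fn chunk_has_section_auth(chunk: &Chunk) -> bool {
    // The carrier is irrelevant; only the signature on the data matters.
    chunk.auth.section_key.verify(&chunk.auth.signature, chunk.address)
}

fn main() {
    // Stand-in for a section key; in reality this is a threshold key held by elders.
    let section_sk = SecretKey::random();
    let address = [0u8; 32];
    let chunk = Chunk {
        address,
        content: b"some immutable bytes".to_vec(),
        auth: SectionAuth {
            section_key: section_sk.public_key(),
            signature: section_sk.sign(address),
        },
    };
    assert!(chunk_has_section_auth(&chunk));
}
```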

Mutable data

Storing containers requires SectionAuth, while mutating them requires ClientAuth - only the owner should be able to change their data. So, implicit in this data type is that it requires a client signature before it can be changed. ClientAuth is a valid client signature.
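Sketched the same way (again with illustrative names, not the real API), a container edit is accepted purely on the strength of the owner’s signature:

```rust
// Sketch only: mutating a container needs ClientAuth, i.e. a signature from
// the owner key that was registered when the container was stored.
use blsttc::{PublicKey, Signature};

struct Container {
    owner: PublicKey,     // fixed at creation time
    entries: Vec<Vec<u8>>,
}

struct ClientAuth {
    public_key: PublicKey,
    signature: Signature, // signature over the serialised edit
}

fn apply_edit(container: &mut Container, edit: Vec<u8>, auth: &ClientAuth) -> Result<(), &'static str> {
    // Nodes don't add any authority of their own here; they just check the owner's.
    if auth.public_key != container.owner {
        return Err("not signed by the owner key");
    }
    if !auth.public_key.verify(&auth.signature, &edit) {
        return Err("invalid client signature");
    }
    container.entries.push(edit);
    Ok(())
}
```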

The other type of mutable data is the SectionTree, which is a tree branching out from Genesis. Here the situation is a little different as effectively there are multiple owners (all the sections), but each section can only add a leaf to its particular branch with its SectionAuth.
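As a rough sketch of that rule (simplified types, not the real SectionTree implementation): a new leaf is only accepted if it is signed by the key of the branch it extends.

```rust
// Sketch only: each section may extend its own branch of the tree, and the
// proof is simply the parent section key's signature over the new child key.
use blsttc::{PublicKey, Signature};
use std::collections::BTreeMap;

struct SectionTree {
    genesis: [u8; 48], // genesis key bytes
    // child key bytes -> (parent key, parent's signature over the child key)
    parents: BTreeMap<[u8; 48], (PublicKey, Signature)>,
}

impl SectionTree {
    fn add_leaf(&mut self, parent: PublicKey, child: PublicKey, sig: Signature) -> Result<(), &'static str> {
        let parent_bytes = parent.to_bytes();
        // The parent must already be part of the tree (or be genesis itself).
        if parent_bytes != self.genesis && !self.parents.contains_key(&parent_bytes) {
            return Err("unknown parent section key");
        }
        // The SectionAuth of the *parent* section authorises this leaf, nobody else's.
        if !parent.verify(&sig, child.to_bytes()) {
            return Err("invalid SectionAuth on new leaf");
        }
        self.parents.insert(child.to_bytes(), (parent, sig));
        Ok(())
    }
}
```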

DBCs are a little more complex in that they require a client to authorise a reissue (ClientAuth) and a section to sign it off (SectionAuth), but again until it has those two bits of authority attached, a DBC is effectively useless.
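In sketch form (hypothetical fields, not the sn_dbc types), that means a reissued DBC only verifies once both signatures are present over the same transaction:

```rust
// Sketch only: a DBC reissue needs ClientAuth (the owner authorises spending)
// and SectionAuth (the section signs it off); missing either, it's useless.
use blsttc::{PublicKey, Signature};

struct ReissuedDbc {
    tx_hash: [u8; 32],      // hash of the reissue transaction
    owner_key: PublicKey,
    owner_sig: Signature,   // ClientAuth
    section_key: PublicKey,
    section_sig: Signature, // SectionAuth
}

fn dbc_is_spendable(dbc: &ReissuedDbc) -> bool {
    dbc.owner_key.verify(&dbc.owner_sig, dbc.tx_hash)
        && dbc.section_key.verify(&dbc.section_sig, dbc.tx_hash)
}
```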

The important thing here is that for data operations, nodes are mere message carriers, and the message is independent of the carrier. We don’t care where the message comes from, and we don’t care who signed it. We only care that the data element carries the right sort of signature (client or section) plus a key.

In the simplest example, a chunk of data with a valid section key and signature attached can fly around the universe, and when it returns the network must accept it, because a section signature on data means it exists forever.

Data operations

In REST API terms data operations look like this:
  • GET operations require no authorisation whatsoever.
  • PUT operations require SectionAuth for storage and a combination of ClientAuth and SectionAuth for payment.
  • POST operations require ClientAuth to mutate containers, and SectionAuth to mutate the individual leaves of the SectionDAG.

Token transfers require a combination of ClientAuth and SectionAuth.
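Written out as a sketch (illustrative enum names only, not anything in the codebase), that mapping is just a lookup from operation to required authority:

```rust
// Sketch only: which authority each kind of data operation demands.
enum Authority {
    None,
    Section,          // SectionAuth only
    Client,           // ClientAuth only
    ClientAndSection, // both, e.g. storage + payment, or a token transfer
}

enum DataOp {
    Get,
    PutChunk,
    PutContainer,
    EditContainer,
    ExtendSectionDag,
    TokenTransfer,
}

fn required_authority(op: &DataOp) -> Authority {
    match op {
        DataOp::Get => Authority::None,
        // PUTs need SectionAuth to store plus ClientAuth + SectionAuth for payment.
        DataOp::PutChunk | DataOp::PutContainer => Authority::ClientAndSection,
        DataOp::EditContainer => Authority::Client,
        DataOp::ExtendSectionDag => Authority::Section,
        DataOp::TokenTransfer => Authority::ClientAndSection,
    }
}
```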

Network operations

Changes to the makeup of a section in terms of its elders and adults, splits, etc, all require SectionAuth. But while individual nodes don’t have any authority in these processes, the elders at least are not merely passive message carriers as they are with data. For example, if an elder notices a node behaving dysfunctionally, it can force a vote on whether that node should be penalised.
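This is where “elders only have authority as a collective” becomes literal: a SectionAuth signature is a BLS threshold signature, so no single elder can produce one. A minimal sketch with the blsttc crate (the 7-elder / 5-signer numbers and the decision text are just for illustration):

```rust
// Sketch only: elders hold key *shares*; only enough shares combined produce
// a signature that verifies against the section key.
use blsttc::SecretKeySet;
use std::collections::BTreeMap;

fn main() {
    let mut rng = rand::thread_rng();
    // 7 elders, any threshold + 1 = 5 of which can sign for the section.
    let threshold = 4;
    let sk_set = SecretKeySet::random(threshold, &mut rng);
    let pk_set = sk_set.public_keys();

    let decision = b"penalise node X for dysfunction";

    // Each participating elder contributes only a signature share...
    let mut shares = BTreeMap::new();
    for i in 0..=threshold {
        shares.insert(i, sk_set.secret_key_share(i).sign(decision));
    }

    // ...and only the combination of enough shares is a valid section signature.
    let section_sig = pk_set
        .combine_signatures(shares.iter().map(|(i, s)| (*i, s)))
        .expect("enough valid shares");
    assert!(pk_set.public_key().verify(&section_sig, decision));
}
```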

For this to work, nodes need to be identifiable in certain instances, meaning they must sign messages with their keys so the signatures can be checked against their public keys. So if a client sends a request for a chunk and that chunk arrives late or there is some corruption, the signature is irrefutable cryptographic evidence against that node, and the client can inform the elders that it’s a bad 'un.

And for DKG it’s vital that the messages sent between elders with their vote shares are authorised. The authority of the message is that of the sender, and it is checked against their signature on the message. This is called NodeAuth.
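In sketch form (hypothetical struct, not the real wire format), NodeAuth is nothing more than “this really came from the node whose key it claims”:

```rust
// Sketch only: the sender signs the payload; the receiver checks that
// signature before treating the vote share as coming from that elder.
use blsttc::{PublicKey, SecretKey, Signature};

struct NodeMsg {
    sender: PublicKey,
    payload: Vec<u8>,    // e.g. a DKG vote share
    node_sig: Signature, // NodeAuth: sender's signature over the payload
}

fn sign_msg(node_sk: &SecretKey, payload: Vec<u8>) -> NodeMsg {
    NodeMsg {
        sender: node_sk.public_key(),
        node_sig: node_sk.sign(&payload),
        payload,
    }
}

fn has_node_auth(msg: &NodeMsg) -> bool {
    msg.sender.verify(&msg.node_sig, &msg.payload)
}
```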

OK, but what’s it all for?

By focusing on authority and removing unnecessary checks, particularly in messages, we are simplifying operations as far as possible and relying on cryptography and data types to do the hard work for us. We still have some unnecessary message signing hanging over from previous iterations of the network, all of which carries a performance cost as well as creating confusion. A clean, simple design is an efficient design.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

50 Likes

Easy first :joy:

13 Likes

First again, hahahaha.

13 Likes

Nice one. :wink: Keep hacking – and cleaning up the code! We’ll be there before we know it!

Cheers

13 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :racehorse:

13 Likes

what’s it all for? There are a lot of economic stresses and strains atm, and what we’ve become used to is becoming brittle in places. Hopefully it doesn’t get too bad, but I did wonder this week about the prospect of existing internet servers failing or being lost, and the data on those. Blessed are those who make backups, but getting to a place where we don’t need to worry about where the data is will be nice++.

Safe as infrastructure :thinking:

Thanks again for all the progress and updates… good to see :+1:

18 Likes

This clears up many unanswered questions I still had, thank you!

8 Likes

Thx 4 the update Maidsafe devs

The write up is really informative, great progress made by the devs

Keep hacking super ants

9 Likes

Might want to turn this update into a tutorial or FAQ or piece of general documentation or something. Though it needs a link to the glossary for things like what defines a section and such. Like if something needs ClientAuth + SectionAuth, that begs the question of what SectionAuth is and what defines a section. Same for the other authorities. I know these sound like stupid questions, but this whole coding process is very abstract at best. Near as I can understand, as is, you’ve got ClientAuth, which is the client, an individual’s authority, then you have a section, which is a conclave of regional elders (I have yet to wrap my brain around how XOR space works). And what is DAGauth? What is a DAG anyway? This is why we need a glossary with clear definitions.

4 Likes

We kinda do have a glossary and acronyms post but it needs updating - volunteers welcome

:slight_smile:
it’s a Wiki so anybody can update or comment

3 Likes

https://media.consensys.net/ever-wonder-how-merkle-trees-work-c2f8b7100ed3

1 Like

I have started to doubt the way the development is done at the moment. I mean the “launch as soon as possible” approach. I think it leads to all aspects of the network being developed in sync, which is in many ways a good idea. But there is the problem that it seems to be leading to a loss of incrementality in development, so that all of the possible value is realized all at once, but later rather than sooner.

I have to support my living costs by selling my crypto holdings regularly. I am now 100% MAID because I have sold everything else. And I have started to sell those too. It looks like I’m not going to have any left at launch, if it is going to take more than six months until then. Of course, if there were some advancement that would prove itself a bit more in a comnet-type of situation, things could get different.

One such thing could be a network that is able to hold itself up without doing much, or anything, else. I mean something like the “no data” network that was able to get past many splits several months ago:

I thought something like that was cooking when I saw this PR, but it has not had any commits in over a month:

https://github.com/maidsafe/safe_network/pull/1387

So, what I wish for now is some stripped-down version that could hold itself up in a distributed way, meaning on machines in our homes. I know that without data, farming, tokens etc. the whole network would be meaningless, but it is also true that without the working structure, all the development in data, DBCs, farming, etc. is useless.

Another thing I’d like to change is developing for Linux/Win/Mac all at once. I am not sure about the magnitude of the effect, but it seems that quite often the CI tests are failing on one of the systems, and fixing the problem there can take hours or days. I have said that before, and I was refuted. It may happen again, and I may be wrong here, but the observation that fixing some problem on Mac, for example, when everything else is working, takes time is absolutely true. And when I look back and see all the solutions that have not worked, I wonder how many hours were wasted trying them on all the platforms?

So could we change the development towards a less parallel approach, please? You already found it beneficial to take a step back from multithreading; maybe it would be a good idea to put some threads aside in a metaphorical sense as well?

4 Likes

I posted these comments in hopes of getting an outside point of view about how the network is being developed. I think it could be very helpful to the team to get a different perspective on this. I did get a response which asked who would give this outside perspective. My thought was that maybe folk from the Mozilla Foundation or Rust Foundation could help or point us in the right direction. I get the feeling things have stagnated and an outside point of view would be very helpful. We seem no closer today than a year ago. When we ask when, we are told “when it is working”. It’s been many, many years of this same response. I’m not trying to be offensive or disrespectful to the team. I think a fresh set of eyes on this could change things dramatically if minds were open to input.

2 Likes

It seems from my layman perspective that the data and DBC work is almost there. I have not heard much about farming, though. I agree that a solid, stable testnet would be amazing. But maybe it makes sense to include data/DBCs in that testnet.

6 Likes

Thank you for the heavy work, team MaidSafe! I’ll add the translations to the first post :dragon:


Privacy. Security. Freedom

8 Likes

From my perspective, it has always looked like something is almost there. That’s why I have sold other coins, not this one. Well, that and the fact that I don’t like any other project. I don’t really understand, or subscribe to, their goals. :person_shrugging:

The thing that seems elusive to me is whether the network is able to make decisions about its own structure or not. I am sure I am missing many dots, but the ones I see I connect roughly this way:

  • In April 2021 we had a testnet that was up for five days. It was unstable, and making wider use of AE was thought to be the solution.
  • In late 2021 it was realized that AE was leaving the network in some kind of split-brain situation, and the consensus / membership work was thought to solve that.
  • In spring 2022 consensus / membership was realized to still leave the network in an undecided state in some situations.
  • June 2022 came the new idea to borrow code from Poanetwork and it was thought that:
  • Now we are talking about:
  • All the while when:

This confuses me, and makes me think I am not the only confused one around.

I think some of the coolest recent progress is in the things that allow better analytics about what is going on in the network. I’m talking about statemaps, flamegraphs and all that. I am all for them. And of course DBCs and actually everything else is very good too. People are doing good work in their tracks, nothing to complain about there. It just seems to me that the efforts could be organized better.

And I am not at all certain that anything is “close”. I think that development strategies should be set so that as little time as possible is spent finding the wrong answers before hitting the right one. Because one just cannot know what works until it works. Nothing is ever “close”.

4 Likes

I think we all feel your frustration @Toivo. I imagine the team feel it most. All we can do is keep faith in the team and hope they overcome the unknown unknowns quickly when they arise. Sorry your circumstances mean you are having to sell MAID :pensive:

9 Likes

That was a kind response from you, thanks for that.

But I think I raise valid concerns and would like to hear some input on those too. Like, is it really a good idea to develop on three platforms at once? Are there any metrics on how much time it has taken over the years to fix bugs on Win/Mac in to-be-discarded code? I bet months.

And to be clear, I am not criticizing the lack of a direct route to the right solution. It just seems to me that developing on all the systems in parallel makes sense only if the path forward is clear. Different dead ends are to be expected, a natural part of the process. There is this persistent illusion of “being close” to the solution that is just false. You cannot estimate the distance to the solution before you are certain that it really is the solution, and that is only in hindsight.

And all that is certainly not the team’s or anyone else’s fault. It’s my poor judgement on how I should or should not invest my money. But of course I raise my opinion here at the moment, because I think that another way forward would move the price up quicker than the current approach. You know, for a while I could keep the faith that eventually, in the not too distant future, my economics could get better. Now I am the fan, and the stupidity of thinking that “the thing” is approaching has hit me full force. I can see it coming.

I do have faith in the individuals in the team and their general intelligence, problem-solving abilities, persistence, morals… etc. I just find the approach at the moment to be too all-encompassing. And I have flipped my opinion here. A couple of years ago I spoke against the “routing only” approach, because that seemed so dry to me, and maybe because I thought that problem would be solved sooner.

Or am I just skewed in my thinking that the sections splitting etc. is THE thing?

3 Likes

I’m not the person to answer those questions, and my guess is only team members would be able to. I was under the impression that we had all the parts in place and "just" needed to make them work together. It does seem to me that some of the parts don’t want to work together, but what do I know :man_shrugging:t3: If parts don’t work together for me I just hit/cut/weld until they do :joy:

2 Likes