SAFE Network Dev Update - August 30, 2018

Actually, I think that in a list of fundamentals there is a place to be more definite. Most people realise that if a fundamental is not possible at the time, then it is either re-evaluated or kept as a future fundamental to be completed. Honestly, I took them as more of a list of fundamental goals.

2 Likes

I believe that MaidSafe are making claims that are not true. It’s okay if nobody else on the entire planet thinks the same; we are all unique, after all. Now, when they publish these fundamentals to a wider audience, they are aware of a viewpoint that might be bigger than they expect. They can be prepared, or change the way they deliver that message.

I agree.

Please note that, as far as I know, no MaidSafe engineer has made (or would make) this claim.

Sorry about that one; I must admit to not recalling it, but I will check now anyway.
Thanks again

11 Likes

This is what made me think so, from OP:

Not the exact same words, but I don’t know how you mean this to be possible without software identifying bugs. I guess it’s just imprecise language then, since you find my interpretation alien :slight_smile:
So, I assume this means that a few critical functionalities will be evaluated, and upgrade “approval” will be conditioned on some premises for these.

But, as I said, I don’t consider it very important. These are just a few words of description.
I’d rather discuss how problems could be solved or tech be crafted utilising the network.

3 Likes

Great progress from the team! Nice to have these fundamentals down. Maybe the marketing team can discuss how these might be turned into more compelling benefit statements for the network? They feel like they fit into some categories. Possibly an infographic of sorts… just a thought. Onwards.

2 Likes

I think the big idea that @manyflaws :joy: is missing here is that SAFE simply makes it easier for anyone to join in and help host the data of the internet.

Sure, data centres will be good at this, but SAFE’s new design allows the internet to evolve beyond relying on data centres forever.

It’s a step by step process.

Centralisation isn’t sustainable forever. This type of thing is destined to happen.

Just zoom out your perspective, and you will find there’s nothing to argue or resist.

…but c’mon SAFE, let’s re-release TEST 11, at least for now, so more people can fiddle with vaults again. Crypto has grown so much, and there are so many more eyes on us now with PARSEC etc., so I think a vaults-from-home network (in addition to A2) would get great press, project recognition and developer attention. The network itself could grow much larger now, since our community and audience are so much bigger. :+1:

5 Likes

Here are some other new emojis as appreciation for the efforts of the MaidSafe Team:
:man_superhero: :woman_superhero:

5 Likes

I share your fear, and additionally I think that limiting new vaults will make the network LESS secure.

The simulations I did in January/February show that network growth is seriously impacted if the number of young nodes is limited to 1 per section.

In my simulations I favored network growth by using a more aggressive relocation strategy and by allowing 4 young nodes per section. Together, these elements generated a network 4 times bigger and a much lower rejection rate (26% instead of 72%).

A bigger network is needed to counter the birthday paradox, which implies that far fewer than 1/3 of the network’s nodes are needed to disrupt it. A greater rejection rate is less secure than a greater number of sections, because an attacker needs dramatically fewer nodes, and this is not balanced by having to attempt each node’s connection more often. When I have time, I should add an additional “column” to my simulations, showing the number of nodes an attacker needs to disrupt at least one section (but that’s not easy to compute).
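To illustrate the birthday-paradox effect, here is a minimal sketch of the calculation (entirely my own, with made-up parameters, assuming random relocation spreads attacker nodes uniformly across sections):

```python
from math import comb

# Illustrative parameters only; not MaidSafe's actual values.
SECTIONS = 1000   # sections in the network
ELDERS = 8        # elders per section
BAD_NEEDED = 3    # smallest elder count that is >= 1/3 of 8

def p_section_disrupted(attacker_fraction):
    """Probability that one section has >= BAD_NEEDED malicious elders,
    assuming each elder is malicious independently with the attacker's
    global node fraction (i.e. relocation mixes nodes well)."""
    p = attacker_fraction
    return sum(comb(ELDERS, k) * p**k * (1 - p)**(ELDERS - k)
               for k in range(BAD_NEEDED, ELDERS + 1))

def p_any_section_disrupted(attacker_fraction):
    """The birthday-paradox effect: a small per-section probability
    compounds across many sections."""
    return 1 - (1 - p_section_disrupted(attacker_fraction)) ** SECTIONS

for frac in (0.02, 0.05, 0.10):
    print(f"{frac:.0%} attacker nodes -> "
          f"P(at least one section disrupted) = {p_any_section_disrupted(frac):.3f}")
```

Under these assumptions, even a small attacker fraction gives a substantial chance of catching at least one section once there are many sections, which is why far fewer than 1/3 of the network’s nodes can be enough to disrupt it.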

Another argument is psychological: a greater rejection rate will deter some casual users from connecting a vault, but it won’t stop an attacker from trying and retrying until they succeed. This will lower the total number of good nodes, and so will increase the proportion of bad nodes held by a determined attacker.

8 Likes

Is it because it is theoretically impossible to reach decentralised consensus on a global time? I would say that the Bitcoin network proves the contrary, and an approximate precision of 10 minutes is good enough for many purposes.
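For reference, Bitcoin’s loose notion of time comes from its median-time-past rule; here is a minimal sketch of it (the timestamps are made up for illustration):

```python
import statistics

def median_time_past(prev_block_times):
    """Bitcoin's median-time-past rule: a new block's timestamp must be
    strictly greater than the median timestamp of the previous 11 blocks.
    This gives a loose, monotonically advancing clock with roughly
    block-interval (~10 minute) precision, not an exact global time."""
    return statistics.median(prev_block_times[-11:])

# Hypothetical Unix timestamps for the last 11 blocks, ~10 minutes apart
# with some jitter (block timestamps need not even be monotonic).
times = [1535600000 + 600 * i + j for i, j in
         enumerate([0, 35, -80, 140, 5, -20, 60, 250, -45, 10, 30])]

new_block_time = 1535607000
assert new_block_time > median_time_past(times), "timestamp too old"
print("median time past:", median_time_past(times))
```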

Independently of this debate, why is this an objective? Supposing it is doable, why would a forked SAFE network, identical to the original except for the addition of a global time feature, be less desirable than the one without it?

3 Likes

My take on it is that, since it is not currently considered feasible, it has been elevated to a principle.

It is definitely conceivable that the accuracy of time-measurement tools could improve to such a degree that reaching consensus on a time accurate enough for most, or even all, human needs would no longer be considered a problem.

In some cases we can design for tomorrow; in other cases we set up principles based on the tech of today.

3 Likes

At the risk of looking stupid (this is not my field at all, but somehow I cannae help wondering about solutions): would it help the birthday thing at all to divide elders into (random) categories and require groups to have a representative from each category, rather than just a total number of elders in the group? E.g. if the minimum number of elders in a section was 8, then (to control all the elders in a section, for the sake of illustration) attackers would not just need 8 with the same birthday, but 8 with the same birthday, one born in each of the 1920s, 30s, 40s, 50s, 60s, 70s and 80s. I suspect that if there’s anything in it the experts have already thought of it, but it feels intuitively like it would help (i.e. hinder) the probability of all an attacker’s elders lining up in a group. I don’t have the maths skills to test it out, though, and my intuitions have a long history of being mostly wrong and very occasionally right!
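To put a rough number on the intuition, here is a tiny Monte Carlo sketch (entirely my own illustration, assuming 8 elder slots, each tied to a uniformly random category assigned at join time):

```python
import random

CATEGORIES = 8  # one elder slot per (random) category, per the idea above

def avg_nodes_to_cover_all(trials=10_000):
    """How many attacker nodes must land in a section before the attacker
    has at least one node in every category (the coupon-collector problem)?
    Without categories, filling 8 elder slots needs exactly 8 nodes."""
    total = 0
    for _ in range(trials):
        seen, count = set(), 0
        while len(seen) < CATEGORIES:
            seen.add(random.randrange(CATEGORIES))
            count += 1
        total += count
    return total / trials

print(f"average attacker nodes needed with categories: "
      f"{avg_nodes_to_cover_all():.1f} (vs. exactly 8 without)")
# The expected value is 8 * (1 + 1/2 + ... + 1/8), roughly 21.7.
```

So under these assumptions the category requirement roughly triples the number of nodes an attacker has to push into one section, though honest nodes would of course face the same constraint.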

The other idea I had today (which admittedly may work against your point that restricting the speed of network growth makes it less secure) was to require a potential vault host to spend time as a client on the network first, similar to the current system of access to Alpha 2 (but for vaults rather than clients). If it worked, this would greatly slow down an attacker trying to build up VMs, and it could perhaps run alongside an invitation-only scheme in the early days, which could double up as a viral marketing tactic!!!

Apologies, I don’t mean to step on the toes of any experts here or waste anyone’s time; I’m just really fascinated by things like this as a puzzle.

1 Like

I found in my early sims that this is correct, but the other effect was a reduction in the bad actors’ infiltration into sections. The attack had to run much longer in order to subvert even one section.

Anyhow, I am going to do some more later on. The sims I did showed, like yours, that the network has to be very large indeed before an attack in order not to have a section subverted. An issue with section splitting, though, still makes a large network vulnerable. So when I get back to it, I am going to look at the effects of split parameters, network size and infant node numbers. There are some other factors to incorporate that have come to light in more recent discussions, and these will also have an effect on bad-actor infiltration.

I was under the impression that if it’s not needed, why have it? Protocols do not need to know the real time of day in order to operate, and they actually operate more efficiently/reliably if time is not built in. Obviously, local time for timeout values is needed. For higher-level protocol use there is event sequencing to keep things in order, and in fact this is a “sort” of time.

1 Like

Anyone want to start a poll on when Alpha 3 will be announced? :blush:

2 Likes

…due to current problems with getting time right, which is a technical limitation.
If there were a reliable way to have the same time everywhere, there would be no reason not to use it.

For protocols there is no reason for it.

The important thing for the section is events and their order. So using event numbers is better than time, which would require its own method of syncing.

For lower-level protocols, time is not used; TCP/IP, for instance, does not use it. Certificates and HTTPS handshaking do use time, because there is no other way to verify when certificates start and expire; the time there is based on the server’s clock and the browser’s PC clock. But that approach has its problems, and SAFE does not need any of it.
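As a concrete illustration of “event numbers instead of time” (my own sketch, not SAFE’s actual code), a minimal Lamport clock orders events across nodes with no shared wall clock:

```python
class LamportClock:
    """Minimal logical clock: events are ordered by a counter rather
    than wall-clock time, so no synchronised global clock is needed."""
    def __init__(self):
        self.counter = 0

    def local_event(self):
        self.counter += 1
        return self.counter

    def send(self):
        # Stamp an outgoing message with the stepped counter.
        self.counter += 1
        return self.counter

    def receive(self, msg_counter):
        # Merge: jump past whichever counter is larger, then step.
        self.counter = max(self.counter, msg_counter) + 1
        return self.counter

a, b = LamportClock(), LamportClock()
stamp = a.send()        # node A sends a message stamped 1
b.receive(stamp)        # node B's clock jumps to 2 on receipt
print(a.counter, b.counter)  # 1 2: B's events are ordered after A's send
```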

I think you are missing the whole point.
That is exactly what I am saying: currently there are technical limitations to time measurement.

It seems you are looking at this from today’s perspective, which is not the point.

If there were no such technical problem, it would serve as a perfect sequence-ordering tool for anything on Earth. If you had a timestamp, you would simply know the order, without needing to have observed it.

And if you had that, then requiring nodes to observe the event order would be the more complicated option, and because of that there would be no reason not to use it (this shiny new sequencing tool, exact time).

You can read my post again: I say that you could conceive of such technical advances being made (implicitly, within the not-too-distant future; otherwise it is just theoretical).

1 Like

That is what I was answering. It’s not necessary: events are better for consensus, and time is neither needed nor useful for transport protocols.

I am referring to core code.

We’ll be going into these points in more detail next week, adding some additional context for newer members of our community.

I hope this will cover the reasoning behind point 12 as well. I’m also guessing that the reasoning is that the risk of a global ‘Safe Network’ time compromising other, more important points on the list is seen as too high.

100% on the mark. I don’t think anyone would be against a global counter/clock or similar, but we do not see it as something we need to solve for v1. It will require a long, in-depth discussion for sure.

4 Likes