SAFE network upgrades

Am I wrong in thinking that clients alone can have only a very few variances, primarily UX (user experience)? Clients alone wouldn’t have that much of an impact on the network as a whole. If true, then by all means fork until your hands go numb. My only real concern is network-wide mutation, which I believe happens at the core: sentinels, client managers, vaults, etc.

My above proposal, while not perfect by any means, allows decentralized evolution of the core network while at the same time leaving room for total network divergence and preferred user experience. Does it not work? Please help me see :smiley:

1 Like

Vaults ought to deliver the files they are expected to deliver. They ought to answer yes when they should answer yes, no when they should answer no… etc. Sign off on what they should sign off on, and refuse to sign off on things that they ought not to… Those are things we can test for…

How they get there doesn’t matter, so long as they do what they are supposed to do. C++, Rust, Python; a Raspberry Pi, a data center of virtual machines, a Commodore VIC-20 in the basement, a cellphone in a hip pocket in Colombia – whatever works… Whatever works best will gain Safecoin and reputation.
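To make “things we can test for” concrete, here is a minimal sketch in Rust (one of the languages mentioned above) of judging a vault purely by its observable behaviour; every name in it is hypothetical, not the actual vault API:

```rust
/// The observable contract a vault must satisfy, whatever it runs on.
/// Hypothetical names, not the real vault interface.
trait Vault {
    /// Return the chunk stored under `name`, if this vault holds it.
    fn get_chunk(&self, name: &[u8; 32]) -> Option<Vec<u8>>;
    /// Sign off on a request only when it is valid.
    fn approve(&self, request_is_valid: bool) -> bool;
}

/// A conformance probe the network could run against any implementation:
/// did it return the right chunk, say yes when it should, and no when it
/// shouldn't? How the vault is built never enters into it.
fn behaves_correctly<V: Vault>(vault: &V, name: &[u8; 32], expected: &[u8]) -> bool {
    vault.get_chunk(name).as_deref() == Some(expected)
        && vault.approve(true)   // says yes when it should
        && !vault.approve(false) // says no when it should
}
```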

Some apps will benefit greatly from low latency. Others don’t care much about latency but need high bandwidth. The more variety there is in the network, the better it will perform in whichever way the users decide to take it.

3 Likes

If more than 12.5% of the vaults run a ‘fork’ or a new version incompatible with the majority, the network will start to fail due to the impossibility of reaching consensus for the common operations. I’m assuming a consensus of 28/32.

Then the network will continue failing until 87.5% of the vaults have the new version installed.

To avoid the lack of consensus between the 12.5% and the 87.5%, each new version or fork has to be compatible with at least N previous versions.

That said, I’m sure I’ve misunderstood something about the consensus systems. If not, how could it be possible that only 12.5% of the net could break the system?
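For what it’s worth, a quick back-of-envelope check of those numbers, taking the 28/32 quorum above as given (it is an assumption in this post, not a confirmed network parameter):

```rust
// GROUP_SIZE and QUORUM are taken from the post above, not confirmed parameters.
const GROUP_SIZE: u32 = 32;
const QUORUM: u32 = 28;

fn main() {
    // A group still reaches quorum with up to GROUP_SIZE - QUORUM members diverging.
    let tolerated = GROUP_SIZE - QUORUM; // 4
    let threshold = 100.0 * f64::from(tolerated) / f64::from(GROUP_SIZE); // 12.5
    println!(
        "Quorum {}/{}: consensus fails once more than {} nodes per group ({}%) diverge.",
        QUORUM, GROUP_SIZE, tolerated, threshold
    );
}
```

So under that assumption the 12.5% figure is exactly the per-group slack: 32 − 28 = 4 nodes out of 32.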

What?..Noooooooooooo…stop there!..lol :smiley:

I totally agree with @BenMS… if you want the Network to follow Nature/Evolution, then why use Creationism as the fundamental reasoning?

The “purpose” is absent; they are instead functions/abilities etc., I think. The “purpose”, in your sense of the word, would be to “exist” or “live”, but that’s not a purpose.
However, this does get interesting when you consider you are the Creator, so are we really looking at a Creationist model and is that actually the best way?
Pick one… :smile:
Love the idea though…

1 Like

Don’t worry, I am not, but at any point in time evolution has only taken a thing so far; that’s not to say it is the end of the evolution of that thing (it’s not). I mean there is purpose in everything, and finding the purpose goes some way to explaining why it survives today. A thing without purpose is pretty useless, I feel, and a computer program without a core purpose is also not much use. Given a purpose, it can make decisions (albeit infantile at first) to satisfy that purpose. Not easy to explain, but basically the ability to do what you need to do to survive is key.

In SAFE it’s interesting that the network can ask us humans to do some stuff, but not let us do other stuff that would compromise it (e.g. write your own vault and the network will reject it, muck your machine around and again it rejects it, etc.). If we maintain that “need” and allow it to be expressed, then it can go further. At least we can find a purpose that can be aimed for.

Not easy at all to explain, but nothing to do with creationism or such, merely me looking from my angle. :wink: Not everyone does

6 Likes

Yup! Which is why I said before that client modifications are inconsequential in regard to overall network performance. Mutations that improve performance, security, and anonymity are IMHO the goal. Any mutation that subtracts from any of those three, but most importantly the last two, is bad business and therefore unacceptable.

Quite frankly, most who use the SAFE network within the first 3-5 years will likely not immediately understand the implications of mutations that veer away from the initial SAFE protocol, which provides users with the intended security, performance, and anonymity @dirvine et al. worked so hard to achieve. This is why my above proposal aims to achieve stability while remaining relatively flexible. Anything else runs a greater risk of not being SAFE :slight_smile: At least in the short term (3-5 years).

3 Likes

As soon as a node leaves, it’s replaced by the next closest, so the group remains intact :wink:
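For anyone curious, a rough sketch of that rule under Kademlia-style XOR distance (hypothetical names and types, not the actual routing code): the close group for an address is simply the GROUP_SIZE nearest live nodes, so when a member leaves, the next closest is already the natural replacement.

```rust
// Sketch of close-group maintenance under XOR distance. Hypothetical,
// not the real routing implementation.
const GROUP_SIZE: usize = 32;

type Address = [u8; 32];

/// XOR distance between two addresses; byte arrays compare
/// lexicographically, which gives the "closest first" ordering we need.
fn xor_distance(a: &Address, b: &Address) -> Address {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// The close group for `target` is just the GROUP_SIZE nearest live nodes.
/// When a member disappears from `nodes`, the next closest fills its slot
/// on the next call; no explicit handover step is needed.
fn close_group(target: &Address, nodes: &mut Vec<Address>) -> Vec<Address> {
    nodes.sort_by_key(|n| xor_distance(target, n));
    nodes.iter().take(GROUP_SIZE).copied().collect()
}
```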

1 Like

I do not dispute this…

I dispute this… fundamentally :smiley:
I would say there isn’t purpose in everything; there’s no purpose to find, and it goes nowhere near explaining why it survives today…lol
Either we’re talking at cross purposes, we have a different understanding of the word “purpose”, or you are making a Creationist argument… :smiley:

Do you mean like a cow or a sheep or something? Otherwise, what are their “purposes”? It matters not to them whether you consider them useless - they don’t care.

Not “also” - computers are created or made for a “purpose”; the animal kingdom is not. They are different things.
All the “Computery” stuff sounds fine (what would I know)

I think I understand you, but so as not to mix the two ways of thinking, I’d say the Network “does whatever it needs to” in order to survive. Are you maybe talking about the immune system in some way?
Dunno…tricky one…not knowing the tech issues. … :smiley:

1 Like

Try to keep in mind that the average end user won’t understand half of this shit, nor will they care.

Giving too much control over the network’s evolution by means of GET-request voting or any other abusable system is a recipe for disaster. I’m not in favor of small-quorum decision making either. A large group of intrinsically motivated people, whose votes, actions, and identities are protected by SAFE, seems to be the best way to go. An A.I. with the massive intelligence necessary to analyze code presented to it and determine whether it is in the network’s best interest is a bit far off in the future. I mean, look at how long it’s taking for the network alone to be launched. We need to look at solutions for every stage of the network’s (SAFE’s) life. Decentralized maintainer federation (presented above) = short term. A.I. = long term.
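To illustrate the short-term half of that, a rough sketch of what a maintainer-federation gate on upgrades might look like; the key registry, the 67-of-100 threshold, and the signature stand-in are all my own assumptions, not anything from the proposal above:

```rust
// Sketch of an M-of-N maintainer gate for upgrades. Everything here is
// hypothetical: the registry, the threshold, and the validity stand-in.
use std::collections::HashSet;

/// An upgrade activates only with this many distinct approvals
/// (e.g. 67 of 100 registered maintainers).
const THRESHOLD: usize = 67;

struct Signature {
    maintainer_id: u64,
    valid: bool, // stand-in for a real cryptographic verification result
}

/// Accept an upgrade only if enough distinct, registered maintainers
/// have validly signed it; duplicates and unknown keys don't count.
fn upgrade_approved(registered: &HashSet<u64>, sigs: &[Signature]) -> bool {
    let approvals: HashSet<u64> = sigs
        .iter()
        .filter(|s| s.valid && registered.contains(&s.maintainer_id))
        .map(|s| s.maintainer_id)
        .collect();
    approvals.len() >= THRESHOLD
}
```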

2 Likes

Lol… :smiley:

It doesn’t need to, as far as I can see - if you read @dirvine’s explanation you may change your mind as to timescales.

A while back a wee lizard whose granny was a fish had a purpose; then it became a rat-like thing, then a chimp-like thing, then us :smiley: A path, but at each stage they had a purpose: they were part of an ecosystem and shared in that (changing) ecosystem, whether to be eaten, to eat, or something else. We are at cross purposes really, I kinda think?

Yes, a way to think here (sideways) is that you can program DNA on a computer to make E. coli produce ethanol, the output being not a new species but a copy of an existing thing. So where does life evolve from? A tree grows, a species splits, part of it feeds on the tree, the others find the tree poisonous, and a new species is born. So it’s this kind of thinking: we can create stuff that has a purpose (the ethanol wants to do what all ethanol does, whatever that is) as part of the ecosystem. With enough planning we can do amazing things.

What I mean with SAFE is that it is autonomous, so a node is almost useless (like a cell) until nodes join up (their purpose is to join and create something). Nodes are very similar; then groups, like molecules, grow into bigger, more useful parts (in terms of the whole picture). A normal computer program sits on a bit of tin we create and can fully manipulate like play dough. Left alone it just repeats (with the exception of some AI).

Here we do not propose advanced AI, but more a lizard or a cell: it has a purpose and it will evolve, but at its core it’s already different, as it has not only got a purpose but actually works to fulfil it by interacting with us humans (farmers etc.). The interesting part is that it’s in bits spread about the planet (hopefully), so it is unique in its properties. It’s the fact that these properties are encoded into it that is interesting. This is important.

[EDIT] Maybe to add: evolution does create things that are useless, or more accurately do not fit a purpose properly; these are the evolutionary failures, and they are important to evolution. Maybe that’s a better answer, I don’t know, but evolution does create stuff that does not work. To me the current Internet (the web really) is an evolutionary end and now should evolve, or die. A better example of the lizard perhaps: it will evolve, and when it does, the old one will have no purpose except a place in the evolutionary chain.

2 Likes

Because it isn’t like we are making humans from a cesspool of proteins… It’s more like we are breeding dogs.

Sometimes we need a St. Bernard, sometimes a greyhound would do a better job. Sometimes all the client can afford to run is a Dachshund.

Eugenics isn’t unethical in networking… If you can program a St. Bernard/greyhound mix, there is nothing illegitimate about that… But it is hard to breed one if you are only allowed to have one breed at a time.

1 Like

I’m beginning to think we must be… :smiley:

Lol… I understand Evolution… what was its purpose, in your view? This is the nub of the issue I think, whatever the confusion…lol

I’m saying I’m pretty sure they didn’t? Stop with all the mystery… :smiley:

Ahh…do you mean like primal instincts?

As you said previously, I think you are right about the word “symbiotic” to describe how ecosystems evolve, and I’m really trying to think what you’re getting at.
If you follow the path of evolution right back to the Big Bang, say… what is its purpose?
I’ll leave it here; I’m not just being argumentative, just trying to get on the same page, and I find it interesting :smiley:

2 Likes

OK, heading out for a beer now, but this is my point: its purpose is too complex for us to understand. This is why I get excited about how ants in a colony work; they follow really simple rules to satisfy their purpose (at any point in time, as their persona changes, like our vaults). This is why I don’t try to build what folks call AI systems, as that’s way, way too complex, but we can mimic simple things in nature if they are studied well: take the things that have a purpose (say, to look after a colony) and apply them to a data-protection network. So this is my story really: mimic nature, but not a human or something massively complex; something that shows us how to connect lots of things in a way that they serve a purpose for us. Protecting data is very similar to protecting that colony/nest for ants. A really simple approach to bio-mimicry if you like; not Watson or Siri etc., but also not a spreadsheet or word processor :smiley: Something better.

Anyhow cya soon chap, cheers.

4 Likes

I’m sorry… and it could be due to lack of technical knowledge on my part, but I honestly don’t know what you are on about whatsoever… or what it has to do with anything etc… I’m not grasping the analogy or how it relates to the idea of autonomously updating the network… Hang on… are you explaining how the Network evolves/updates to me for some reason? :smiley:

Did you see that! Drops another bombshell, then runs away to the pub…lol :smiley:
Its inner workings, emergent and symbiotic properties, effects etc. maybe… not its purpose…lol
Enjoy your pint. :smiley:
Edit: having read the rest of your post, it is definitely the case that the word “purpose” is the problem, not the concepts… phew…

1 Like

Humans are ridiculously error-prone. Perfect code is hard to produce, and code that regulates evolutionary code is susceptible to further contamination. I mean, look at these underhanded coding contests. They clearly show how code can be designed to act maliciously while looking completely harmless (imagine a code equivalent of HIV). Anti-virus, anti-rootkit, and anti-malware warriors know all too well how ridiculous the claim of catching all malicious code before execution really is. Damn near impossible (with current knowledge and tech).

Let’s say we implement the cell idea proposed by @dirvine, based on my loose understanding. These nodes/cells receive the contaminants, which would normally be detected and rejected some way. The edge case: the malicious nature of the new code is extremely subtle and is therefore left undetected for a while. The malicious entity who introduced it passively gathers undermining data, later to be used for prosecution. The code continues to propagate throughout the network, further compromising it. Seeing this weakness, the adversary produces ever more subtle iterations of their malicious cocktail, or worse yet, it begins to mutate. Because of the inevitable network pseudo-fragmentation, defense wars will have to be fought on multiple fronts. This is far more likely to cause catastrophic failure. Faith in the network will waver and the public will scurry away.

I don’t mind being on the bleeding edge of technology, but such a sensitive system should not IMHO be the first to adopt such highly experimental technology (yes, I’m aware that SAFE is experimental too, but let’s not layer it on). A fork would also be unwise, as it would likely try to achieve the same security, anonymity, etc. goals and thus still be susceptible and equally untrustworthy. Labeling the fork EXTREMELY EXPERIMENTAL will not deter assholes from creating clients for it and giving them to unsuspecting Joes feeding on a false sense of security, who then point to MaidSafe when shit hits the fan. :anguished:

A.I acting as an immune system does seem dreamy :relaxed:

2 Likes

Would it be fair to say that the network has a purpose, and that is to serve its users as best it can? Obviously, this is a broad statement and is of limited use on its own. However, the purpose of the network becomes obvious through observation - through the clients the users choose to run.

In short, we can’t define what the best requirements are - what the purpose is - in these cases. Instead, we can leave the users of the network to define them through the choices they make.

1 Like

If yours is “loose” then mine’s baggy as f**k… :smiley:

Yes, that’s what I was getting at, so I asked about the immune system analogy.

I think the idea involves thinking about it as a cancer… at some point a lump is found and is treated. If one cell is cancerous it’s not life-threatening, until it reaches a certain point. The Network (organism) self-checks for lumps, recognises the growth “at some point” before it is life-threatening, and treats it…
I just totally made all that up and am probably talking bollocks though… :smiley:


Some great ideas in this thread. We have the capacity to run a secure binary by virtue of the network design. Choosing which one to run has a number of possible approaches, but this core feature will help enormously.

3 Likes

Is there really any way for the network to prevent me from running whatever code I desire against it? I guess it could ask nicely for my version number, and I could give it what it wants to hear. Who is going to stop me? If there is a “who”, aren’t they at significant liability and criminal risk for running a network that may house any number of things that any number of governments don’t care for?

You have to assume that there are bad nodes out there in any network, and you need to make your protocols immune to that, because there will be.

I say allow experimentation and build a network that has an immune system strong enough to know who to trust and who to reject.
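Something like this toy sketch, maybe: since a node can lie about its version number, score only what it verifiably does, and stop routing to anything that falls below a trust floor. The weights and the floor are arbitrary assumptions of mine:

```rust
/// Toy behaviour-based trust: score what a node does, not what it claims.
/// All constants here are arbitrary, illustrative values.
struct NodeTrust {
    score: i64,
}

impl NodeTrust {
    fn new() -> Self {
        NodeTrust { score: 0 }
    }

    /// Reward verified-correct responses mildly; punish bad ones harshly,
    /// so trust is slow to earn and quick to lose.
    fn observe(&mut self, response_was_correct: bool) {
        self.score += if response_was_correct { 1 } else { -10 };
    }

    /// Below the floor, the network simply stops routing to the node,
    /// whatever version it claims to run.
    fn rejected(&self) -> bool {
        self.score < -20
    }
}
```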

2 Likes