Some random thoughts on updating autonomous networks

From what I understand, updates to the core code require that a developer fulfil the predetermined requirements for increased efficiency in the 'autonomous network', or else the changes will be rejected. Any other (perhaps more fundamental) changes to the core code should, in theory as I understand it, be impossible; otherwise the network would be open to the risk of subversion by bad actors.
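To make my mental model concrete, the acceptance rule I'm describing could be sketched roughly like this. To be clear, every name here (`measure_efficiency`, `BASELINE_RANK`, the update dicts) is a made-up illustration of my assumption, not actual SAFE network code or behaviour:

```python
# Hypothetical sketch of the "meet the efficiency requirement or be
# rejected" rule described above. All names are illustrative assumptions.

BASELINE_RANK = 0.85  # assumed current network baseline (illustrative)

def measure_efficiency(update) -> float:
    """Stand-in for the network's own benchmarking of a proposed update."""
    return update["benchmark_score"]

def accept_update(update) -> bool:
    # The network admits a change only if it performs at least as well
    # as the current baseline; anything else is rejected outright.
    return measure_efficiency(update) >= BASELINE_RANK

patch = {"name": "routing-tweak", "benchmark_score": 0.91}
regression = {"name": "slow-refactor", "benchmark_score": 0.70}

print(accept_update(patch))       # True: meets or beats the baseline
print(accept_update(regression))  # False: would be rejected
```

The point of the sketch is that the rule is purely mechanical: there is no slot in it for the kind of subjective judgement I talk about below.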

It seems, however, that there are things which would, in ultimate terms, improve the efficiency/success of the network, but this subjective value judgement is not available to the network and would need human subjectivity (or rather, more meta thinking) to determine. For example, perhaps the network would work better as a whole if the reward structure for app developers was higher, or perhaps not there at all; maybe there should be more Safecoin in the future, etc.

The net result of this line of thinking seems to be that there may be many aspects of an autonomous network which will forever be in its DNA, otherwise the network will be susceptible to some sort of attack.

Assuming this is correct I suspect that in time we’ll see a whole ecosystem of autonomous networks each with its own immutable intrinsic qualities. I hope/expect the SAFE network will have network effect on its side in this survival of the fittest battle which seems set to play out!

I’d be interested to hear from anyone in the know what can and can’t be tweaked by core devs, and also whether the multitude of assumptions I made here are complete BS or not :slight_smile:


I had very similar thoughts the other day and was thinking of the following example: a security fix which slightly decreases the efficiency/performance of the network but is absolutely necessary from a security point of view. How would it be possible to push that update through?


The most current ranking the network has would merely be a baseline of what is acceptable. I believe David Irvine has previously said a rank of equal or greater amount is required, so an update shouldn’t have to rely on new efficiency gains, just maintain the existing ones and meet the ranking standard. What’s really amazing about the whole thing is that in the end it wouldn’t rely on a single entity to push an update, which raises other questions, but as long as security and privacy are baked into the ranking I think the worry would be minimal.

Edit: I completely missed your point about security update and the resulting rank @BambooGarden my apologies.

Would it be possible, or desirable, to have a security vulnerability exploited, say on a semi-permanent testnet, in a way which “shows” the network that the update is better?

Absolutely, it would :wink:
The problem is: how long do we wait before deciding what is better/safer?
A vulnerability could be exploited right away, in a year, in 100 years… one never knows.
Something running OK for a while is no proof that it has no vulnerabilities, imo.

This is the key issue to resolve. The autonomous network helps a lot, but this key primitive is the ‘pig in a poke’ right now. We will get an answer, though, between us all.


Ok I thought you were talking about a scenario where a vulnerability is discovered, a patch comes out to fix it, but that patch is not as efficient under the autonomous metrics used by the SAFE network.

So my thought would be to have a way to run both vaults, the old vulnerable vault and the less efficient patch, in a testnet.

Then you could manually bombard the two vaults with exploit attempts using the known vulnerability, thereby proving to the network that the known vulnerability is a problem.

Obviously if the vulnerability is not known, then efficiency is the best plan.
