Please add this one thing to SAFE network

Hey all. Many of you know me from my activity here a year ago, when we were architecting our project Intercoin. At that time I had many discussions with the team and supporters of SAFE Network, and I hope the project has a big launch soon! How close are we?

At the end of the day, where SAFE seems to be mostly about storing information in the network, Intercoin is more about how things change. So we need validators. In a chess game, that means referees who sign off on whether moves are valid. For a token, we may want M of N of the latest “owners” to sign a transaction. In our terminology these things are called InterActions, and they are taken in certain states of InterActivities.

I would love for our projects to be InterOperable (pardon the pun), and indeed SAFE Network can add something to its protocol that strictly adds interoperability with MANY projects like ours, without any loss of security or features. That’s what I would like to describe now. I would like the protocol to be open enough that maybe, in the future, Intercoin notaries can join the SAFE Network as vaults without LOSING capabilities. (Kind of like how the IndieWeb site warns against a web monoculture, and how the DID spec has lately taken over for identity interoperability.)

The MAIN thing I would like to ask the SAFE team to add is the ability to add and remove validators on a document, together with some simple rules governing how they are added and removed.

The content of a document stored in SAFE can be mutable and evolve over time. But as far as I understand, there is no mechanism to reject certain changes. Dat has this problem as well. As far as I know it is impossible to “bolt it on from the inside”, because vaults will happily accept gibberish changes and wander into, say, a chess variation that is illegal, and then the correct variation has to be built on top of a wrong history that it repudiates. You may be able to tunnel some additional consensus mechanism through the inside, but that is wasteful and unnecessary.

Validators can be added by simply adding their public keys. The keys can be stored (encrypted or not) at the level of the section, or whatever mechanism checks that there are no forks and stores which hash / revision is the head.

The public keys are arbitrary; they can belong to public-private key pairs generated by clients or bots for the express purpose of validating one or more transactions. The section and its vaults don’t care who the validators are. What they do care about is that they REJECT ANY CHANGE TO A MUTABLE DOCUMENT THAT HAS NOT BEEN SIGNED BY AT LEAST M OF N VALIDATORS.
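To make that concrete, here is a minimal sketch in Rust of what the vault-side check could look like. All of the names here (`ValidatorSet`, `ProposedChange`, the `verify` stub, how the validator set is attached to a document) are hypothetical, not an existing SAFE API; the only point is the decision rule: if validators are configured, count valid signatures over the proposed change and reject anything below M.

```rust
use std::collections::HashSet;

/// Hypothetical validator set attached to a mutable document.
struct ValidatorSet {
    public_keys: Vec<Vec<u8>>, // arbitrary public keys; the vault doesn't care whose they are
    m: usize,                  // minimum number of validator signatures required
}

/// A proposed mutation as the vault sees it: an opaque payload hash plus signatures.
struct ProposedChange {
    payload_hash: [u8; 32],
    signatures: Vec<(Vec<u8>, Vec<u8>)>, // (validator public key, signature over payload_hash)
}

/// Placeholder for a real signature check (e.g. ed25519 or BLS verification).
fn verify(_public_key: &[u8], _signature: &[u8], _message: &[u8; 32]) -> bool {
    unimplemented!("swap in the network's actual signature verification")
}

/// The extra check a vault would run before admitting a mutation into consensus.
fn accept_change(validators: Option<&ValidatorSet>, change: &ProposedChange) -> bool {
    let validators = match validators {
        Some(v) => v,
        None => return true, // no validators configured: behave exactly as today
    };
    let mut signed: HashSet<&[u8]> = HashSet::new();
    for (pk, sig) in &change.signatures {
        // count each configured validator at most once
        if validators.public_keys.iter().any(|v| v == pk)
            && verify(pk, sig, &change.payload_hash)
        {
            signed.insert(pk.as_slice());
        }
    }
    signed.len() >= validators.m
}
```

The only new work for a vault is this one check on the mutation-request path, before consensus; everything else about storage and replication stays as it is.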

In this way we can build lots of consensus algorithms and applications on top of SAFE Network, or make it interoperable with nodes from other projects that do cryptocurrency, like Intercoin, or perhaps the decentralized GitHubs of the world.

There has to be some protocol that is general enough to be interoperable with various M of N governance rules within a document. Maybe adding a new validator requires M1 of N validators to sign, while removing one requires M2 of N. Maybe you can do it in bulk, which would be useful when a safecoin or some other document or token changes hands from one corporation to another. Maybe the list of validator public keys could even be stored in other, referenced documents, but that may be overkill.
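Here is a rough sketch of how such governance rules might be represented, again with hypothetical names (`add_threshold` as M1, `remove_threshold` as M2, and a bulk replacement operation). The signature counting itself is elided; `approvals` stands in for the number of valid validator signatures on the proposal, checked as in the earlier sketch.

```rust
/// Hypothetical per-document governance rules for the validator set.
struct ValidatorPolicy {
    validators: Vec<Vec<u8>>, // current validator public keys (the N)
    add_threshold: usize,     // M1: approvals needed to add a validator
    remove_threshold: usize,  // M2: approvals needed to remove a validator
}

impl ValidatorPolicy {
    /// Add a validator key if at least M1 of the current validators approved.
    fn add_validator(&mut self, new_key: Vec<u8>, approvals: usize) -> Result<(), &'static str> {
        if approvals < self.add_threshold {
            return Err("not enough approvals to add a validator");
        }
        if !self.validators.contains(&new_key) {
            self.validators.push(new_key);
        }
        Ok(())
    }

    /// Remove a validator key if at least M2 of the current validators approved.
    fn remove_validator(&mut self, key: &[u8], approvals: usize) -> Result<(), &'static str> {
        if approvals < self.remove_threshold {
            return Err("not enough approvals to remove a validator");
        }
        self.validators.retain(|v| v.as_slice() != key);
        Ok(())
    }

    /// Bulk replacement, e.g. when control passes from one corporation to another.
    fn replace_all(&mut self, new_keys: Vec<Vec<u8>>, approvals: usize) -> Result<(), &'static str> {
        if approvals < self.add_threshold.max(self.remove_threshold) {
            return Err("not enough approvals to replace the validator set");
        }
        self.validators = new_keys;
        Ok(())
    }
}
```

Whether these thresholds live in the document itself or in section metadata is an open design choice; the point is they are just plain counts the vaults can check.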

The point is that the section must REJECT any update that isn’t signed by at least M of N validators. If the section additionally wants to detect corruption in itself (e.g. when a past consensus accepted illegal chess moves), any vault can check for this. Maybe it can’t do anything about it once consensus has been established, but an honest vault will refuse to participate in such a consensus, and as long as a majority of the section’s vaults aren’t compromised, that should be enough.

PS: We could have gone further: if a vault detects that members of a section are colluding and accepting invalid transactions, it could gossip this to others and get them kicked out of the section. And if a MAJORITY of the nodes in the section are compromised and continue not caring, it could gossip to other sections and get the documents migrated away. I brought up this part before as Watchers, but it was rejected because there is no mechanism to migrate documents to another “healthy” section. In any case, such migration could easily be done by the actual end-users of the document, if they can somehow signal each other despite the section being totally compromised; that part can be done without changing SAFE Network. But the feature of rejecting updates that aren’t signed by M of N validators cannot be emulated by doing something else.

3 Likes

Just a quick clarification: the validators’ signatures should reference the hash of the latest proposed transaction (which itself references a previous state of the document, but vaults don’t care about the transaction’s contents, since it is encrypted; they only care about its hash).

So the hash plus M of N validator signatures referencing it is what constitutes “this transaction is valid”. If the document has validators set with M > 0, then any transaction with fewer than M valid signatures is rejected. If M = 0 (the default), that just reduces to what we have now.
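In terms of the earlier sketch, the whole rule collapses to one condition (names hypothetical):

```rust
/// Hypothetical acceptance rule for one proposed transaction.
/// `m` is the document's required validator count (0 means no validators set);
/// `valid_sigs` is how many distinct configured validators produced a
/// verifying signature over the transaction hash.
fn transaction_is_valid(m: usize, valid_sigs: usize) -> bool {
    // m == 0 reduces to today's behaviour: any authorised mutation is accepted
    m == 0 || valid_sigs >= m
}
```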

1 Like

Wouldn’t this be an APP and use the multiple owners of ADs, where it’s proposed that m of n can control the data? Wouldn’t your validators be APPS running on computers, or, once computation is implemented, computation running on the network?

This would mean it does not need to be a feature of the network, but can live at the application layer where all these things should be. Making it part of the core code increases complexity. About the only thing the network level needs to implement is user-accessed mutexes.

2 Likes

It can’t be at the app layer unless you consider the app layer to be at the layer of vaults.

It must be at the model / data layer. Apps run on people’s computers. A rogue app may simply tell the vaults to process unvalidated transactions and mutate the document; it just submits them and they pass consensus. No, invalid transactions must be rejected by VAULTS before running consensus.

Yes, this adds complexity, but it’s a tiny amount. It is literally one extra check before accepting an instruction from an app, just like the other signature checks you already do. If you DON’T do it, SAFE Network vaults will have no way of rejecting invalid transactions on mutable data.

1 Like

The Dat project has the same issue. They designed it assuming you own the Dat, so you won’t screw it up yourself. They don’t even have BFT consensus like you have; it’s just there to replicate things. Dat TRUSTS apps to submit whatever transaction they want, so gibberish or conflicts will just break the “swarm” consensus (what you call section consensus).

Right now SAFE is only good for storing arbitrary files and making arbitrary mutations to them; it has no notion of requiring a mutation to be validated. Adding this closes the hole and makes SAFE interoperable with many systems.

1 Like

Isn’t that what the validators are for: to validate the data and vote on it? That is all the vaults would be doing. And by doing it at the APP/computational layer you can have more validators and much more complex checking/validating than any section can provide. There is no ability to work across sections, as that code does not exist, and you would be asking for a major rewrite of the principles and coding. That is a redesign of the basic workings.

That is what the network does already. For a data object owned by, say, 100 owners, the operation will not occur unless the requirements to do that operation are met (be it all M of M or just N of M owners). SAFE always checks signatures, or the network could never be secure.

That, as I said, is a basic requirement of the network once multiple owners are implemented (if they haven’t been already).

2 Likes

That doesn’t seem to be correct. Maybe I am missing something about how SAFE Network checks signatures? For example, when a token goes from owners ABC to owners DEF, how would this be done at the app layer right now?

If even a single app user submits a transaction to a vault, the vault will simply mutate the data, even if the transaction is not valid. Why wouldn’t it? Where is the requirement for the vault to check the latest owners’ signatures?

1 Like

What do you mean by token?

1 Like

Let’s say it is a safecoin

1 Like

Safecoin is no longer a data object but a balance kept in a special field in your account blob, so it’s a bad example for that reason. And of course any signatures are verified.

If you are talking about writing to an AD, then the section verifies signatures to ensure the write request is valid and authorised.

If you are talking about transferring ownership of an AD, then signatures are checked and verified.

The network ONLY works by verifying signatures for any write, ownership change, or transfer of safecoin.

Is that what you are talking about?

2 Likes

I guess if you support M of N owners signing, then that’s the same thing, yeah. But how does it work?

1 Like

It is supported right now with BLS keys. The owner of the data object is one public key that represents N users. The data mutation must be signed by M of these users to be accepted by the network.

Currently the partially signed mutation (while it still has fewer than M signatures) must be exchanged among the users outside the network. This could be done by an app.

6 Likes

Thanks for that explanation. But how are the keys actually checked before the transaction is committed? What is the process that allows it as long as M of N owners sign off, and how does that work with the “one” public key owner?

1 Like

These keys aggregate, i.e. you get a signature share and confirm it belongs to the correct keyset (BLS allows this), then add it to the other shares. When you have the right number of parts (the n of m), the combined signature is valid against the public key of that data (i.e. it looks as if it was signed by the section private key, which does not exist).

If there is one owner then it is a straight signature check. So think of BLS keys as either single-user keys, which behave just like ed25519 keys, or aggregate keys. The aggregate keys can be added together.

The nice thing with BLS is that we can do distributed key generation (DKG) without a trusted dealer. So it’s a really nice cryptographic feature, and much better (more secure) than adding ed25519 keys in groups of known signers.
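For a feel of how the aggregation works in practice, here is a rough sketch using the threshold_crypto crate. It is illustrative only: it uses a trusted-dealer `SecretKeySet` rather than full DKG, the threshold value and crate/rand versions are assumptions, and the exact API may differ between releases.

```rust
// Cargo.toml (assumed versions): threshold_crypto = "0.4", rand = "0.7"
use threshold_crypto::SecretKeySet;

fn main() {
    let mut rng = rand::thread_rng();

    // Trusted-dealer setup for illustration (a real deployment would use DKG):
    // threshold = 2 means threshold + 1 = 3 shares are needed, i.e. 3-of-n.
    let sk_set = SecretKeySet::random(2, &mut rng);
    let pk_set = sk_set.public_keys();

    // The "one" owner key the network sees is just the group public key.
    let group_pk = pk_set.public_key();

    let msg = b"hash of the proposed mutation";

    // Three owners each sign with their own secret key share.
    let shares: Vec<_> = (0..3usize)
        .map(|i| (i, sk_set.secret_key_share(i).sign(msg)))
        .collect();

    // Anyone can check an individual share against the public key set.
    for (i, share) in &shares {
        assert!(pk_set.public_key_share(*i).verify(share, msg));
    }

    // Combining enough valid shares yields one ordinary-looking signature...
    let sig = pk_set
        .combine_signatures(shares.iter().map(|(i, s)| (*i, s)))
        .expect("need at least threshold + 1 valid shares");

    // ...which verifies against the single group public key, as if signed by a
    // group private key that never exists in one place.
    assert!(group_pk.verify(&sig, msg));
    println!("combined signature verifies against the group key");
}
```

Presumably the combined signature is what an app would submit alongside the mutation, so the vaults only ever verify against the single owner public key.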

15 Likes

This is the point when I start believing in magic. I mean, it’s just effing cool that things like this can be done with cryptography. Mind blown.

5 Likes