I’m not at all au fait with Ethereum and won’t pretend to be. I don’t understand its smart contracts, and the notion I have of them is quite probably completely different from how they actually work. Having said all this, this post doesn’t actually have anything to do with Ethereum. I’m merely mentioning it because what I say next may well have factored into its design, and has probably already been considered by people at MaidSafe and/or in the SAFE network community.
This is not going to be another post on “illegal content”, though that will be a minor factor. I’m going to try to delve into a way in which truly distributed applications could work. I don’t think any competing system allows for this. The concepts are simple - the technology won’t be quite so simple, but it’s far from impossible.
I’ll start with a few statements I believe we’ll all agree on:
- With large groups it’s impossible to have full consensus.
- Entities should not be forced to participate in actions they disagree with.
- Entities should be allowed privacy.
There are many more statements that could be added!
It seems quite logical that if a system enforced contracts, it would adhere to each of these statements. For example, if you are a vegan then your machine shouldn’t be used to store/distribute content that promotes animal testing.
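To make that concrete, here’s a rough sketch (in Rust, with entirely made-up names like `ContentCategory` and `NodeContract` - nothing here is part of any actual SAFE design) of what a node-side contract could look like: the node declares which categories it declines, and the network would only route matching work or data to nodes whose contract accepts it.

```rust
use std::collections::HashSet;

// Hypothetical taxonomy of content/operation categories a node can decline.
#[derive(PartialEq, Eq, Hash)]
enum ContentCategory {
    AnimalTesting,
    Advertising,
}

// The node's participation contract: everything not declined is accepted.
struct NodeContract {
    declined: HashSet<ContentCategory>,
}

impl NodeContract {
    /// True if this node is willing to store/distribute/execute the item.
    fn accepts(&self, tags: &[ContentCategory]) -> bool {
        tags.iter().all(|t| !self.declined.contains(t))
    }
}

fn main() {
    // A vegan node declines anything tagged as promoting animal testing.
    let mut declined = HashSet::new();
    declined.insert(ContentCategory::AnimalTesting);
    let contract = NodeContract { declined };

    assert!(!contract.accepts(&[ContentCategory::AnimalTesting]));
    assert!(contract.accepts(&[ContentCategory::Advertising]));
    println!("contract honoured");
}
```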
The network as it stands doesn’t allow distributed software to run on it; it allows for distributed data, not execution. To achieve execution you’ve obviously got to have nodes on the network sacrifice CPU time. Not only that, you’ve got to have nodes willing to risk running operations they have no knowledge of. Do you want to risk having some process/thread on your machine spinning in an infinite loop - especially if you’re getting no benefit from that process/thread to start with?
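One way to limit that risk - purely a sketch on my part, not anything the network specifies - is metered execution: the node advances unknown work one step at a time against a budget, so an infinite loop exhausts the budget rather than hijacking the node. The `MeteredTask` trait and the step counting below are my own invention for illustration.

```rust
enum StepResult {
    Done(i64),
    NotDone,
}

/// Some opaque task the node knows nothing about, advanced one step at a time.
trait MeteredTask {
    fn step(&mut self) -> StepResult;
}

/// A task that never finishes - the worst case the node has to survive.
struct RunawayLoop {
    counter: u64,
}

impl MeteredTask for RunawayLoop {
    fn step(&mut self) -> StepResult {
        self.counter += 1;
        StepResult::NotDone // never completes
    }
}

/// Run a task, but refuse to spend more than `budget` steps on it.
fn run_with_budget(task: &mut dyn MeteredTask, budget: u64) -> Option<i64> {
    for _ in 0..budget {
        if let StepResult::Done(v) = task.step() {
            return Some(v);
        }
    }
    None // budget exhausted; the node moves on unharmed
}

fn main() {
    let mut task = RunawayLoop { counter: 0 };
    match run_with_budget(&mut task, 10_000) {
        Some(v) => println!("finished with {v}"),
        None => println!("budget exhausted after {} steps - task evicted", task.counter),
    }
}
```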
I don’t know all the details of how distributed websites are planned to work at the minute. However, my understanding is that sites would be pretty much client-heavy, with JS calls through to the SAFE API. I’m struggling to see how this can be done securely with the hardware currently available. Things may be fine for the end user, but what about the site operator? If sites aren’t tamper-resistant then who’s going to create sites on the network?
For truly distributed and secure applications to function, operations can’t run on a server or on the client; they need to be executed on some arbitrary machine(s).
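Something like the following sketch is what I have in mind: the network, rather than the client or any fixed server, picks some arbitrary willing nodes to execute an operation. The `Node` and `Operation` types and the selection rule are just placeholders I’ve made up.

```rust
struct Operation {
    id: u64,
    needs_cpu: bool,
}

struct Node {
    name: &'static str,
    offers_cpu: bool, // has this node opted in to execution work?
}

/// Choose up to `replicas` nodes willing to execute the operation.
fn pick_executors<'a>(nodes: &'a [Node], op: &Operation, replicas: usize) -> Vec<&'a Node> {
    nodes
        .iter()
        .filter(|n| !op.needs_cpu || n.offers_cpu)
        .take(replicas)
        .collect()
}

fn main() {
    let nodes = [
        Node { name: "alpha", offers_cpu: true },
        Node { name: "beta", offers_cpu: false },
        Node { name: "gamma", offers_cpu: true },
    ];
    let op = Operation { id: 1, needs_cpu: true };

    // The operation runs on two arbitrary willing nodes - not on the client,
    // and not on any single operator-owned server.
    for node in pick_executors(&nodes, &op, 2) {
        println!("dispatching op {} to {}", op.id, node.name);
    }
}
```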
So, here we are again with contracts. Currently network users are assumed to sacrifice disk space for tokens, and these tokens allow the users to “do stuff” - this is the default network contract. Disk space is cheap and most people will be fine with this. Not all users will be happy sacrificing CPU time and taking the other risks involved with executing operations on their CPU, and not all users will be happy helping to distribute X, Y and Z.
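As a small sketch of that split (the field names are pure assumption on my part), the default contract could be storage-only, with CPU execution as a strictly separate opt-in:

```rust
struct ResourceContract {
    offer_disk_gb: u32, // every farming node offers some storage by default
    offer_cpu: bool,    // execution is strictly opt-in
}

impl ResourceContract {
    /// The default contract: disk space for tokens, nothing else.
    fn default_contract(disk_gb: u32) -> Self {
        ResourceContract { offer_disk_gb: disk_gb, offer_cpu: false }
    }

    /// A node that is also happy to burn CPU cycles for the network.
    fn with_cpu(mut self) -> Self {
        self.offer_cpu = true;
        self
    }
}

fn main() {
    let cautious = ResourceContract::default_contract(50);
    let generous = ResourceContract::default_contract(200).with_cpu();
    println!("cautious node: {} GB, cpu: {}", cautious.offer_disk_gb, cautious.offer_cpu);
    println!("generous node: {} GB, cpu: {}", generous.offer_disk_gb, generous.offer_cpu);
}
```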
Entities that create websites on the network won’t want to trust users not to look at their JS and start messing with their datastores…this means you need distributed execution…which means you need contracts.
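Here’s a toy sketch of why that helps: the site’s mutation rule runs on arbitrary executor nodes rather than in the user’s JS, so even a tampered client can’t push an invalid write into the datastore. The types, the rule, and the quorum count are all illustrative assumptions, not an actual design.

```rust
struct Mutation {
    key: String,
    new_value: i64,
}

/// The site's rule, executed by nodes the client doesn't control.
/// Here: the high score may only ever increase and must stay below a cap.
fn site_rule(current: i64, m: &Mutation) -> bool {
    m.key == "high_score" && m.new_value > current && m.new_value <= 1_000_000
}

/// Apply the mutation only if a quorum of executor nodes agrees it's valid.
fn apply_if_valid(current: &mut i64, m: &Mutation, executors: usize, quorum: usize) -> bool {
    // Each executor independently evaluates the rule; we count approvals.
    let approvals = (0..executors).filter(|_| site_rule(*current, m)).count();
    if approvals >= quorum {
        *current = m.new_value;
        true
    } else {
        false
    }
}

fn main() {
    let mut high_score = 100;
    let honest = Mutation { key: "high_score".into(), new_value: 150 };
    let tampered = Mutation { key: "high_score".into(), new_value: 999_999_999 };

    assert!(apply_if_valid(&mut high_score, &honest, 5, 3));
    assert!(!apply_if_valid(&mut high_score, &tampered, 5, 3));
    println!("datastore still consistent: {high_score}");
}
```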
I’m very confident it’s possible to achieve truly distributed execution…and if done cleverly it could perform almost as well as current centralised systems (maybe better in a few years) - obviously assuming there are adequate resources within the network.