SAFE launch criteria

Development should continue indefinitely, so what does “launch” mean? For a data-oriented network, data integrity will be essential. SAFE should never fail anyone’s data (lose it, corrupt it, censor it, or allow a privacy violation) once it leaves testnet and mainnet is live. In classic terms, that is the 1.0.

After the “launch” target feature set is functionally complete, I would expect at least one year of rigorous continued testing, use, and development before leaving testnet. It would be too easy for a data fault in a premature SAFE “launch” to damage the brand, possibly irreparably.

Understanding SAFE launch criteria also helps us set reasonable expectations and more accurate development time estimates.

Let’s discuss what criteria a “1.0” should satisfy.

13 Likes

A stable minimum viable product… a basic network, perhaps with an option to recover data if it falls over. Some such phoenix option would obviously be an easy sell to offset initial anxiety about network stability. I wonder whether the plan for launch and the MVP is detailed well enough in the updates… the intent is to get something solid out and build on it. That’s a good idea, as the time spent alive while other elements are added will help confidence.

2 Likes

I think there is little sense in discussing this until a fully decentralized testnet is able to survive for at least several weeks.

No one knows what uses will be possible for the network, so it is hard to predict which features will be most important.
Users should try different scenarios and see what is missing for those scenarios to be useful.
The scenarios can then be ranked by usefulness, and fixes and changes to the testnet can be prioritized for release accordingly.

This is the big one: integrity and consistency. With CRDT data types we also have this fantastic ability to work offline, so the ability to merge data is huge. If any two networks connect (even after a partition), then this not only gives us consistent data but provably correct network data, even in the event of a total network collapse. (There is one gap, and that’s the network/section wallets, but that can be solved as well; it just needs more thought and should not stop launch.)
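Since the argument hinges on CRDT merge semantics, here is a minimal sketch in Rust of a grow-only set (G-Set), the simplest state-based CRDT. This is not the SAFE data-type implementation; it only illustrates why merging after a partition is conflict-free: merge is a set union, which is commutative, associative, and idempotent, so any two replicas that exchange state converge.

```rust
// Minimal grow-only set (G-Set) CRDT sketch, illustrative only.
use std::collections::BTreeSet;

#[derive(Debug, Clone, Default, PartialEq)]
struct GSet<T: Ord + Clone> {
    items: BTreeSet<T>,
}

impl<T: Ord + Clone> GSet<T> {
    // Local update: additions only, never removals.
    fn insert(&mut self, item: T) {
        self.items.insert(item);
    }

    // Merge another replica's state: a plain set union.
    fn merge(&mut self, other: &GSet<T>) {
        self.items.extend(other.items.iter().cloned());
    }
}

fn main() {
    // Two replicas diverge while partitioned...
    let mut a = GSet::default();
    let mut b = GSet::default();
    a.insert("chunk-1");
    b.insert("chunk-2");

    // ...then merge in either order and converge to the same state.
    let mut a_then_b = a.clone();
    a_then_b.merge(&b);
    let mut b_then_a = b.clone();
    b_then_a.merge(&a);
    assert_eq!(a_then_b, b_then_a);
    println!("converged: {:?}", a_then_b);
}
```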

Everything else is secondary. Upgrades and protocol versioning will be extremely important, but they should be sorted post-Fleming.

9 Likes

David, with offline working, will locally held copies be encrypted? Would be a nice bonus if they are.

5 Likes

I feel that apps should do the following (if we can force it, then great): create an encrypted file system that decrypts only on login, and store all data there for offline work. This is not likely to be all your data, as that may not fit on a single machine, but it certainly should be encrypted.

There were some such FUSE-based filesystems (EncFS, IIRC), and that is the minimum we should accept. The problem is that this leaves some local data on your machine, which is not always great, so that option probably needs to be off by default and opt-in.

Stored encrypted chunk (Blob) data should be almost impossible to decrypt (if the chunks were created via our self-encrypt lib). I see this as a fantastic opportunity: the network, on restart, pays folk to re-publish any missing network data. (Network data is data that has been signed by a network section in the past, and that is provable.)
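For readers unfamiliar with the idea, here is a conceptual sketch of self-encryption. It is not the real MaidSafe `self_encryption` crate (whose chunking, key derivation, and ciphers differ), and the XOR “cipher” is a placeholder, not real cryptography. It only shows the property being relied on above: each chunk’s key is derived from the content hashes of its neighbours, so a stored chunk is meaningless without the data map that records those hashes.

```rust
// Conceptual self-encryption sketch; assumes sha2 = "0.10" in Cargo.toml.
use sha2::{Digest, Sha256};

const CHUNK_SIZE: usize = 1024;

// Toy keystream "cipher" for illustration only; a real implementation would
// use an authenticated cipher such as AES-GCM or ChaCha20-Poly1305.
fn xor_with_key(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(b, k)| b ^ k).collect()
}

fn self_encrypt(content: &[u8]) -> (Vec<Vec<u8>>, Vec<Vec<u8>>) {
    let chunks: Vec<&[u8]> = content.chunks(CHUNK_SIZE).collect();

    // The "data map": the content hash of every plaintext chunk.
    let hashes: Vec<Vec<u8>> = chunks
        .iter()
        .map(|c| Sha256::digest(c).to_vec())
        .collect();

    // Encrypt chunk i with a key derived from the hashes of its neighbours,
    // so no single chunk can be decrypted without the data map.
    let encrypted = chunks
        .iter()
        .enumerate()
        .map(|(i, chunk)| {
            let n = hashes.len();
            let mut key = Vec::new();
            key.extend_from_slice(&hashes[(i + 1) % n]);
            key.extend_from_slice(&hashes[(i + 2) % n]);
            xor_with_key(chunk, &key)
        })
        .collect();

    (encrypted, hashes)
}

fn main() {
    let content = vec![42u8; 4000];
    let (encrypted_chunks, data_map) = self_encrypt(&content);
    println!(
        "{} chunks stored, {} hashes kept in the data map",
        encrypted_chunks.len(),
        data_map.len()
    );
}
```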

8 Likes

My criteria.

Alpha 3/4:

  • Anyone can create a one-node network that can be accessed by others.
  • People can add and switch to each other’s “networks”.

Beta:

  • CRDT data types
  • Safe Vault hosting, larger network.

Candidate/Launch:

  • Farming

Please finish Alpha 3!!

Testing, gaming, and cracking Safe’s design and implementation should be easy and fun.

That would require, at least, a high-level API/macro language affording:

  • Deployment of virtual Safe instances.
  • Triggering node behavior and inspecting internal state.
  • Scenario testing.

Bringing testing (validation) of Safe into high-level abstractions would open the door to gamers, a crucial demographic to join security researchers in working to crack Safe and lay the groundwork for formal proof.
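To make the idea concrete, here is a hypothetical sketch of what such a high-level scenario-testing API could look like. None of these types exist in the SAFE codebase; the names (`TestNetwork`, `NodeHandle`, and the methods on them) are invented purely to illustrate the “deploy, trigger, inspect” workflow described above.

```rust
// Hypothetical scenario-testing harness, invented names only.
use std::collections::HashMap;

// A virtual node whose internal state can be inspected directly.
#[derive(Debug, Default)]
struct NodeHandle {
    stored_chunks: HashMap<String, Vec<u8>>,
    online: bool,
}

#[derive(Debug, Default)]
struct TestNetwork {
    nodes: Vec<NodeHandle>,
}

impl TestNetwork {
    // Deploy a virtual network of `n` in-process nodes.
    fn deploy(n: usize) -> Self {
        let nodes = (0..n)
            .map(|_| NodeHandle { online: true, ..Default::default() })
            .collect();
        TestNetwork { nodes }
    }

    // Trigger behaviour: store a chunk on every online node (toy replication).
    fn put_chunk(&mut self, name: &str, data: &[u8]) {
        for node in self.nodes.iter_mut().filter(|n| n.online) {
            node.stored_chunks.insert(name.to_string(), data.to_vec());
        }
    }

    // Trigger a fault: knock a node offline.
    fn kill_node(&mut self, index: usize) {
        self.nodes[index].online = false;
    }

    // Inspect internal state: how many replicas of a chunk survive?
    fn replica_count(&self, name: &str) -> usize {
        self.nodes
            .iter()
            .filter(|n| n.online && n.stored_chunks.contains_key(name))
            .count()
    }
}

fn main() {
    // Scenario test: data should survive the loss of a single node.
    let mut net = TestNetwork::deploy(8);
    net.put_chunk("hello", b"safe");
    net.kill_node(3);
    assert!(net.replica_count("hello") >= 7);
    println!("scenario passed: {} replicas remain", net.replica_count("hello"));
}
```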

4 Likes

A slow path to release essentially means having data with a known level of certainty on the network, and keeping it that way going forward (be this with OR without the release of the coin). Obviously different expectations apply in each circumstance, but from a user’s perspective we would know what to expect.