Sharon
It’s been a heartbreaking few weeks for us as we heard of Sharon’s issues during pregnancy. She had to have wee baby Marsha a little quicker than was expected as the hospital found some concerns. Sharon was private and we respect this. In any case, in recent weeks Sharon has had to go through some intense treatment, which sadly failed. During that time, Sharon being Sharon, she still did the wages, kept all the contracts up to date, sorted out the BGF and the foundation’s accounts and more, and basically continued running the company she loved. She could not be stopped.
Today we lost Sharon, the physical being, but we will never forget her and we will tell Marsha of our experiences with her mother, as others will. Marsha will know her mum as we have known her, an absolute gem of a human being.
“Here’s to you Sharon, you made me a better person and will always be in my thoughts as will Marsha” – David
MaidSafe will of course look after Marc and Marsha.
On to the update - (Sharon would want us to just get launched, no-nonsense, of course)
Pushing as much work as possible onto the client - the device that’s requesting uploads, data reads and other services - makes a lot of sense from the point of view of the network. The client, with its CPU and RAM located on the same device, can run certain tasks far more efficiently than a network of distributed nodes can. More than that, though, delegating work to the client helps us make Safe a network for real life.
While some of us spend most of our lives online, this is not true of the majority of humanity, who, even if they weren’t busy doing other things, cannot rely on their connectivity. The move to DBCs and CRDTs lets us kill two birds with one stone: it benefits the network by cutting down on messaging and expensive consensus algorithms, while at the same time improving the user experience by blurring the boundary between online and offline. The guarantee of eventual delivery means you don’t have to be always online to benefit, as the sketch below illustrates.
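As a tiny illustration of why CRDTs make offline use viable, here is a generic grow-only set (not one of Safe’s actual data types, just the simplest CRDT): two replicas can diverge while offline and still converge once they sync, in any order.

```rust
use std::collections::BTreeSet;

// A grow-only set (G-Set), the simplest CRDT: the only operation is "add",
// and merging is a set union.
#[derive(Clone, Default)]
struct GSet(BTreeSet<String>);

impl GSet {
    fn add(&mut self, item: &str) {
        self.0.insert(item.to_string());
    }

    // Merge is commutative, associative and idempotent, so replicas that
    // sync late, or in a different order, still end up identical.
    fn merge(&mut self, other: &GSet) {
        self.0.extend(other.0.iter().cloned());
    }
}

fn main() {
    let mut replica_a = GSet::default();
    let mut replica_b = GSet::default();

    replica_a.add("chunk-1"); // written while replica A was offline
    replica_b.add("chunk-2"); // written while replica B was offline

    replica_a.merge(&replica_b);
    replica_b.merge(&replica_a);

    assert_eq!(replica_a.0, replica_b.0); // both converge to the same state
}
```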
General progress
Bit of a scratched record this one, but real progress is (still) being made on moving chunk handling duties away from nodes and into routing, improving section split behaviour and tightening up liveness detection - how we can tell if a node is performing as it should (or Feat-ChunksToRouting as it’s known to the team). Doors aren’t open quite yet, but DJ @joshuef is on the decks and it should be party time soon.
Many thanks to @bzee for spotting and fixing a tracing bug in the logging/self-update/tokio crate interaction and updating our config to get around it.
@danda and @anselme have been looking at bulletproofs (the Rust range proofs library), single-use keys, and how currencies like Zcash and Monero manage unlinkable transactions. There’s a lot we can learn, and hopefully a lot we can improve upon too, or at least make simpler.
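For anyone curious what a range proof actually buys you: it lets a prover show a committed value lies in a given range without revealing the value itself. Here is a minimal sketch using the dalek bulletproofs crate mentioned above (crate versions and the transcript label are assumptions, and this isn’t Safe-specific code):

```rust
use bulletproofs::{BulletproofGens, PedersenGens, RangeProof};
use curve25519_dalek::scalar::Scalar;
use merlin::Transcript;
use rand::thread_rng;

fn main() {
    // Generators for Pedersen commitments and for the proof itself.
    let pc_gens = PedersenGens::default();
    let bp_gens = BulletproofGens::new(64, 1);

    // The secret amount, plus a random blinding factor for the commitment.
    let secret_value = 1037578891u64;
    let blinding = Scalar::random(&mut thread_rng());

    // Prove that secret_value fits in 32 bits, without revealing it.
    let mut prover_transcript = Transcript::new(b"range proof example");
    let (proof, committed_value) = RangeProof::prove_single(
        &bp_gens,
        &pc_gens,
        &mut prover_transcript,
        secret_value,
        &blinding,
        32,
    )
    .expect("proving should succeed for an in-range value");

    // A verifier only ever sees the proof and the commitment.
    let mut verifier_transcript = Transcript::new(b"range proof example");
    assert!(proof
        .verify_single(&bp_gens, &pc_gens, &mut verifier_transcript, &committed_value, 32)
        .is_ok());
}
```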
@oetyng has been adding a first basic level of message prioritisation, which is essential to lower the impact of spam. He’s also been tweaking the data self-encryption algorithm, and initial tests show 460% faster writes on a 6-core machine. (The most significant part of the improvement comes from using more cores, so the number available will largely determine the percentage improvement.) These sorts of optimisations will make a huge difference once the network is up and running.
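To give a feel for why core count matters here, below is a rough sketch of the parallelisable shape of self-encryption. To be clear, this is not the self_encryption crate’s actual algorithm: the 1 MiB chunk size, the XOR “cipher” stand-in and the key-from-neighbour derivation are all illustrative. The point is simply that each chunk can be processed independently, so rayon can spread the work across cores.

```rust
use rayon::prelude::*;
use sha2::{Digest, Sha256};

const CHUNK_SIZE: usize = 1024 * 1024; // 1 MiB; an assumed chunk size

// Stand-in "cipher": XOR with a key. The real implementation uses a proper
// cipher plus obfuscation; this only shows the per-chunk independence.
fn encrypt_chunk(chunk: &[u8], key: &[u8; 32]) -> Vec<u8> {
    chunk
        .iter()
        .enumerate()
        .map(|(i, b)| b ^ key[i % 32])
        .collect()
}

fn self_encrypt(data: &[u8]) -> Vec<Vec<u8>> {
    let chunks: Vec<&[u8]> = data.chunks(CHUNK_SIZE).collect();

    // Each chunk's key is derived from the content of a neighbouring chunk,
    // which is what makes the result self-describing and deduplicable.
    let hashes: Vec<[u8; 32]> = chunks
        .par_iter()
        .map(|c| Sha256::digest(c).into())
        .collect();

    chunks
        .par_iter() // the parallel step: roughly one chunk per core
        .enumerate()
        .map(|(i, c)| encrypt_chunk(c, &hashes[(i + 1) % hashes.len()]))
        .collect()
}

fn main() {
    let data = vec![7u8; 3 * CHUNK_SIZE];
    let encrypted = self_encrypt(&data);
    assert_eq!(encrypted.len(), 3);
}
```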
The task of bug hunting is made much easier by clear, well-formatted error reports, and Team Chris (@chriso and @chris.connelly) have now implemented the [color-eyre](https://github.com/yaahc/color-eyre) error report handler in the sn_api and safe_network repos.
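For those unfamiliar with it, color-eyre hooks in once at startup and then renders any error that bubbles up as a readable, colourised report. The usual setup looks something like this (a minimal example, not our exact code):

```rust
use color_eyre::eyre::{eyre, Result};

fn main() -> Result<()> {
    // Install the color-eyre panic and error report handlers once, at startup.
    color_eyre::install()?;

    // Any error returned from here on is rendered as a colourised report.
    run()
}

fn run() -> Result<()> {
    Err(eyre!("something went wrong deep in the call stack"))
}
```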
Chunk batching and pre-pay
This one generated some discussion after last week’s update (thanks for the feedback), so it’s probably worth digging a bit deeper. What we’re talking about here is paying for uploads. To do so, the client does the work of aggregating all the agreements, and once it has approval it can pre-pay.
The primary aim is to improve performance: paying for all the chunks that make up a file at once is far more efficient than paying for each chunk individually with a DBC. It also makes encryption and local storage a core part of the uploading process, so chunks can then be uploaded at any time. This means the software can be used with only sporadic connectivity to the network. With encryption to disk already done, payment is simple and quick, and uploading can happen even one chunk at a time if network connectivity/bandwidth is really poor.
On the UX side, the client can get a quote for their data upload, then pay whenever they want by simply presenting the guaranteed quote. ‘Book now, pay later’ is a way to make the process smoother. There are no retries because the price changed in the meantime: you have your price, and even if network connectivity is flaky (as it is for most people around the world), you won’t have trouble proceeding.
The process goes like this: you self-encrypt your data and store the chunks locally, then ask the network for a quote. Based on the number of chunks, you receive a quote that you can accept or reject. If you accept it, you can pay at any time. The quote is specifically tied to your data and is guaranteed. A rough sketch of the flow is below.
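To make the sequence of steps concrete, here is a purely hypothetical sketch. None of these types or methods are the real Safe client API; the names, the 1 KiB chunking and the one-SNT-per-chunk pricing are all placeholders.

```rust
struct Chunk(Vec<u8>);

struct Quote {
    chunk_count: usize,
    price: u64,         // price in SNT units (illustrative)
    signature: Vec<u8>, // the network's guarantee that the quote will be honoured
}

struct PaymentProof {
    quote_sig: Vec<u8>,
}

struct Client;

impl Client {
    // Step 1: self-encrypt locally; this works fully offline.
    fn self_encrypt(&self, data: &[u8]) -> Vec<Chunk> {
        data.chunks(1024).map(|c| Chunk(c.to_vec())).collect()
    }

    // Step 2: ask the network for a quote tied to these specific chunks.
    fn request_quote(&self, chunks: &[Chunk]) -> Quote {
        Quote { chunk_count: chunks.len(), price: chunks.len() as u64, signature: vec![] }
    }

    // Step 3: pay at any later time by presenting the guaranteed quote.
    fn pay(&self, quote: &Quote) -> PaymentProof {
        PaymentProof { quote_sig: quote.signature.clone() }
    }

    // Step 4: upload whenever connectivity allows - even one chunk at a time.
    fn upload(&self, _chunk: &Chunk, _proof: &PaymentProof) {}
}

fn main() {
    let client = Client;
    let chunks = client.self_encrypt(b"some file contents");
    let quote = client.request_quote(&chunks);
    println!("quoted {} SNT for {} chunks", quote.price, quote.chunk_count);

    let proof = client.pay(&quote); // 'book now, pay later'
    for chunk in &chunks {
        client.upload(chunk, &proof); // resumable, chunk by chunk
    }
}
```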
This assumes the SNT price and the network will not be massively affected by a large upload event, i.e. that the network is large and capable of expanding. To prevent spamming via persistent malicious quote queries, messages from a client are ranked lower than node and infrastructure messages; in other words, the stability of the network is prioritised. A sketch of that ranking follows.
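To illustrate the ranking (the tiers and their names here are assumptions, not the real message types), a priority queue that always serves the network’s own traffic first might look like this:

```rust
use std::collections::BinaryHeap;

// Hypothetical priority tiers, ordered so that infrastructure messages
// outrank node-to-node messages, which outrank client requests.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Priority {
    Client,
    Node,
    Infrastructure,
}

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct Message {
    priority: Priority, // compared first, so it dominates the ordering
    payload: String,
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(Message { priority: Priority::Client, payload: "quote query".into() });
    queue.push(Message { priority: Priority::Infrastructure, payload: "section split".into() });
    queue.push(Message { priority: Priority::Node, payload: "chunk replication".into() });

    // Pops in priority order: Infrastructure, Node, Client - so a flood of
    // client quote queries can't starve the network's own housekeeping.
    while let Some(msg) = queue.pop() {
        println!("{:?}: {}", msg.priority, msg.payload);
    }
}
```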
Finally, we’ve noted that the community would like a bit more of an update on the network economy in general. We hope to have something more extensive on that topic in the coming weeks!