Thanks to everyone who took part in the DiskNet testnet this week. Despite its ‘rapid unscheduled disassembly’ (© SpaceX), we really did learn some valuable lessons from it, and fortunately the fixes shouldn’t be too tricky. We also found a bug related to logging which has already been sorted, so we’ll be fully ready to go once the next iteration is ready.
Community thanks
Thanks to marcelosousa for their PR removing some over-the-top Reviewpad summaries.
Thanks to @mav for his work so far on improving the wallet UX.
General progress
Happy to say the memory and CPU spikes we saw in the previous testnet when uploading data seem to be things of the past, thanks to a change in the data republishing code. @joshuef has been running tests on this and the behaviour hasn’t recurred, so fingers crossed that’s that.
@bzee and @aed900 are making progress on AutoNAT - detection of nodes behind home routers/firewalls. They’ve been studying the testnet logs to spot potential issues and work through how AutoNAT might mitigate them.
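For the curious, here's roughly what wiring up AutoNAT looks like with rust-libp2p. This is a minimal sketch rather than our actual code, and the exact config fields, feature flags and module paths can vary between libp2p versions:

```rust
// Minimal sketch (not our node code) of composing rust-libp2p's AutoNAT
// behaviour so a node can learn whether it is publicly reachable or sitting
// behind a home router/firewall. Requires libp2p's "autonat" feature.
use std::time::Duration;

use libp2p::{autonat, identity, PeerId};

fn main() {
    let keypair = identity::Keypair::generate_ed25519();
    let local_peer_id = PeerId::from(keypair.public());

    // AutoNAT asks already-connected peers to dial us back; the outcome is
    // reported as NAT status events (Public / Private / Unknown).
    let _autonat = autonat::Behaviour::new(
        local_peer_id,
        autonat::Config {
            boot_delay: Duration::from_secs(15), // wait before the first probe (illustrative value)
            retry_interval: Duration::from_secs(90),
            ..Default::default()
        },
    );

    // In a real node this behaviour would be composed into the swarm alongside
    // Kademlia and the rest, with the node reacting to StatusChanged events to
    // decide whether it can accept incoming connections directly.
}
```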
The other remaining piece of the puzzle is how to store registers. Is the libp2p way good enough for now, or do we need to come up with a custom solution? The same applies to DBCs, but since there is no CRDT logic involved in that case, these should be much easier. This is what @anselme and @bochaco are looking into at the moment, working through the pros and cons.
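To make the distinction concrete, here's a hand-wavy sketch of why the two cases differ. The `Register` type and its `merge` method are illustrative stand-ins rather than our actual types, and the `Record` re-export path may differ between libp2p versions:

```rust
// Sketch only: a DBC is immutable, so it maps cleanly onto a plain Kademlia
// record, while a register is a CRDT whose replicas need to be merged rather
// than overwritten by a last-write-wins put.
use std::collections::BTreeSet;

use libp2p::kad::Record; // path may differ between libp2p versions

// An immutable blob (e.g. a serialised DBC) can be stored as-is:
fn dbc_to_record(key: Vec<u8>, dbc_bytes: Vec<u8>) -> Record {
    Record::new(key, dbc_bytes)
}

// A register needs merge semantics; a naive record put would let a later
// write silently drop concurrent edits. (Grow-only set used as a stand-in CRDT.)
#[derive(Default)]
struct Register {
    entries: BTreeSet<Vec<u8>>,
}

impl Register {
    fn merge(&mut self, other: &Register) {
        // Union of entries: edits from all replicas survive the merge.
        self.entries.extend(other.entries.iter().cloned());
    }
}
```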
@qi_ma is optimising the data republication process. What we want is for data to be republished to any new data-holders every time there is a churn event in a close group (the eight closest nodes, XOR-wise). As well as providing redundancy, this ensures that the routing tables held by nodes are always up to date. The libp2p way is not quite right for us as it is periodic rather than event-driven, and it can be quite heavy, so we're looking at using it as a backstop, in conjunction with more event-driven replication.
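As a rough illustration of what "close group" means here (a sketch, not our actual implementation; the constant and function names are made up):

```rust
// Illustrative sketch: the close group is the CLOSE_GROUP_SIZE peers whose IDs
// are nearest to a data address in XOR distance. On a churn event the data
// would be re-sent to any newcomer that enters this set.
const CLOSE_GROUP_SIZE: usize = 8;

/// XOR distance between two 256-bit IDs, compared lexicographically.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i];
    }
    out
}

/// The eight known peers closest to `target`, XOR-wise.
fn close_group(target: &[u8; 32], mut peers: Vec<[u8; 32]>) -> Vec<[u8; 32]> {
    peers.sort_by_key(|p| xor_distance(target, p));
    peers.truncate(CLOSE_GROUP_SIZE);
    peers
}
```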
Qi and @bochaco have also been digging into the connectivity problems experienced during the testnet, which seem to be caused by code panics in the RecordStore module.
Related to this is data republishing on churn, which is a little more complicated with registers. @bochaco has created a new end-to-end test for verifying register data integrity during node churn events.
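The real test drives actual nodes, but the property it checks boils down to something like this toy model (everything here is a hypothetical stand-in, not our test harness):

```rust
// Self-contained toy model of the property the end-to-end test verifies:
// register entries survive node churn because data is republished to the
// replacement nodes in the close group.
use std::collections::{BTreeSet, HashMap};

type NodeId = u64;
type Entries = BTreeSet<Vec<u8>>;

struct ToyNetwork {
    replicas: HashMap<NodeId, Entries>, // each node's copy of the register
}

impl ToyNetwork {
    fn new(nodes: &[NodeId], entries: Entries) -> Self {
        let replicas = nodes.iter().map(|id| (*id, entries.clone())).collect();
        Self { replicas }
    }

    /// Replace `old` with `new`, republishing the register to the newcomer.
    fn churn(&mut self, old: NodeId, new: NodeId) {
        self.replicas.remove(&old);
        // Copy entries from a surviving replica into the new node's store.
        let copy = self.replicas.values().next().cloned().unwrap_or_default();
        self.replicas.insert(new, copy);
    }
}

#[test]
fn register_survives_churn() {
    let entries: Entries = [b"hello".to_vec()].into_iter().collect();
    let mut net = ToyNetwork::new(&[1, 2, 3, 4], entries.clone());

    net.churn(1, 5);
    net.churn(2, 6);

    // Every remaining replica should still hold the original entry.
    assert!(net.replicas.values().all(|e| e == &entries));
}
```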
And @roland is working on improving the logging process in preparation for the next testnet. Hold onto your hats.
Useful Links
Feel free to reply below with links to translations of this dev update and moderators will add them here:
Russian
German
Spanish
French
Bulgarian
As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!