Update 8 June, 2023

All around the world, thousands of nodes are still humming away, carefully and unquestioningly storing and replicating the community’s cockerel pics and unfathomable music choices for ever, and ever, and ever. Don’t worry, we’re going to take this testnet down in due course. We’re not there just yet but as time goes on the network is becoming noticeably more stable, predictable and scalable - which is extremely gratifying. Please do carry on uploading and sharing, and post your logs if you see anything untoward.

Under the hood

Some conclusive test results have convinced us to switch our data replication model from push to pull. Under the push model that we’ve been using up to now, when there is churn (a node goes offline), the nodes holding copies of the dead node’s data push those chunks to the next nearest nodes. Some of those pushes will succeed and others won’t, but there should be enough tolerance in the system (controlled by the choice of k-value) to ensure no data is lost.

The pull method is similar, except that the new node, realising it is the heir to the dead node, requests the chunks from its close group. Again, some requests may succeed and others fail, but in the end all chunks should be received.

So what’s the difference? It’s in the number of messages being sent and the number of extra copies that are necessary to make the push model work. @Qi_ma has been running tests under extreme churn conditions and discovered that pull is superior on every metric - messages, CPU, memory and success rate. Therefore switching was a no-brainer. As of now we’re using the pull model.

It’s much better, but we are still seeing occasional dropped messages, which is why the success rate isn’t 100% just yet.

Elsewhere, DBCs are now being replicated, laying the groundwork for paying for data, which will be the topic of a future testnet. @bochaco has created an illustration of Merkle DAGs and how they build a tree over the chunks that lets us use the root of the tree as the storage payment proof, which we reproduce here.

The storage payment proofs are generated using a binary Merkle tree:
A Merkle tree, also known as a hash tree, is a data structure used for data verification
and synchronisation. It is a tree data structure in which each non-leaf node is the hash of
its child nodes. All the leaf nodes are at the same depth and are as far left as
possible. It maintains data integrity by way of these hash functions.

In Safe, in order to pay the network for data storage, all files are first self-encrypted,
producing all the chunks the user needs to pay for before uploading them. A binary Merkle
tree is then built from these chunks' addresses/XorNames: each leaf in the tree holds
the hash of one chunk's XorName/address.

The following tree depicts how two files A and B, with two chunks each, would be used
to build the Merkle tree:

                                       [ Root ]
                                  hash(Node0 + Node1)
                                          ^
                                          |
                     *--------------------------------------------*
                     |                                            |
                 [ Node0 ]                                   [ Node1 ]
            hash(Leaf0 + Leaf1)                         hash(Leaf2 + Leaf3)
                     ^                                           ^
                     |                                           |
         *----------------------*                    *----------------------*
         |                      |                    |                      |
     [ Leaf0 ]              [ Leaf1 ]            [ Leaf2 ]              [ Leaf3 ]
 hash(ChunkA_0.addr)    hash(ChunkA_1.addr)  hash(ChunkB_0.addr)    hash(ChunkB_1.addr)
         ^                      ^                    ^                      ^
         |                      |                    |                      |
    [ ChunkA_0 ]           [ ChunkA_1 ]         [ ChunkB_0 ]           [ ChunkB_1 ]
         ^                      ^                    ^                      ^
         |                      |                    |                      |
         *----------------------*                    *----------------------*
                     |                                           |
              self-encryption                             self-encryption
                     |                                           |
                 [ FileA ]                                   [ FileB ]

The user links the payment to the storing nodes by setting the Merkle tree's root value
as the 'Dbc::reason_hash'. Thanks to the properties of the Merkle tree, the user can then
provide the output DBC, together with an audit trail for each of the chunks paid for with
that same DBC and tree, to the storage nodes when uploading the chunks for storage on the network.

@Anselme has been working through some bugs in the faucet and wallet, which are currently failing to mark DBCs as spent on the network. He’s also solved some logging issues.

@Roland has raised a PR to collect system, network and process metrics. This is behind a feature gate, meaning it can be toggled on and off according to our needs. It should be useful during testnets to collect metrics from the community. Later we will enable OpenSearch metrics, but this is a simple solution for now.

@ChrisO has got an automated testnet release process working experimentally; it should be ready for live action very soon. He and @aed900 have also been working on other aspects of testnets, including secrets management on Digital Ocean. Angus has also been looking at an issue where private nodes (behind NAT) get written into the routing table, which is not what we want.

Fortunately @bzee, who has been digging into libp2p, believes an upcoming version should solve this issue for us.

Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!


Hey first again, my lucky month

Better be cat pics or my faith in the cat pics/videos taking over the internet is squashed


Silver thanks to an early wake!


Update at this time of the day, you and the team were in cahoots for that win :trophy: :laughing:



Simple idea - take a bow!


Thrilling progress! It’s neat to see how tests are being done and how many metrics are collected to give an overall picture of how the Safe Network is progressing.


6th! :rofl:

Is a Merkle tree not shaky and trembling as the chancellor of Germany?


as a guy from Amsterdam i can relate to hash tree though :laughing:


Thanks so much to the entire Maidsafe team for all of your hard work! This new testnet gives us all hope! :horse_racing:

What is this building in the top photo? I see you found some real estate that is in my price range. :derelict_house:


Thanks for the update Team. Things are coming together nicely. Increment, test, repeat.

Keep the energy high - don’t forget to go for walks, enjoy the summer up North while you can and breathe!

Cheers :beers:


The stable net is coming, fast now! Thanks so much to all who build and participate, and never give up, who stay positive, and grateful: the secret to vitality in old age.


Thx 4 the update Maidsafe devs and all your hard work

A bit out of words how rapidly everything is going. :exploding_head:
Incredible that you can just change the model and see such improvements

Keep hacking super ants


So, rather than…





Nice discovery!

Knock and the door will be opened, seek and ye shall find, request chunks and be safe… :sweat_smile:


And talk to me again! :grin:

Next testnet will be DBCNet?! :laughing:


Thank you for the heavy work team MaidSafe! I add the translations in the first post :dragon:

Privacy. Security. Freedom