Update 13 July, 2023

It’s getting a bit repetitive, but once again this week we can report that the NodeDiscoveryNet testnet is still up and running. A bit rough round the edges and in need of refinement, sure, but the foundations are feeling very solid. This stability is no longer a surprise, but after many years of excitement as we’ve attempted to make this thing fly, frankly this is the type of boredom we can live with.

Among the tweaks resulting from the testnet findings, we are improving the error messaging to users when a node fails to connect properly. Currently, when this happens there are no obvious signs for the user, who has to dig into the logs - although the lack of chunks is a giveaway.
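The gist of the improvement is to surface a clear, human-readable error instead of leaving the failure buried in the logs. A rough sketch of the idea (the error variants and wording below are made up for illustration, not the node's actual error types):

```rust
use std::fmt;

/// Illustrative user-facing error for a failed node connection; the real
/// error types in the node crate will differ.
#[derive(Debug)]
enum ConnectError {
    PeerUnreachable { addr: String },
    Timeout { addr: String },
}

impl fmt::Display for ConnectError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConnectError::PeerUnreachable { addr } => {
                write!(f, "Could not reach peer at {addr}: the address appears to be inaccessible")
            }
            ConnectError::Timeout { addr } => write!(f, "Connection to {addr} timed out"),
        }
    }
}

fn main() {
    // Instead of only logging, surface the failure directly to the user.
    let err = ConnectError::PeerUnreachable { addr: "/ip4/10.0.0.7/tcp/12000".into() };
    eprintln!("Failed to join the network: {err}");
}
```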

Most connection failures are a result of trying to connect to inaccessible peer addresses. We’ve also seen far more connections than you might expect to valid addresses (given that libp2p offers multiplexing). No more than a handful per peer should exist at any one time, but we’ve seen hundreds! After some digging, this turned out to be a feature (not a bug…) of libp2p, just not one optimised for our use case. @bzee reached out, and Max Inden of Protocol Labs kindly came up with a patch which has seen the number of connections fall from dozens to just six or seven. Thanks Max!
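The real fix is a patch inside libp2p itself, but the effect is easy to picture: keep a per-peer count of established connections and drop anything beyond a small cap. A toy sketch of that idea (not the actual patch; peer IDs are plain strings here, and the cap of 7 is just the figure mentioned above):

```rust
use std::collections::HashMap;

/// Illustrative connection gate: at most `max_per_peer` established
/// connections per peer are allowed at any one time.
struct ConnectionGate {
    max_per_peer: usize,
    established: HashMap<String, usize>, // peer id -> open connection count
}

impl ConnectionGate {
    fn new(max_per_peer: usize) -> Self {
        Self { max_per_peer, established: HashMap::new() }
    }

    /// Returns true if a new connection to `peer` may be kept open.
    fn allow(&mut self, peer: &str) -> bool {
        let count = self.established.entry(peer.to_string()).or_insert(0);
        if *count >= self.max_per_peer {
            return false; // close the extra connection instead of keeping it
        }
        *count += 1;
        true
    }

    /// Call when a connection to `peer` is closed.
    fn closed(&mut self, peer: &str) {
        if let Some(count) = self.established.get_mut(peer) {
            *count = count.saturating_sub(1);
        }
    }
}

fn main() {
    let mut gate = ConnectionGate::new(7);
    for i in 0..10 {
        println!("connection {i} allowed: {}", gate.allow("12D3KooW...peer"));
    }
    gate.closed("12D3KooW...peer");
    println!("after one close, allowed again: {}", gate.allow("12D3KooW...peer"));
}
```

For what it’s worth, recent libp2p releases also ship an optional connection-limits behaviour that can cap established connections per peer at the swarm level.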

We found that nodes are doing a get_closest check every time a new node is added, whereas they should only do this when they first join, so that’s some more overhead we’ve shaved off. There will be more.
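In sketch form, the change boils down to gating that query behind a “have we joined yet?” flag instead of firing it every time the routing table grows (the names below are illustrative, not the actual node code):

```rust
/// Illustrative sketch of running the closest-peers query on first join only.
struct NodeState {
    joined: bool,
}

impl NodeState {
    fn new() -> Self {
        Self { joined: false }
    }

    /// Called whenever a new peer lands in the routing table.
    fn on_peer_added(&mut self, peer: &str) {
        if !self.joined {
            // Only the very first time: query the network for our closest peers.
            self.get_closest_to_self();
            self.joined = true;
        }
        // Otherwise just record the peer; no network query needed.
        println!("added peer {peer}, no get_closest triggered");
    }

    fn get_closest_to_self(&self) {
        println!("running one-off get_closest query on first join");
    }
}

fn main() {
    let mut node = NodeState::new();
    node.on_peer_added("peer-a"); // triggers the one-off get_closest
    node.on_peer_added("peer-b"); // does not
}
```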

In addition, we’ve been looking deeper into register security, considering what would happen if an attacker, instead of trying to change data in a register (virtually impossible without the correct authorisation), simply replaced the entire register - which is not impossible with our current setup. We are working through the best ways to fix this.
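One way to picture a possible mitigation (purely an illustration, not necessarily the approach we’ll take): derive a register’s network address from its owner’s key, so a wholesale replacement created under a different key can never legitimately sit at the same address. A toy sketch, using a non-cryptographic hash and string keys for brevity:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy register: the address is derived from the owner's key and a name,
/// so a register created by a different owner can never land on this address.
/// (A real implementation would use a cryptographic hash and real keys.)
struct Register {
    owner_key: String,
    name: String,
    address: u64,
}

fn derive_address(owner_key: &str, name: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    owner_key.hash(&mut hasher);
    name.hash(&mut hasher);
    hasher.finish()
}

impl Register {
    fn new(owner_key: &str, name: &str) -> Self {
        Self {
            owner_key: owner_key.to_string(),
            name: name.to_string(),
            address: derive_address(owner_key, name),
        }
    }
}

/// A node holding `existing` rejects a replacement whose claimed address
/// doesn't match what the candidate's own owner key and name derive to.
fn accept_replacement(existing: &Register, candidate: &Register) -> bool {
    candidate.address == existing.address
        && derive_address(&candidate.owner_key, &candidate.name) == candidate.address
}

fn main() {
    let original = Register::new("owner-public-key", "my-register");
    let mut forged = Register::new("attacker-key", "my-register");
    forged.address = original.address; // attacker claims the same address
    println!("forged register accepted: {}", accept_replacement(&original, &forged)); // false
}
```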

General progress

@joshuef has made some tweaks to the replication flow, including one that shuffles data waiting to be replicated/fetched, to prevent one end of the close group being hammered due to XOR-space ordering. Along with the excessive connections and over-messaging, this is another probable cause of nodes chucking in the towel.
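The shuffling part is simple at heart: rather than walking the pending keys in XOR order, which sends the first wave of requests to the same few nodes, randomise the order before fetching. A minimal sketch using the rand crate (the keys are just labels here):

```rust
use rand::seq::SliceRandom;

fn main() {
    // Keys awaiting replication/fetch, here just labelled strings.
    // Walked in XOR order, these would hammer the same end of the close group first.
    let mut pending: Vec<String> = (0..10).map(|i| format!("chunk-key-{i:02}")).collect();

    // Shuffle so fetch requests spread across the close group from the start.
    pending.shuffle(&mut rand::thread_rng());

    for key in &pending {
        println!("fetching {key}");
    }
}
```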

@Roland has been working on a test for verifying where any particular piece of data is on the network, and @Qi_ma is getting registers into the churn test, so we can see how these cope when things get wild. After that, we’ll be looking at refining our data retention tests, and turning our attention to DBCs.

With that in mind, @bochaco has refactored how the client chunks files during self-encryption and pays for their storage. Previously we were chunking files twice (first to create the payment Merkle tree and then again when uploading). We now generate the chunks and store them in a local temp folder when paying, then read from that temp folder in batches when uploading the paid chunks. This should reduce the client’s memory footprint, especially for large files, as the chunks no longer need to be held in memory.
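In outline, the new flow looks something like the sketch below: chunk once, spill the chunks to a temp folder while paying, then stream them back in batches for upload. The helper names and the placeholder upload call are invented for illustration; the real client uses the self-encryption and network APIs.

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Write each chunk to a temp folder as it is produced, instead of holding
/// every chunk in memory until upload time.
fn store_chunks(chunks: &[Vec<u8>], temp_dir: &Path) -> io::Result<Vec<PathBuf>> {
    fs::create_dir_all(temp_dir)?;
    let mut paths = Vec::with_capacity(chunks.len());
    for (i, chunk) in chunks.iter().enumerate() {
        let path = temp_dir.join(format!("chunk_{i}.bin"));
        fs::write(&path, chunk)?;
        paths.push(path);
    }
    Ok(paths)
}

/// Read the paid-for chunks back from disk in batches and upload each batch.
fn upload_in_batches(paths: &[PathBuf], batch_size: usize) -> io::Result<()> {
    for batch in paths.chunks(batch_size) {
        for path in batch {
            let chunk = fs::read(path)?; // only this batch is held in memory
            upload_chunk(&chunk);
        }
    }
    Ok(())
}

/// Placeholder for the real network upload call.
fn upload_chunk(chunk: &[u8]) {
    println!("uploading {} bytes", chunk.len());
}

fn main() -> io::Result<()> {
    // Pretend these came out of self-encrypting a file (chunked only once).
    let chunks: Vec<Vec<u8>> = (0..8).map(|i| vec![i as u8; 1024]).collect();
    let temp_dir = std::env::temp_dir().join("paid_chunks_demo");
    let paths = store_chunks(&chunks, &temp_dir)?;
    upload_in_batches(&paths, 3)?;
    fs::remove_dir_all(&temp_dir)?;
    Ok(())
}
```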

@Anselme has upgraded the faucet. What was a simple standalone file sitting on the local machine is now an HTTP server that sends tokens to the address supplied in the request. So it’s self-service, and we no longer need one person to claim the Genesis key and then dole out the tokens manually when people send their keys. That puts us in a good place for when we’re ready to start dishing out tokens to test the DBCs in future testnets.
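To give a feel for the shape of such a faucet (this is not the actual implementation; the endpoint, port and token-sending stub below are made up), here’s a bare-bones HTTP server that reads a wallet address from the request path and “sends” tokens to it:

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::{TcpListener, TcpStream};

/// Stand-in for the real wallet/DBC transfer logic.
fn send_tokens(address: &str) -> String {
    format!("sent test tokens to {address}")
}

fn handle(mut stream: TcpStream) {
    // Read only the request line, e.g. "GET /faucet/<wallet-address> HTTP/1.1".
    let mut request_line = String::new();
    {
        let mut reader = BufReader::new(&stream);
        if reader.read_line(&mut request_line).is_err() {
            return;
        }
    }

    let path = request_line.split_whitespace().nth(1).unwrap_or("/");
    let body = match path.strip_prefix("/faucet/") {
        Some(address) if !address.is_empty() => send_tokens(address),
        _ => "usage: GET /faucet/<wallet-address>".to_string(),
    };

    let response = format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    );
    let _ = stream.write_all(response.as_bytes());
}

fn main() -> std::io::Result<()> {
    // Self-service: anyone can request tokens without a human doling them out.
    let listener = TcpListener::bind("127.0.0.1:8000")?;
    for stream in listener.incoming().flatten() {
        handle(stream);
    }
    Ok(())
}
```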

This will soon find itself added to the testnet tool, which @aed900 has been refactoring together with @Chriso.


Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!

55 Likes

I’ll cheat and get first this week.

27 Likes

Honourable second.

Especially since I read the whole thing before claiming my podium.

Thanks to all for the hard work on this testnet - AND all the work that went into getting us where we are today.

Keep hacking super-ants (c) @19eddyjohn75

22 Likes

Bronze! Well done team!

23 Likes

Still early bird medal! Wohooo

20 Likes

Fantastic update team Ant! Nice to see the bugs hammered out of the libp2p integration. Soon on to DBCs!

Thanks for all the hard work as usual team.

Cheers :beers: :partying_face:

17 Likes

Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

And it’s great that our testnet is still alive! :horse_racing:

20 Likes

Nothing boring about it to me, great work team, and keep trying to make me bored please!

19 Likes

Not at all boring. Stable and non-turbulent is how I like my flights to exciting destinations. :slight_smile:

We don’t have the exciting visuals that SpaceX has, but these stable nets feel like we’ve successfully mastered landing on a barge.

If we had a physical product we would be all over the news. This is amazing progress.

21 Likes

And on it goes!!! Great job!

16 Likes

Thx 4 the update Maidsafe devs

Tweaks and bug fixes are a great place to be in.

Cheers to everybody in this amazing community and beyond :crazy_face:

Keep hacking super ants

14 Likes

Too late. And you can’t sue for copyright.

8 Likes

One has to be absolutely silly to sue for copyright. :exploding_head:

Everything in our language, art, code, DNA is a copy of a copy :crazy_face:

7 Likes

Very true. And who gets to sue successfully is highly dependent on how much money they have. Apple recently tried to sue an apple farm for having an apple on their logo. Fortunately that was a step too far for the courts, but for how long?

9 Likes

AI copies the whole internet, and after that it dumps a million years of scientific discovery on us within 2.5 months.

What egocentric humans want is to sue AI companies for plagiarism, because they are so original. If they win, centuries pass: what we could have had within 5-10 years, humanity now gets after 2.5 centuries. :sweat_smile:

5 Likes

The last time I did any testing was some years before the Rust conversion when there was a massive C repo - ie a lot has changed since I last looked - so, what is the need for the faucet if DBCs are not working yet?

Pardon the current cluelessness...

5 Likes

They have been kinda working, but the effort has been put in other areas recently.

In order to test DBCs better in future, they need a more automated system than having one person claim the genesis DBC and manually dole out the tokens to others, which is what has been done up to now.

With this faucet being done now, I’d hazard a guess that DBCs will be worked on and finalised in the near future.

12 Likes

13th!!! :sunglasses:

Solid progress. Love to see it.

11 Likes

Great progress.

I do have a question though - if a bug is spotted in the libp2p code, how responsive are they, in general, about getting it fixed? Could it potentially sideline progress?

*Obviously it depends on how big the bug is - just looking for a general idea.

Thanks

11 Likes