Update 02 November, 2023

Well, it was bound to happen some time. After a run of successes, the RoyaltiesPaymentNet was cursed by high memory usage which killed off many nodes before they could even start, and left the rest pretty much zombified. The spooky thing is it all worked fine on our internal testnets (albeit with some slightly raised memory levels). Could poor RoyaltiesPaymentNet have been struck down by dark forces beyond our ken? :ghost:

Or perhaps there is a logical explanation. Chief suspect is GossipSub, the system by which nodes performing transactions propagate that fact to the foundation nodes, which then take their share. GossipSub is dealing with many more messages than anticipated. It’s not yet clear whether that’s looping, client top-ups resending royalty payments, or something else.

One issue is that all nodes try to decode all transfers, causing a lot of unnecessary work; another is that libp2p has been allocating memory quite generously. We have some PRs in to help there and are hopeful this will yet come together!

There are some other fixes to go in too, including libp2p fixes, encrypted transfers, and changes to replication on put, which should reduce load when we launch another testnet.

We’re grateful that the libp2p team is responsive and open to helping us. This week @dirvine contacted them about building in Sybil defences based on some recent research, and they’ve said they’re open to the idea.

General progress

@roland has been looking into chunk splitting and the payments process, and also added a new feature to the CLI that ensures the user has enough balance before executing an action like an upload.
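A minimal sketch of that kind of pre-flight check, with hypothetical names (this is not the actual CLI API):

```python
class InsufficientBalanceError(Exception):
    """Raised when the wallet cannot cover the quoted cost of an action."""

def ensure_balance(wallet_balance: int, quoted_cost: int) -> None:
    """Fail fast, before any chunks are uploaded or payments attempted."""
    if wallet_balance < quoted_cost:
        raise InsufficientBalanceError(
            f"wallet holds {wallet_balance}, but the action is quoted at {quoted_cost}"
        )
```

The point of checking up front is that the user finds out about a shortfall before any chunks are sent, rather than part-way through an upload.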

@chriso worked on the node management side of things. Windows is always more difficult in this regard and he ran into some issues, but it’s mostly sorted now.

@joshuef investigated the high memory usage and looping GossipSub messages that may have caused the testnet failure, landed some other small fixes, and is looking to implement pay one node, which should speed up the validation process and improve performance.

@bochaco created a PR to refactor the transfer validation to make it more efficient, and has also been the main driver of implementing encrypted royalties transfers. Tests are now working.

We’ve been experiencing a few payment failures in testing as we move to only paying one node. @anselme is digging into those, and working to make the issue easier to debug.

@qi_ma has been fixing some other internal tests that were failing.

And @bzee has also been working on pay one node, while additionally offering up some improvements to the API query Kad workflows.

Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!




Second!! Yay!!


Third!!! :ghost::ghost::ghost::jack_o_lantern::jack_o_lantern::jack_o_lantern::jack_o_lantern:


Thx 4 the update Maidsafe devs

Dear devs,

Please let’s not forget all the hard work that you put into this week after week. Some testnets will be short-lived, especially around Halloween.



Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

And also to all of the volunteer community testers! :horse_racing:

@19eddyjohn75 is totally right. The testnet was affected by Halloween forces. No need to worry.


Can anyone give a quick explanation of this?

Is this where the client pays one node and then that node distributes a portion to others? Or maybe the client routes all the separate payments through one node? Or maybe just one node gets paid for the upload? Or ???


If data is stored on / comes from a node in the CLOSE_GROUP then it’s considered valid (no need to ask a majority).

With this in mind, we are testing paying a single node in the group to instigate the replication process for such a chunk. What we are looking at is knowing replication has completed (or at least started) before we consider the chunk stored.
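As a toy model of that flow (all names here are illustrative, not the real safe_network types): the client uploads to and pays a single close-group node, that node replicates to its peers, and the chunk only counts as stored once the whole group holds a copy.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    chunks: set = field(default_factory=set)

def put_chunk(close_group: list, chunk: str, paid_node: Node) -> bool:
    # The client uploads to (and pays) just one node in the close group.
    paid_node.chunks.add(chunk)
    # That node instigates replication to the rest of the group, unpaid.
    for peer in close_group:
        if peer is not paid_node:
            peer.chunks.add(chunk)
    # Only consider the chunk stored once the whole group holds a copy.
    return all(chunk in peer.chunks for peer in close_group)
```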



This raises the question of who gets paid (apart from the royalty portion). Would it be just one node, or somehow all the other nodes too?


Excellent efforts as always, team. The holidays will be upon us before we know it and then you can all take a well-deserved good long rest.

Ants inspecting the memory bug:

My understanding is that the nodes that receive NEW chunks get paid. Those chunks are then replicated, but the receivers of replicated chunks do not get paid. So to earn you have to receive new chunks.


For each chunk, just the one node. As chunks appear, all nodes will get paid, since chunk placement is a random thing. However, clients are unlikely to pay the highest fees, so if they are quoted a large fee they will ask other close nodes and take the cheapest.
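The quote-shopping step could look something like this (a sketch; the node IDs and the shape of `quotes` are assumptions, not the real API):

```python
def cheapest_quote(quotes: dict) -> tuple:
    """Given each close node's quoted store cost, take the lowest offer."""
    node_id = min(quotes, key=quotes.get)
    return node_id, quotes[node_id]
```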


Is the high-priced node in the CLOSE_GROUP still obliged to keep the chunk when it is replicated?


Ah so paying the one node is just a test for replication and not the way it’ll normally be done.

So the client (on a live network) will still be expected to upload to each node with a reasonable price, thus paying them too.

I was told yes it will


I don’t think so.

Sounds a little like the early pay on GET ‘lottery’ where only the first to provide the chunk was paid. (:thinking: actually, not sure about that, but in GET a node was given a chance to ‘win’ one coin, by being given ownership of a random coin address. Anyway that’s all history now.)

Paying one node for PUT sounds neat:

  • simpler
  • much more efficient
  • rewards competitive pricing and discourages attempts to game price

All nodes have to store the chunk, but on average if all nodes behave well in terms of price, all will be rewarded approximately the same. The effect will average out even more if you are running multiple nodes.
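That averaging-out effect can be illustrated with a toy simulation. The close group size of 5 and the uniform-random choice of the paid node are assumptions for the sketch (the random choice stands in for all nodes quoting equal prices):

```python
import random

def simulate_payouts(nodes: list, uploads: int, seed: int = 0) -> dict:
    """Each chunk lands on a random close group; one member gets paid."""
    rng = random.Random(seed)
    earnings = {n: 0 for n in nodes}
    for _ in range(uploads):
        close_group = rng.sample(nodes, 5)   # illustrative group size
        earnings[rng.choice(close_group)] += 1
    return earnings
```

With equal prices every node in a group is equally likely to be the one paid, so over many uploads earnings cluster around uploads / number-of-nodes.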

May provide an incentive for clients to try and game the price but I doubt this is a big risk. :thinking:


Yes this is it but for Put

This is our belief. As chunks are spread in a way that is not predictable, the amount of work needed to get one free/cheap chunk would likely be prohibitive, because the attacker would need a partner/pal/thief in each close group.


At first glance, this system tends to have a lower price since we can assume that clients will choose the cheapest node in the close group. The rest of the nodes will have to adapt if they want to make a profit.


Was the main motivation here to reduce transactions or does this better deal with high priced outliers?


There’s a risk that clients load the system with price requests in order to find the cheaper nodes, but this will be balanced by the downsides.

Take the first price: faster, possibly a bit more expensive but overall probably not much over a large number of uploads.

Spend several times as long checking prices on every chunk before upload: takes a bit longer and may save a little in cost, but probably not much over a large number of uploads.

To test this out we are going to need some custom clients!


For small uploaders, a higher speed could compensate for a more expensive price, but for large data uploaders a lower price could compensate for a lower speed.

These large uploaders could pull the rest of the nodes towards lower prices, because otherwise those nodes would go unrewarded on a significant percentage of the PUTs. If they do not adapt, the cheapest nodes would gain the most while the most expensive ones would still be forced to keep storing those PUTs.