Update 07 December, 2023

This week we’ve had all hands on deck, filling holes, bashing in nails and replacing rotten timbers in the previous testnet so we could launch the thing of beauty that is the ReduceConnectionsNet, which thus far seems to be sailing along pretty nicely.

So what were those things? Well, we’d gone from requiring three verified spends to requiring five, which increased the verification time. In our small internal testnets, nodes were tripping up because of some experimental limits we’d put in place a while ago, which meant they did not have enough network knowledge to perform certain tasks. Long waits for the nodes to act added to the verification slowness, which together caused trouble with the continuous integration (CI) workflow, as tokens from the faucet were stuck in limbo, and so on. In software development, sometimes problems fall like dominoes; other times they build on each other like barnacles and conspire to slow things right down.

Fortunately, once we’d realised what was going on, scraping the accreted crap from our hull was a simple matter of removing a bit of code here and there.

Part of this was reducing the number of node connections, which has yielded a 10x decrease in memory leaks in early tests. We’ve also chopped down replication rates, which had gone a bit haywire, and added some verification that replication messages have come from a close node rather than just anyone, which seems to have calmed things considerably.
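The close-node check can be pictured with a minimal sketch (hypothetical names, node IDs reduced to `u64`, and a toy close-group size of 5 — not the actual safe_network code): a replication message only counts if its sender is among the few peers nearest to us under the Kademlia-style XOR metric.

```rust
// Toy XOR distance, as used in Kademlia-style routing.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// Accept a replication message only if `sender` is one of the
/// `close_group` peers nearest to us among the peers we know about.
fn sender_is_close(our_id: u64, sender: u64, known_peers: &[u64], close_group: usize) -> bool {
    let mut peers = known_peers.to_vec();
    peers.sort_by_key(|p| xor_distance(our_id, *p));
    peers.iter().take(close_group).any(|p| *p == sender)
}

fn main() {
    let peers = [1, 2, 3, 4, 5, 240];
    // A nearby peer's replication message passes the check...
    assert!(sender_is_close(0, 1, &peers, 5));
    // ...while a distant stranger's is dropped.
    assert!(!sender_is_close(0, 240, &peers, 5));
}
```

The point of the check is simply that replication is a close-group responsibility, so messages from arbitrary peers can be ignored rather than processed.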

Thanks as always to everyone who gives us their time to put the thing through its paces. Special mention this week to @mav for his UX improvements, including a more familiar download path and deduplicated CLI flags. Also to danieleades, who continues to tidy up our occasionally scraggly workflow. Cheers all! :beers:

General progress

@bzee has been digging into the testnet internals, integrating sn-node-manager with the testnet deployer.

Similarly engaged has been @chriso who has been working up changes to the sn-node-manager application that will allow updates to testnets on the fly. These include a remove command for – you guessed it – removing individual nodes from testnets.

In a busy week @roland pitched in with a PR to aggregate spends even if the get_record process fails. Previously, we were converting all the errors into a single variant, which masked this issue. He also raised another PR to improve error handling in the get_spend verification process.
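The masking problem can be illustrated in miniature (hypothetical error variants, not the real sn_networking types): if every failure collapses into one catch-all variant, the caller can no longer tell a partial result worth aggregating from a genuine miss.

```rust
// Hypothetical sketch of the failure mode: distinct error variants let
// the caller recover from a partial quorum instead of giving up.
#[derive(Debug)]
enum GetError {
    /// Fewer verified copies were found than the quorum requires.
    NotEnoughCopies { found: usize, required: usize },
    /// The record genuinely isn't there.
    RecordNotFound,
}

// If a quorum wasn't reached we may still hold some spend copies worth
// aggregating; only a true miss is fatal.
fn can_aggregate(err: &GetError) -> bool {
    matches!(err, GetError::NotEnoughCopies { found, .. } if *found > 0)
}

fn main() {
    let partial = GetError::NotEnoughCopies { found: 3, required: 5 };
    assert!(can_aggregate(&partial));
    assert!(!can_aggregate(&GetError::RecordNotFound));
}
```

Mapping everything into a single variant would make both cases above look identical, which is the bug the PR addresses.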

And @qi_ma has been busy bashing out replication errors and connection flooding. The genesis node was becoming overloaded with communications so he balanced the genesis node’s connection workload by replacing the bootstrap node when its K-bucket (record of kad connections) is full. He also added a feature to the networking module that dials back when receiving an identify message from an incoming peer to ensure it is not a false friend hiding behind a NAT.
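A rough sketch of the load-balancing idea (hypothetical helper, with a typical Kademlia bucket size of 20 assumed): while a node knows few peers it keeps leaning on the bootstrap/genesis node, and once its bucket is full it contacts other known peers instead, relieving the genesis node.

```rust
// Typical Kademlia bucket size; assumed here, not the real constant.
const K: usize = 20;

/// Choose which peer to contact next: fall back to the bootstrap node
/// only while our routing knowledge is thin; once the bucket is full,
/// rotate onto another known peer to spread the connection load.
fn pick_contact(bootstrap: u64, known_peers: &[u64]) -> u64 {
    if known_peers.len() >= K {
        // Bucket full: offload onto a known peer instead of genesis.
        known_peers[0]
    } else {
        bootstrap
    }
}

fn main() {
    let bootstrap = 99;
    let few: Vec<u64> = (0..5).collect();
    let many: Vec<u64> = (0..20).collect();
    // While we know few peers, keep using the bootstrap node.
    assert_eq!(pick_contact(bootstrap, &few), bootstrap);
    // Once the bucket is full, contact another peer instead.
    assert_eq!(pick_contact(bootstrap, &many), 0);
}
```

The NAT dial-back is the complementary check: instead of trusting an incoming identify message, the node dials the advertised address itself and only treats the peer as reachable if that dial succeeds.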

@bochaco implemented a watch-only wallet feature for monitoring transactions and is looking into the Trezor API to ensure it will work with SNT.

And @anselme has been exercising his considerable systems design braincells on ways that royalty payments, double-spend protection and auditing could be simplified with the addition of audit DAGs. Still a conceptual work in progress, but you’ll definitely hear more if it works out.

He also investigated and fixed a CLI bug reported by @happybeing about chunk payment errors.

Useful Links

Feel free to reply below with links to translations of this dev update and moderators will add them here:

:russia: Russian ; :germany: German ; :spain: Spanish ; :france: French; :bulgaria: Bulgarian

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the Safe Network together!


first !!!

:wink: when will I get a badge ? after 10 firsts ?

Keep up the great work team !


2nd !!!

Can feel us getting closer and closer to the finish line. Keep it up team!


Great news towards Beta.


Fantastic progress all round. Great to see the Testnet memory consumption falling and stability seemingly back to a solid foundation… definitely feels closer than ever to a fully working Safe Network :smiley:


It’s a bit of a rollercoaster. Well done. I get panic attacks when the testnets blow up. I do realise that it’s completely necessary and that you learn more from the ones that don’t work (like everything in life) but hey, am only human… :+1: :+1: :+1:


You’re not human, you’re Astroman.


Thanks so much to the entire Maidsafe team for all of your hard work! :horse_racing:

Also thanks so much to our testnet volunteers! :horse_racing:


A testnet is a ray of hope, do not despair. The ones that work are great; the ones that don’t work are even better, because they lead to ones that work even better. Exciting times, beta is coming. Great work all!


Barnacles and hulls? Was it David’s turn to write the update? :grin: or is there another sailor in the house?


12th! Great work :christmas_tree:


As ever thanks to all concerned for the progress reported this week.
Well done and thank you.


Thx 4 the update Maidsafe devs

Happy testing to yall

Keep hacking super ants



p.s. When will CashNotes be renamed to StashNotes or something more accurately descriptive? :sunglasses: :nerd_face:


Thank you for the heavy work team MaidSafe! I add the translations in the first post :dragon:

Privacy. Security. Freedom


Super work team. Keep those testnets rollin’ out. Iterative testing and bug swatting for the win.

Cheers :beers:


cc @JimCollinson @dirvine
I’m sure you guys are already on it but this is similar to what David has been talking about.

My question to you guys is: how can app devs who want to help extend the Safe AI prepare or help? How can we build around that interface? Any guidance, even to mentally prepare, would be extremely helpful.

The world is ready for personal Safe AIs and the excitement around web3 and AI is building to extremes, especially amongst crypto communities. Looking forward to hyping up a Safe Network beta release and a new token with a new name!


Brilliant question and well timed. I have been fascinated by many of the things AI is going to be able to do. I agree with much of the video, but in some ways it’s more than that. For background, I have been way deep in genetic coding and neuroevolution etc. for a while, i.e. mimicking the brain, or in other words probabilistic and simple compute. It’s ants man, it’s ants :wink:

Unclear, random thinking incoming :smiley: :smiley: (early morning, brain dump, no proof read)

Anyway, this is why I hate the massive total order, or determinism; it’s plain wrong for so many obvious reasons. However, that has been a debate and I think the debate is over. With current LLMs, ANNs and (watch for these) liquid networks, the proof points are too numerous to mention. However, AI is seen as a typically polarised thing today: there’s the “oh, it’s full AGI already” camp and the “this thing makes mistakes, so it’s crap” camp. I think they are both wrong. In my mind current AI shows us some amazing things. My head is not clear on this yet, maybe it never will be, but here are some thoughts:

  • Simple models work well, but require scale (many billions of neurons get smart, but a few thousand do not). A bit like how a chimp’s brain has only 1% fewer neurons than ours, so scale matters, A LOT.
  • Probabilistic compute allows malleable capabilities (determinism is a fixed do this, get that, whereas probabilistic compute is do something I have never seen and we will still get a good output)
  • Data is not knowledge, knowledge is extracted from data
  • We can run these things locally and they are good enough to be the app engine for all our needs (i.e. good enough to become our interface with knowledge, already)

From a developer point of view, then, it’s interesting. One of the things that will vanish is the skill gap of the user. For instance, my mum, if I ask her to send an email, goes into a complete meltdown and panic. Why? It’s simply the interface. It is the “I can or cannot navigate the apps or command lines etc.” part, and that changes with LLMs already. The apps and command lines probably vanish now.

What I mean is, “I want to send Nigel an email today” means the same thing to the computer expert or the computer-illiterate grannie. With LLMs both parties now have the same “power”; both can instruct the AI to send the email. So AI removes the clutter and provides a malleable interface to the user, whether the user is a top-1% computer engineer or a grannie. This is brilliant and also a major part of the SAFE vision (Everyone).

Another step change is knowledge, so music, files, images, videos etc. I have for many years disliked the files-and-folders model; Apple and Microsoft, as well as Linux et al., have leaned towards search instead of navigation. Perhaps better, but still poor. Spotify and Netflix etc. leaned towards neural networks for finding stuff (recommendations); also better, but still poor. LLMs, though, are brilliant at extracting knowledge from data and storing only that, throwing away all the crud. This is the compression-like part of LLMs.

Using retrieval augmentation or fine-tuning on your own data is very powerful. There are almost zero hallucinations and you have a new search, a search that takes multi-modal input against your own data. This is super powerful.
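For the curious, that retrieval step can be sketched in miniature (toy word-overlap scoring standing in for real vector embeddings; all names hypothetical): score your stored snippets against the query and hand the best match to the model as context.

```rust
use std::collections::HashSet;

// Toy similarity: count how many query words appear in the document.
// Real retrieval augmentation uses vector embeddings instead.
fn overlap(query: &str, doc: &str) -> usize {
    let q: HashSet<&str> = query.split_whitespace().collect();
    doc.split_whitespace().filter(|w| q.contains(w)).count()
}

/// Pick the stored snippet most relevant to the query, to be prepended
/// to the prompt so the model answers from your own data.
fn retrieve<'a>(query: &str, docs: &[&'a str]) -> &'a str {
    docs.iter()
        .max_by_key(|d| overlap(query, d))
        .copied()
        .unwrap_or("")
}

fn main() {
    let docs = [
        "invoice from nigel sent on tuesday",
        "holiday photos from spain",
    ];
    let best = retrieve("email to nigel", &docs);
    assert_eq!(best, "invoice from nigel sent on tuesday");
}
```

The retrieved snippet grounds the model in your data, which is why hallucinations drop so sharply with this pattern.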

However, RAG or fine-tuning is a step; it’s not (yet) automatic, but that is coming fast, and even though the current LLM capability is not AGI, it won’t matter: these LLMs will replace all the apps we can think of now; apps won’t exist. It’s the thing we know: the best app just works, it has no configuration or many buttons to press, it just works, and LLM-type apps will work for every human in every language for every task that can be done with their data set.

So the data set is vital, and there we have global knowledge (i.e. ChatGPT, Bard etc.) and local knowledge (your own data, fine-tuned or whatever). This is where I am right now; it’s not clear how knowledge shares or improves without biases (which are lies, or at least unproven facts), and that may be much worse than hallucination.

I won’t go on, this probably is a whole new thread or even more than that. It’s the alignment of humanity to knowledge that is happening, and we need to navigate that and make sure it’s all SAFE and belongs to everyone free of charge.