Update 13th November, 2025

Over the past week, the team has been focused on data integrity, repair strategies, node security, and upgrade mechanisms (leading to action items such as parallel scanning and script modifications). We have also made steady progress across client integrations and application enhancements. Below is a high-level overview:

Summary of Development
Qi - Multiple updates and new features
Roland - Replication and analysis tools
Anselme - CLI and payment integrations
Ermine - Analytics and wallet features
Victor - UI and testing
Chris - Continuity services and upgrades
Benno - Mobile bindings and demo

Core Network (Performance Improvement)

  • Conducted tests and investigations into replication processes, confirming resolutions for identified issues without regressions.

  • Analysing and addressing the data issues created by churn; this work should conclude early next week. Scanning so far shows data loss of 0.4% (*) on the network (a sketch of how such a sampled estimate works follows this list).

  • Set up additional services for targeted and broad network scanning, incorporating feedback for efficiency improvements.
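
For the curious, here is a minimal sketch of how a sampled scan can put a figure like 0.4% on the whole network, with a confidence interval. The sampling approach, function names, and numbers are illustrative assumptions, not the team's actual tooling:

```python
import math

def loss_estimate(sampled: int, missing: int, z: float = 1.96):
    """Estimate a network-wide loss rate from a random sample of chunk
    addresses, with an approximate 95% confidence interval."""
    p = missing / sampled
    half_width = z * math.sqrt(p * (1 - p) / sampled)
    return p, max(0.0, p - half_width), p + half_width

# e.g. 100,000 sampled addresses with 400 unretrievable -> ~0.4%
p, low, high = loss_estimate(100_000, 400)
print(f"loss ~ {p:.2%} (95% CI {low:.2%} to {high:.2%})")
```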

Node Running (Data Hosting)

  • Researched and implemented initial features for automatic node upgrades and self-restart capabilities, with CI tests across major OS platforms.

  • Raised PRs (#3297, #3298, #3299, #3300, #3301) for proof-of-concepts related to node storage checks, re-uploads, fallback replication during churn, and blind scanning.

  • Updated existing PRs (#3282) to enhance trust mechanisms and integrate related changes.

  • Discussed strategies for gradual upgrades to minimize network disruption.
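
On the gradual-upgrade point above: a common way to avoid the whole network restarting at once is to derive a per-node delay from a stable identifier, so a fleet upgrades in deterministic waves. A minimal sketch, purely illustrative and not taken from the PRs:

```python
import hashlib

def upgrade_delay_minutes(node_id: str, window_minutes: int = 720) -> int:
    """Give each node a stable slot within an upgrade window by hashing
    its ID, so only a small slice of the network restarts at any moment."""
    digest = hashlib.sha256(node_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % window_minutes

print(upgrade_delay_minutes("12D3KooW...exampleNodeId"))
```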

Merkle Tree (Data Upload Payments)

  • Advanced work on integrating low-level networking with higher-level APIs for core flows like payments and uploads.

  • Achieved end-to-end functionality in a local environment using mocked components, including successful directory uploads with consolidated payments (illustrated after this list).

  • Polished CLI features, incorporating pricing data and exploring smart contract interfaces for enhanced payment methods.

  • Raised a PR (#3302) for CLI updates built on client integration work.
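
On the consolidated-payments item above: the idea, roughly, is to gather per-chunk quotes for everything in a directory and settle them in one transaction rather than paying chunk by chunk. A minimal sketch with invented names (`Quote` and `consolidate` are not the actual client API):

```python
from dataclasses import dataclass

@dataclass
class Quote:
    chunk_address: str  # content address of one chunk
    price: int          # quoted cost in the smallest token unit

def consolidate(quotes: list[Quote]) -> tuple[int, list[str]]:
    """Fold per-chunk quotes into one payment amount, keeping the chunk
    addresses so a single receipt can cover the whole directory."""
    return sum(q.price for q in quotes), [q.chunk_address for q in quotes]

total, covered = consolidate([Quote("abc123", 5), Quote("def456", 7)])
print(f"one payment of {total} covering {len(covered)} chunks")
```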

Indelible (Organisational Tool for Data Uploads)

  • Added analytics dashboards for upload success/failure tracking.

  • Improved frontend UI for better user experience, such as preserving file selections and cost estimates across tabs.

  • Completed and tested wallet management features, with multiple PRs raised and reviewed.

  • Addressed minor bugs and integration issues, with thorough testing to confirm functionality.

Dave (Prototype Product for Development Updates)

  • Following the latest deployment of Dave last week, the team is aware of a potential payment and upload issue. If any community members have experienced this, please share your Dave upload logs in the Discord General-Support channel :folded_hands:

Mobile Bindings (Mobile Application Building)

  • Some of the client API functions have been completed; individual chunks can now be written and retrieved.

  • A demo is expected next week, showing files and some of the base data types being both uploaded and downloaded.

(*) Correction: the figure was originally posted as 0.004%, but 0.004 was the decimal fraction rather than the percentage. The correct value is 0.4%.

36 Likes

Ffffffiiirrrssstttthhh!!!

8 Likes

Second :2nd_place_medal:

We need to update for the release?

7 Likes

my copy-paste skills were found lacking….

9 Likes

third fourth :smiley:

new update format =)

10 Likes

Great update, especially the news about the high stability and durability of stored data! :ok_hand:

@Nic_Dorman, welcome to the Autonomi community, great debut! :grinning_face::+1:
Many thanks to the team and the community for their relentless pace and hard work, as well as their quick response to the detected abuse of emissions acquisition at the cost of destabilising the network. :clap: :clap: :clap:

Great job! :victory_hand: We’re on track :rocket:

8 Likes

I’m very excited to hear that you have developed a way to get metrics for this! Good job!

No data in any system is 100% secure. There is data corruption in hard drives and whatnot. With measurements we can actually see how Autonomi compares with other systems. I think that eventually we have a chance to compare quite favorably.

14 Likes

Is that for all data or Maidsafe’s uploads? Also, the answer is probably going to be way over my head, but how can this be worked out?

6 Likes

Excellent update - love the new format and a big welcome to @Nic_Dorman .

I know you have been around a while but this is your first post, so congrats.

Thanks as ever to all who contributed, namechecked or otherwise. Some of the team just get on with it , quietly supporting those who do get the namechecks.

Like @Toivo, I’m keen to see how the 0.004% data loss (if indeed it comes out at that figure when it’s all scanned) stacks up against potential competitors.
Any data loss is deeply regrettable, but no practical system can or should guarantee 100% reliability. We can make sure we have a significant number of nines though… How significant? Discuss…
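
To put rough numbers on the nines question (back-of-envelope only): a 0.4% record loss rate corresponds to 99.6% durability, a bit under two and a half nines:

```python
import math

loss = 0.004               # 0.4% of records lost
durability = 1 - loss      # 99.6%
nines = -math.log10(loss)  # about 2.4 "nines"
print(f"durability {durability:.1%}, roughly {nines:.1f} nines")
```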

Looking forward to trying out the new release(s) that will implement this latest work.

And while I have your attention: let’s hear it for parallel networks, where we can do more testing and compare what works and what doesn’t. And if these networks in total have to be smaller, then so be it; at least we will have a chance to be more honest about what the capacity of the network is, and how it can change dynamically.

Thanks again to all – AVANTI!!!

11 Likes

Congrats to the team :ok_hand:t2::ok_hand:t2::ok_hand:t2:

Please can you clarify one thing:

There was a lot of talk about data loss, like really big chunks. But now I am hearing 0.004%. Where is this difference coming from? Was it a complete exaggeration?

11 Likes

Thx 4 the update Maidsafe devs and all your hard work

What a great update, keep up the good work

Welcome @Nic_Dorman

@Traktion :star_struck:

Would be fun if Dave or Indelible could communicate the wallet address to the launcher, so that the user doesn’t have to add a wallet address anymore.

“Is Not An Option” when it comes to data: wasn’t the idea to have 8 copies? :sweat_smile: Time is also a problem; not every enterprise has the time to upload googolbytes of data and… Sorry for not reporting my “chunk not found”, but I thought that after a few churns it would reappear. Don’t mean to sound harsh btw.

Nice update format

Keep hacking super ants

5 Likes

Correction:

There was an incorrect value shown for the network data loss scan.
The figure was supposed to be converted to a percentage, but was shown as a decimal fraction.

The value originally shown was 0.004%; the correct value is 0.4%.

11 Likes

Hey @Nic_Dorman is that 0.4% of chunks?

If a file is missing a single chunk, that file can’t be retrieved, right?

If that’s correct, the amount of data that can’t be retrieved is a more significant number: lost chunks plus the otherwise-intact data they make irretrievable.

Trying to wrap my head around it, because it seems everyone I speak to has problems.

12 Likes

Hi Nic!

Posted in MEET DAVE about some issues earlier this week.

Actually just used Dave to upload the logs from C:\Users\jordg\AppData\Roaming\autonomi\dave\data

c0858b3294b02abf6f5ca26edcdbb3ebe2d5b82169028c6cdf3e8fdc9ff0f0c8

2 Likes

If, after the correction, the data loss is 0.4%, and, as I understand it, that is a loss of 0.4% of all files stored on the network rather than of chunks, then this result could still be considered very good, given the size and operating time of the network.

However, it all depends on what exactly it means: “Scanning so far shows that there is data loss”. @Nic_Dorman, can you specify what percentage of the network has been scanned so far?

2 Likes

So I gather this is 0.4% of records lost, 1 in 250 records lost.

How does this translate to files lost? If a file has 100 records/chunks in it, then that is a 1 in 2.5 chance it is broken, i.e. a 40% chance. Or is this analysis on files lost, i.e. 4 in 1000 files lost?

Very important because if I have a file that is over 1GB then do I expect there is a fair chance it is broken? Or is it a 0.4% chance?
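
To make that concrete, assuming each chunk is lost independently at 0.4% (an assumption, since losses may well be correlated), a file survives only if every one of its chunks does:

```python
def p_file_broken(chunks: int, p_chunk_lost: float = 0.004) -> float:
    """Chance a file is unretrievable if each of its chunks is lost
    independently with probability p_chunk_lost."""
    return 1 - (1 - p_chunk_lost) ** chunks

# 1 chunk, 100 chunks, and a large many-chunk file
for n in (1, 100, 2000):
    print(f"{n:>5} chunks -> {p_file_broken(n):.2%} chance broken")
```

So for a 100-chunk file the exact figure is about 33%, close to the naive 40% above, and a file with thousands of chunks is almost certainly broken.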

Using the formula for potential loss if a certain fraction of the network goes down at once, with a replication factor of 5 the loss is that fraction raised to the 5th power. Working backwards from 0.4%, this implies about a third of the network went down without replication having time to occur. It’s an even larger fraction of the network if records are held on more than 5 nodes.

0.33333 ^ 5 ≈ 0.0041 for at least 5 nodes holding each record, and
0.4 ^ 6 ≈ 0.0041 for at least 6 nodes holding each record.

That is a huge effective portion of the network to go down in a very short time frame.
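
The same back-of-envelope inversion in runnable form, assuming all of a record’s replicas sit on nodes that vanish together with no time to re-replicate:

```python
loss = 0.004  # observed record loss rate
for replicas in (5, 6, 8):
    f = loss ** (1 / replicas)  # implied fraction of network lost at once
    print(f"{replicas} replicas -> {f:.1%} of nodes gone before re-replication")
```

With 8 replicas (the “8 copies” mentioned earlier in the thread), the implied simultaneous outage would be over half the network.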

12 Likes

Curious if there are any updates or morsels regarding Saorsa or Communitas, @dirvine @JimCollinson?

Great job @maidsafe team!

4 Likes

Saorsa and Communitas are not Autonomi/Maidsafe projects so we should not expect updates here.

Github says plenty though :slight_smile:

too much to list for Saorsa

10 Likes

The fact that David is building this speaks volumes.

3 Likes