Weekly Developer Update: 27 November 2025

This week the team kicked off the mission briefs and continues to make good progress towards the end-of-year goals. Updates, grouped by topic/focus area, can be found below:

Core Network (Performance Improvement)

  • As mentioned in last week’s update, this week will see the deployment of another release. Its focus is to resolve the replication issue: a bug in a previous release prevented chunks from being fully replicated as intended. The second aspect of this release addresses the previously mentioned replication range, which will be expanded to improve data availability across the network.

  • Also included in today’s release is an update to the maximum stream data size. This has now been reduced to 1MB for both client and node connections, to improve the speed of network data in all directions.
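To illustrate the effect of a per-stream size cap like this, here is a minimal sketch of splitting a larger payload into frames of at most 1MB each. The constant name and framing approach are illustrative only, not the actual network code:

```rust
// Hypothetical framing sketch: payloads larger than the cap are sent
// as several smaller frames rather than one large stream message.
const MAX_STREAM_DATA_SIZE: usize = 1024 * 1024; // assumed 1 MiB cap

fn split_into_frames(payload: &[u8]) -> Vec<&[u8]> {
    // `chunks` yields slices of at most MAX_STREAM_DATA_SIZE bytes;
    // the final frame may be shorter.
    payload.chunks(MAX_STREAM_DATA_SIZE).collect()
}

fn main() {
    // A 2.5 MiB payload becomes three frames: 1 MiB, 1 MiB, 0.5 MiB.
    let payload = vec![0u8; 2 * 1024 * 1024 + 512 * 1024];
    let frames = split_into_frames(&payload);
    assert_eq!(frames.len(), 3);
    assert!(frames.iter().all(|f| f.len() <= MAX_STREAM_DATA_SIZE));
    println!("{} frames", frames.len());
}
```

The intuition behind a smaller cap is that shorter frames let many transfers interleave on the same connection, rather than one large message monopolising it.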

  • While the above are the most significant updates of the week, we have also included a link to the release notes for those who would like to follow some of the deeper technical aspects worked on this week (for example, Client::get_closest_to_address now accepts an optional count parameter to specify the number of peers to retrieve).
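For readers unfamiliar with closest-peer queries, here is a toy sketch of what an optional `count` parameter on such a query typically means in a Kademlia-style network: sort peers by XOR distance to the target address and take the first `count`. The function name, the `u64` addresses, and the default of 5 are assumptions for illustration, not the real client API:

```rust
// Illustrative sketch (not the actual Autonomi client code).
// Peers and the target address are modelled as u64 values; real
// network addresses are much wider.
fn closest_to_address(target: u64, mut peers: Vec<u64>, count: Option<usize>) -> Vec<u64> {
    let k = count.unwrap_or(5); // assumed default; the real default may differ
    // Kademlia-style distance: XOR with the target, smaller is closer.
    peers.sort_by_key(|p| p ^ target);
    peers.truncate(k);
    peers
}

fn main() {
    let peers = vec![0b1000, 0b0011, 0b0001, 0b0111];
    // Ask for the 2 peers closest to address 0b0000.
    let closest = closest_to_address(0b0000, peers, Some(2));
    assert_eq!(closest, vec![0b0001, 0b0011]);
}
```

Exposing the count lets callers trade lookup breadth for latency, rather than always receiving a fixed-size peer set.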

Node Running (Data Hosting)

  • The auto-upgrade node code went into testing this week, successfully passing the first round without issue. We will now move on to a large testnet to ensure stability and functionality over the coming days. Should this go well, the intention is to release the auto node upgrade next week. This will make Linux users’ lives infinitely easier when it comes to running their nodes and supporting the network.
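The core decision an auto-upgrading node has to make can be sketched very simply: compare the running version with the latest release and upgrade only when the release is strictly newer. The helper names and the semver-style comparison below are assumptions for illustration, not the actual upgrade code:

```rust
// Hypothetical sketch of an auto-upgrade check.
// Parses "major.minor.patch" strings; malformed components fall back to 0.
fn parse_version(v: &str) -> (u32, u32, u32) {
    let mut parts = v.split('.').map(|p| p.parse().unwrap_or(0));
    (
        parts.next().unwrap_or(0),
        parts.next().unwrap_or(0),
        parts.next().unwrap_or(0),
    )
}

fn should_upgrade(running: &str, latest: &str) -> bool {
    // Tuple comparison is lexicographic: major, then minor, then patch.
    parse_version(latest) > parse_version(running)
}

fn main() {
    assert!(should_upgrade("0.3.1", "0.4.0")); // newer release: upgrade
    assert!(!should_upgrade("0.4.0", "0.4.0")); // same version: stay put
    assert!(!should_upgrade("0.4.1", "0.4.0")); // never downgrade
}
```

The "never downgrade" case matters in practice: a stale release feed should not roll a node back to an older binary.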

  • To repeat last week’s comments re: Windows: once the Linux build is successfully deployed, attention will turn to other operating systems, as well as node-running tools.

Merkle Tree (Data Upload Payments)

  • As mentioned last week, we are beginning to test the integrity and structure of the payment design. Specifically, we’re now working on smart contracts and integrating them into the test Merkle code. Once this is complete we will move from the synthetic network setup to testing in a reflective environment (real token).
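For readers new to the data structure behind this work, here is a toy Merkle root computation. The real design would use a cryptographic hash; std's `DefaultHasher` is used below only to keep the sketch self-contained, and the leaf values are made up:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Non-cryptographic stand-in for a real hash function.
fn h(data: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

// Repeatedly hash adjacent pairs until a single root remains.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    while level.len() > 1 {
        if level.len() % 2 == 1 {
            // Duplicate the last node on odd-sized levels.
            let last = *level.last().unwrap();
            level.push(last);
        }
        level = level.chunks(2).map(|pair| h(pair)).collect();
    }
    level[0]
}

fn main() {
    let leaves: Vec<u64> = [1u64, 2, 3].iter().map(|x| h(&[*x])).collect();
    let root = merkle_root(leaves.clone());
    // The same leaves always produce the same root...
    assert_eq!(root, merkle_root(leaves.clone()));
    // ...while changing any leaf changes the root.
    let mut tampered = leaves;
    tampered[0] = h(&[99]);
    assert_ne!(root, merkle_root(tampered));
}
```

The appeal for payments is exactly this tamper-evidence: a single small root commits to an entire batch of payment records, and any altered record is detectable.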

  • In line with the above we have also begun creating a comprehensive testing strategy to verify node stability and payment mechanism reliability.

Indelible (Organisational Tool for Data Uploads)

  • Our API tests now cover 90% of the functionality the APIs offer. UI test coverage has also begun.

  • Work has also begun on the configuration API. This will enable an organisation using Indelible to control and monitor all of its settings and setup options directly via the API (essentially allowing Indelible to be run in a headless fashion).

Dave (Prototype Product for Development Updates)

  • We have identified a UX problem that makes it appear a user needs to pay with ETH when setting up their account for the paymaster; they do not. We will work on modifying how this is presented.

  • There is currently no update on the upload bug reported last week. We have been unable to recreate it, and while it now appears to be an isolated incident (with no further issues of this nature reported), we will continue to investigate and monitor, sharing our findings (and actions) when able.

Mobile Bindings (Mobile Application Building)

  • Work continues on data streaming, archives and vaults (file directory upload and download). This week we have the pleasure of including a demo that shows upload from mobile and download on desktop!

35 Likes

1st for the first time

11 Likes

Cool stuff, and that I’m first, too.

EDIT: Uhuh :weary_face::sweat_smile:

Promising as usual. Where is the demo included?

5 Likes

Tech being tech, try now

7 Likes

Thanks to all involved for the update.
We’ll be looking carefully at the new release and hopefully we can get back on track shortly thereafter.

7 Likes

Thanks. Very cool to see.

5 Likes


Is the turkey eating the turkey?:grinning_face_with_smiling_eyes:

9 Likes

How does this relate to max record size? Has that been reduced?

Is this built into the node itself or is it a part of antctl/launchpad?

If in the node itself then is there an overview of how it is doing this?

Could it be time related, as in when it is done in relation to significant events in the network? For instance, a large number of nodes being reset at once, or an outage due to routing issues on the internet, or other similar things. Won’t mention the leech.

11 Likes

Thanks for the update team.

Is this auto-upgrade optional or mandatory? I suppose nothing is mandatory in light of the fact that code can be modified, but just curious how this is being developed.

Cheers :beers:

2 Likes

Will auto update be at the antnode executable level or at one of the wrappers (antctl etc)?

I suspect large node farms will be interacting directly with antnode. So, I wonder how that will be impacted?

1 Like

Good question! I was wondering the same, and what the relationship is there with chunk size?

I’m assuming this impacts all data serving too and not anything specific to streaming (from the client)?

2 Likes

Answers so far:

Max record size is not covered here, only stream size.

Auto upgrade is part of the node, not part of external tools/wrappers (like LP or antctl).

This is part of the node code, so it isn’t optional.

7 Likes

Oh yeah! This is the same turkey that Trump pardoned (at the moment he said he would eat another turkey instead)… :thinking:

The live turkey is very happy!

1 Like

Thx 4 the update Maidsafe devs and all your hard work

Really like the mobile upload

I just used Google’s Antigravity to install 2 nodes
All you need is:
  • where to download the node: Autonomi Nodes
  • an Eth address
  • how many nodes to run :ant: :robot: done :wink:

:robot: can also auto-update by checking releases :face_blowing_a_kiss: in the browser :sweat_smile:

Keep hacking super ants

5 Likes

Thanks so much to the entire Autonomi team and community for all of your hard work! :flexed_biceps:

Our team and community are totally the best. No one else is even close. :1st_place_medal:

1 Like

Would be good to know the difference. It would seem that if I stream something, I grab the underlying data, which for a video is 4MB chunks. So what is the difference with stream size, and why does it make any difference when I am grabbing the underlying data off the network in 4MB chunks?

Your answer really needs context to make it understandable. What you said was really just repeating what was in the update, only adding, in clear terms, that max chunk size is not being changed, which the update also did not talk about. Just a bit more info; it seems you are teasing us, wanting us to ask for the real answer.

That was one question I asked early on, and it was said that those running the node would not be affected since they run the node software directly. So this is a change.

5 Likes

Time to start bundling a complete run-nodes/upload/download-files solution with each update that just works? It would be simpler than what we have now.

3 Likes