Latest Release September 3, 2025

:loudspeaker: Announcement: Latest Release :loudspeaker:
Please follow these instructions to upgrade your nodes to the newest version and ensure the best performance and stability for everyone! Check out the GitHub release for changelog details. We are now testing the Dave release with a small group from the community to iron out any wrinkles, so that we can have documentation and support materials ready for an official community release next week.

For Node Launchpad Users:

  1. Open Node Launchpad v0.5.10 if you are not already on that version.
  2. Press O to access the Options screen.
  3. Then, press Ctrl + U, and hit Enter. This will upgrade your nodes. Upgrading can take several minutes for each node. Please don’t close the app or stop your nodes during this process.
  4. Your nodes will now stop.
  5. Press Ctrl + S to start your nodes again.

For CLI Tool Users:

  1. If you’re using the CLI tool, please update and upgrade. Run the update first: antup update
  2. Then run the upgrade: antctl upgrade --interval 60000
  3. Finally, start them again with antctl start --interval 60000

For ALL Users:

  • Please start your nodes gradually — especially if you plan on running multiple nodes.
  • Be conservative with CPU allocation to maintain stability
  • You must be on 2025.7.1.3 or newer to earn rewards

Binary Versions:

  • antnode: v0.4.4
  • antctld: v0.13.3
  • antctl: v0.13.3
  • ant: v0.4.6
  • evm-testnet: v0.1.16
  • nat-detection: v0.2.22
  • node-launchpad: v0.5.10
21 Likes

Quick clarification on the recent ant release (v0.4.6): there is a breaking change affecting file compatibility between versions. If a file is uploaded using the new version of ant, it cannot be downloaded using an older version, but the new ant can download files uploaded with older versions. This is due to a change in the underlying datamap format. This doesn’t affect all builders, but mainly those interacting directly with self_encryption, such as anttp. If you’re using autonomi, a dependency update should be sufficient.

6 Likes

I posted this on the IF discord channel, but reposting here so everyone can see. @rusty.spork can you get a brave MaidSafe dev to answer these somewhere?:

I have some questions about this new release, in particular this tidbit:

The streaming capability results in a new datamap format that requires four extra chunks. If there is an attempt to re-upload files uploaded before the streaming implementation, there will be a cost for these extra chunks. The new datamap format always returns a root datamap that points to three chunks. These three extra chunks will now be paid for in uploads.

So has the minimum size of a self encrypted file grown from 12MB to 28MB?? (3 chunks originally + 4 extra chunks)

Also, is there a way to stream out a portion of a file, like what @Traktion is doing in tarchive?

Any example code of streaming downloads/uploads? In particular, is there a way to query a running download/upload operation to get the completion status or do we need to build this ourselves? Maybe Dave is doing this under the hood and we can just use that implementation?

And a +1 to the public scratchpads being added to the standard API that @riddim brought up, everyone is using these.

12 Likes

I will ask Qi to come back on this.

This is not in this release, but Anselme is working on it now.

9 Likes

4 MB is the maximum chunk size, but the minimum is… I don’t know, but very, very small.

3 Likes

Agreed, my intention wasn’t clear in my question. Basically, I’m trying to figure out the minimum size of a file (or collection of files) that minimizes ETH gas fees, because as I understand it, a chunk costs the same whether it holds 1 byte of data or 4 MB of data. It used to be 12 MB and is now 28 MB, so it becomes even more important to pack small files together to keep costs down.

5 Likes

The minimum size of content that self_encryption can handle is always 3 bytes.

I don’t know where the number of 12 MB comes from.
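
For anyone who wants to check, here is a minimal sketch using the self_encryption crate’s top-level encrypt function (assuming a recent version of the crate; the exact signature may differ):

```rust
use bytes::Bytes;
use self_encryption::encrypt;

fn main() {
    // 3 bytes is the minimum; encryption always yields at least 3 chunks.
    let (data_map, chunks) = encrypt(Bytes::from_static(b"abc")).unwrap();
    println!("chunks: {}, datamap: {:?}", chunks.len(), data_map);

    // Anything below 3 bytes is rejected.
    assert!(encrypt(Bytes::from_static(b"ab")).is_err());
}
```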

3 Likes

3 × 4 MB: that’s what you pay for, no matter if it’s 3 bytes or 12 MB.

2 Likes

Sounds like MaidSafe now adds self-encrypted metadata as an extra blob to every upload to enable streaming.
Traktion’s implementation of streaming works without any extra chunks; was this looked at? Especially for small files, an overhead resulting in 7 chunks even for the smallest public files doesn’t look desirable…

2 Likes

such datamap chunks (as called for convenience to be differed as normal chunks of content) are normally being very small, and will slowly stepped growing based on the file size, from around 100 Bytes each upto 4MB, with a step of around 30 Bytes per 4MB larger of the file zie.
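
To put rough numbers on that, here is a back-of-envelope sketch in Rust using only the figures above (the constants are illustrative, not values taken from the crate):

```rust
// ~100 B base plus ~30 B of datamap growth per 4 MB chunk of file.
const CHUNK_SIZE: u64 = 4 * 1024 * 1024; // 4 MiB max chunk size
const BYTES_PER_ENTRY: u64 = 30;         // ~30 B per chunk entry

fn estimated_datamap_bytes(file_size: u64) -> u64 {
    100 + file_size.div_ceil(CHUNK_SIZE) * BYTES_PER_ENTRY
}

fn main() {
    // The datamap itself outgrows a single 4 MiB chunk at roughly
    // (4 MiB / 30 B) * 4 MiB ≈ 0.5 TB of file data.
    let crossover = (CHUNK_SIZE / BYTES_PER_ENTRY) * CHUNK_SIZE;
    println!("datamap exceeds 4 MiB at ~{} GB of file", crossover / 1_000_000_000);
    // e.g. a 1 GB file needs a datamap of only a few KB:
    println!("1 GB file -> ~{} B datamap", estimated_datamap_bytes(1_000_000_000));
}
```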

1 Like

Oooooh - the issue was files being really huge and having so many chunks that the data map was more than 4mb in size oO

Interesting… >500gb files

2 Likes

It’s not metadata being added; we just ensure the returned datamap always points to 3 chunks, which can be 3 content chunks for a small file, or the 3 datamap chunks of a root datamap for a large file.

This eliminates the upper layer of datamap recursion needed to support a super-large datamap, and the hassle of a private archive handling a large datamap; i.e. archive data can now be mostly small and almost fixed-size.
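
Here is a conceptual sketch of that scheme; the names, types, and fake self_encrypt below are illustrative, not the actual self_encryption API:

```rust
struct DataMap {
    chunk_refs: Vec<[u8; 32]>, // one 32-byte reference per chunk
}

// Stand-in for real self-encryption: splits content into 3 encrypted
// chunks and returns a datamap with one reference per chunk.
fn self_encrypt(content: &[u8]) -> (DataMap, Vec<Vec<u8>>) {
    let piece = content.len().div_ceil(3).max(1);
    let chunks: Vec<Vec<u8>> = content.chunks(piece).map(|c| c.to_vec()).collect();
    let refs = chunks.iter().map(|_| [0u8; 32]).collect();
    (DataMap { chunk_refs: refs }, chunks)
}

fn shrink_to_root(mut map: DataMap, store: &mut Vec<Vec<u8>>) -> DataMap {
    // While the datamap lists more than 3 chunks, self-encrypt the
    // serialized datamap itself; the parent datamap then points at the
    // 3 datamap chunks, which are uploaded (and paid for) like any other.
    while map.chunk_refs.len() > 3 {
        let serialized: Vec<u8> = map.chunk_refs.concat();
        let (parent, datamap_chunks) = self_encrypt(&serialized);
        store.extend(datamap_chunks);
        map = parent;
    }
    map // the root datamap always points to exactly 3 chunks
}

fn main() {
    // A large file: 110 content chunks, as in the download log above.
    let big = DataMap { chunk_refs: vec![[7u8; 32]; 110] };
    let mut store = Vec::new();
    let root = shrink_to_root(big, &mut store);
    assert_eq!(root.chunk_refs.len(), 3);
    assert_eq!(store.len(), 3); // the 3 extra datamap chunks
}
```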

7 Likes

I re-uploaded a file that was uploaded before with an earlier version. This new file cannot be downloaded. I have experienced this with three different files. Old addresses work for downloading, but I don’t have that for this one.

Bigger files (around 200MB) always fail at chunk 9.

Terminal output below, can post logs tomorrow.

(Heading to sleep, won’t comment more today.)

toivo@toivo-HP-ProBook-450-G5:~$ time ant file download ec3c02edd203c394f1b41148f9648755f9886533ab73ecf989ceb575b28806c5 .
Logging to directory: "/home/toivo/.local/share/autonomi/client/logs/log_2025-09-04_23-35-44"
Connecting to the Autonomi network...
Connected to the network
Input supplied was a public address
Using lazy chunk fetching for 3 of datamap DataMap:
    child: 1
        ChunkInfo { index: 0, dst_hash: 7d34f0..f42b62, src_hash: be9bd6..aff9a8, src_size: 2936 }
        ChunkInfo { index: 1, dst_hash: 761d70..d37873, src_hash: 3daf89..68362f, src_size: 2936 }
        ChunkInfo { index: 2, dst_hash: 5e6fd7..948563, src_hash: 400dd6..b0b410, src_size: 2938 }
Successfully fetched chunk at: 7d34f0f485c1efddbc7babc8faa65dfb8e5484d23cc227b1b671fed654f42b62
Successfully fetched chunk at: 761d705101a59aa2c2449afaa1aee6e4ed34135173a284e1bd2974fc0fd37873
Successfully fetched chunk at: 5e6fd7f9a016c27948d4b860a359e85f56eb6415ca2d31171c119da6f5948563
Successfully processed datamap with lazy chunk fetching
Detected large file at: ec3c02edd203c394f1b41148f9648755f9886533ab73ecf989ceb575b28806c5, downloading via streaming
Using lazy chunk fetching for 3 of datamap DataMap:
    child: 1
        ChunkInfo { index: 0, dst_hash: 7d34f0..f42b62, src_hash: be9bd6..aff9a8, src_size: 2936 }
        ChunkInfo { index: 1, dst_hash: 761d70..d37873, src_hash: 3daf89..68362f, src_size: 2936 }
        ChunkInfo { index: 2, dst_hash: 5e6fd7..948563, src_hash: 400dd6..b0b410, src_size: 2938 }
Successfully fetched chunk at: 7d34f0f485c1efddbc7babc8faa65dfb8e5484d23cc227b1b671fed654f42b62
Successfully fetched chunk at: 761d705101a59aa2c2449afaa1aee6e4ed34135173a284e1bd2974fc0fd37873
Successfully fetched chunk at: 5e6fd7f9a016c27948d4b860a359e85f56eb6415ca2d31171c119da6f5948563
Successfully processed datamap with lazy chunk fetching
Streaming fetching 110 chunks to "." ...
Fetching chunk 0/110 ...
Fetching chunk 1/110 ...
Fetching chunk 2/110 ...
Fetching chunk 1/110 [DONE]
Fetching chunk 3/110 ...
Fetching chunk 3/110 [DONE]
Fetching chunk 4/110 ...
Fetching chunk 4/110 [DONE]
Fetching chunk 5/110 ...
Fetching chunk 2/110 [DONE]
Fetching chunk 6/110 ...
Fetching chunk 0/110 [DONE]
Fetching chunk 7/110 ...
Fetching chunk 6/110 [DONE]
Fetching chunk 8/110 ...
Fetching chunk 5/110 [DONE]
Fetching chunk 9/110 ...
Fetching chunk 8/110 [DONE]
Fetching chunk 9/110 [DONE]
Fetching chunk 7/110 [DONE]

   0: Failed to fetch data from address
   1: Failed to download file
   2: Failed to decrypt data.

Location:
   ant-cli/src/actions/download.rs:147

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
Successfully downloaded chunks were cached.
Please run the command again to obtain the chunks that were not retrieved and complete the download.

real	0m11.671s
user	0m3.469s
sys	0m1.836s

4 Likes

@qi_ma I’m still confused about the implications of this change. Can you elaborate on it more, and also tell us what this means in terms of chunks when uploading a file? Is the minimum, using SE, still three, or is it more?

It would be good to have a fuller explanation of what this means in terms of APIs, features etc as well.

Thanks.

3 Likes

The file was uploaded without metadata, which means that when it is downloaded, a file name is now expected, so that the streaming download can flush content to disk on the fly.

When a folder (like .) is provided, the download fails when trying to flush to disk, basically with an OS file access error.

I tried to download with:

ant file download ec3c02edd203c394f1b41148f9648755f9886533ab73ecf989ceb575b28806c5 ./temp.1

and the 438 MB file can be fetched properly.

We may have an update later on to make sure the error is shown more explicitly, or to flush to a random file name to avoid the hassle.

For the record, the feature request issue When download a file without archive uploaded, file_name must be provided · Issue #3186 · maidsafe/autonomi · GitHub has been created, so that it doesn’t get ignored.

8 Likes

The minimum content size that can be encrypted using SE is still 3 bytes.

Regarding the generated number of chunks:

  • 3 B <= content_size <= 12 MB: 3 chunks (remains the same as the previous SE)
  • 12 MB < content_size < around a TB: X chunks (for content) + 3 extra chunks (for the datamap), compared with the previous SE
  • content_size > around a TB: the generated chunks remain the same as the previous SE
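
Putting those rules into a quick helper (a sketch based only on the figures above; the “around a TB” upper boundary is left out since no exact figure is given):

```rust
const MB: u64 = 1024 * 1024;
const CHUNK: u64 = 4 * MB;

// Chunk-count estimate derived from the rules above.
fn expected_chunks(content_size: u64) -> u64 {
    assert!(content_size >= 3, "SE minimum is 3 bytes");
    if content_size <= 12 * MB {
        3 // same as the previous SE
    } else {
        // X content chunks plus 3 extra datamap chunks
        content_size.div_ceil(CHUNK) + 3
    }
}

fn main() {
    assert_eq!(expected_chunks(1_000), 3);         // tiny file: still 3
    assert_eq!(expected_chunks(12 * MB), 3);       // boundary
    assert_eq!(expected_chunks(100 * MB), 25 + 3); // 100 MB: 28 chunks
}
```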
6 Likes

The crux of the original question, which hasn’t been answered at this point as far as I can see, is:

How many records will be needed to store a 1 KB file? 3 for the file itself, and how many more?

3 Likes

3 records if private no archive

4 records if public no archive

6 records if private with archive

7 records if public with archive
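
A compact way to state the pattern (inferred from the four cases above, for a single file of up to 12 MB; a sketch, not an official formula):

```rust
// 3 data chunks, +1 if the datamap chunk itself is uploaded (public),
// +3 for the archive's own chunks if an archive is used.
fn records(public: bool, archive: bool) -> u32 {
    3 + u32::from(public) + if archive { 3 } else { 0 }
}

fn main() {
    assert_eq!(records(false, false), 3); // private, no archive
    assert_eq!(records(true, false), 4);  // public, no archive
    assert_eq!(records(false, true), 6);  // private with archive
    assert_eq!(records(true, true), 7);   // public with archive
}
```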

11 Likes

Thanks! I think this shows the benefit of tarchives (or similar), as you could have, what, 12,000 1 KB files in a public tarchive and still only need 4 chunks.

4 Likes

Thanks for that. That is clear and understandable.

Now, after reading something David said elsewhere, I have another question, if I may.

Regarding the new datamap: David said self-encryption will always return 3 records and a datamap pointing to those 3 records, never more or less, then recursively until there is only one datamap left.

So is this where the file is broken up into sections of 3 records, self-encryption encrypts each set of 3 records plus a datamap, and then 3 datamaps produce a higher-level datamap, and so on?

So a file needing 9 records will result in:

  • 3 lots of self-encryption, on records 1–3, 4–6 and 7–9
  • 3 datamaps from those 3
  • 1 final datamap for the 3 intermediate datamaps
  • this pattern repeated for larger files

And of course extra records if an archive and/or public upload is used.

1 Like