Announcement: Latest Release
Please follow these instructions to upgrade your nodes to the newest version to ensure the best performance and stability for everyone! Check out the GitHub Release for the changelog details. We are now testing the Dave release with a small group from the community to iron out any wrinkles and to prepare documentation and support materials for an official community release next week.
Then, press Ctrl + U, and hit Enter. This will upgrade your nodes. Upgrading can take several minutes for each node. Please don’t close the app or stop your nodes during this process.
Your nodes will now stop
Press Ctrl + S to start your nodes again
For CLI Tool Users:
If you’re using the CLI tool, please update and upgrade. Run the update first: antup update
Then run the upgrade: antctl upgrade --interval 60000
Finally, start them again with antctl start --interval 60000
For ALL Users:
Please start your nodes gradually — especially if you plan on running multiple nodes.
Be conservative with CPU allocation to maintain stability
You must be on 2025.7.1.3 or newer to earn rewards
Quick clarification on the recent ant release (v0.4.6): There is a breaking change affecting file compatibility between versions. If a file is uploaded using the new version of ant, it cannot be downloaded using an older version. The new ant can download files uploaded with older versions. This is due to a change in the underlying datamap format. This doesn’t affect all builders, but mainly those interacting directly with self_encryption, such as anttp. If you’re using autonomi, a dependency update should be sufficient.
I posted this on the IF discord channel, but reposting here so everyone can see. @rusty.spork can you get a brave MaidSafe dev to answer these somewhere?:
I have some questions about this new release, in particular this tidbit:
The streaming capability results in a new datamap format that requires four extra chunks. If there is an attempt to re-upload files uploaded before the streaming implementation, there will be a cost for these extra chunks. The new datamap format always returns a root datamap that points to three chunks. These three extra chunks will now be paid for in uploads.
So has the minimum size of a self-encrypted file grown from 12 MB to 28 MB?? (3 chunks originally + 4 extra chunks)
Also, is there a way to stream out a portion of a file, like what @Traktion is doing in tarchive?
Any example code of streaming downloads/uploads? In particular, is there a way to query a running download/upload operation to get the completion status or do we need to build this ourselves? Maybe Dave is doing this under the hood and we can just use that implementation?
And a +1 to public scratchpads being added to the standard API, as @riddim brought up; everyone is using these.
Agreed, my intention wasn’t clear in my question. Basically I’m trying to figure out the minimum size of a file (or collection of files) so I can minimize ETH gas fees, because as I understand it, a chunk costs the same whether it holds 1 byte of data or 4 MB of data. It used to be 12 MB, now it’s 28 MB, so it becomes even more important to pack small files to keep costs down.
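To make that concrete, here is a back-of-envelope sketch of what I mean. It is only my own cost model: it assumes a flat price per chunk and the old 3-chunk minimum per self-encrypted file, and it ignores the extra datamap chunks discussed above.

```rust
// Rough cost model (my assumption, not the actual pricing code):
// chunks are paid at a flat rate regardless of how full they are,
// and every self-encrypted file occupies at least 3 chunks.
const CHUNK_SIZE: u64 = 4 * 1024 * 1024; // 4 MB
const MIN_CHUNKS_PER_FILE: u64 = 3;

// Chunks paid when each file is uploaded on its own.
fn chunks_individually(file_sizes: &[u64]) -> u64 {
    file_sizes
        .iter()
        .map(|&size| size.div_ceil(CHUNK_SIZE).max(MIN_CHUNKS_PER_FILE))
        .sum()
}

// Chunks paid when the same files are packed into a single blob first
// (e.g. a tar-style archive), ignoring the archive's own small overhead.
fn chunks_packed(file_sizes: &[u64]) -> u64 {
    let total: u64 = file_sizes.iter().sum();
    total.div_ceil(CHUNK_SIZE).max(MIN_CHUNKS_PER_FILE)
}

fn main() {
    let files = vec![1024u64; 100]; // 100 files of 1 kB each
    println!("uploaded individually: {} chunks", chunks_individually(&files)); // 300
    println!("packed into one blob:  {} chunks", chunks_packed(&files)); // 3
}
```

That difference in chunk count is the whole reason I care about the minimum size.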
Sounds like MaidSafe now adds self-encrypted metadata as an extra blob to every upload to enable streaming.
Traktion’s implementation of streaming works without any extra chunks; was this looked at? Especially for small files, an overhead resulting in 7 chunks even for the smallest public files doesn’t look desirable…
Such datamap chunks (so called for convenience, to distinguish them from normal content chunks) are normally very small, and grow slowly with the file size, from around 100 bytes each up to 4 MB, with a step of around 30 bytes for every additional 4 MB of file size.
It’s not metadata being added; we just ensure the returned datamap always points to 3 chunks, which could be 3 content chunks for a small file, or 3 datamap chunks of a root datamap for a large file.
This eliminates the extra recursive layer of datamaps needed to support very large datamaps, and the hassle of private archives handling large datamaps, i.e. archive data can now be mostly small and almost fixed in size.
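As a rough illustration of those numbers (approximate figures only, and a sketch of the arithmetic rather than the actual self_encryption code), the per-datamap-chunk overhead grows very slowly:

```rust
// Approximate size of one datamap chunk for a file of `file_size` bytes,
// using the rough figures above: ~100 bytes to start, growing by about
// 30 bytes for every additional 4 MB of file size, capped at the 4 MB
// chunk limit. Illustration only.
const MB: u64 = 1024 * 1024;

fn approx_datamap_chunk_bytes(file_size: u64) -> u64 {
    let extra_4mb_steps = file_size / (4 * MB);
    (100 + 30 * extra_4mb_steps).min(4 * MB)
}

fn main() {
    for size_mb in [4u64, 40, 400, 4_000] {
        let bytes = approx_datamap_chunk_bytes(size_mb * MB);
        println!("{size_mb} MB file -> datamap chunk of roughly {bytes} bytes");
    }
}
```

So even for multi-gigabyte files the datamap chunks stay tiny compared with the 4 MB content chunks.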
I re-uploaded a file that had previously been uploaded with an earlier version. The new upload cannot be downloaded. I have experienced this with three different files. The old addresses work for downloading, but I don’t have one for this file.
Bigger files (around 200MB) always fail at chunk 9.
Terminal output below, can post logs tomorrow.
(Heading to sleep, won’t comment more today.)
toivo@toivo-HP-ProBook-450-G5:~$ time ant file download ec3c02edd203c394f1b41148f9648755f9886533ab73ecf989ceb575b28806c5 .
Logging to directory: "/home/toivo/.local/share/autonomi/client/logs/log_2025-09-04_23-35-44"
Connecting to the Autonomi network...
Connected to the network
Input supplied was a public address
Using lazy chunk fetching for 3 of datamap DataMap:
child: 1
ChunkInfo { index: 0, dst_hash: 7d34f0..f42b62, src_hash: be9bd6..aff9a8, src_size: 2936 }
ChunkInfo { index: 1, dst_hash: 761d70..d37873, src_hash: 3daf89..68362f, src_size: 2936 }
ChunkInfo { index: 2, dst_hash: 5e6fd7..948563, src_hash: 400dd6..b0b410, src_size: 2938 }
Successfully fetched chunk at: 7d34f0f485c1efddbc7babc8faa65dfb8e5484d23cc227b1b671fed654f42b62
Successfully fetched chunk at: 761d705101a59aa2c2449afaa1aee6e4ed34135173a284e1bd2974fc0fd37873
Successfully fetched chunk at: 5e6fd7f9a016c27948d4b860a359e85f56eb6415ca2d31171c119da6f5948563
Successfully processed datamap with lazy chunk fetching
Detected large file at: ec3c02edd203c394f1b41148f9648755f9886533ab73ecf989ceb575b28806c5, downloading via streaming
Using lazy chunk fetching for 3 of datamap DataMap:
child: 1
ChunkInfo { index: 0, dst_hash: 7d34f0..f42b62, src_hash: be9bd6..aff9a8, src_size: 2936 }
ChunkInfo { index: 1, dst_hash: 761d70..d37873, src_hash: 3daf89..68362f, src_size: 2936 }
ChunkInfo { index: 2, dst_hash: 5e6fd7..948563, src_hash: 400dd6..b0b410, src_size: 2938 }
Successfully fetched chunk at: 7d34f0f485c1efddbc7babc8faa65dfb8e5484d23cc227b1b671fed654f42b62
Successfully fetched chunk at: 761d705101a59aa2c2449afaa1aee6e4ed34135173a284e1bd2974fc0fd37873
Successfully fetched chunk at: 5e6fd7f9a016c27948d4b860a359e85f56eb6415ca2d31171c119da6f5948563
Successfully processed datamap with lazy chunk fetching
Streaming fetching 110 chunks to "." ...
Fetching chunk 0/110 ...
Fetching chunk 1/110 ...
Fetching chunk 2/110 ...
Fetching chunk 1/110 [DONE]
Fetching chunk 3/110 ...
Fetching chunk 3/110 [DONE]
Fetching chunk 4/110 ...
Fetching chunk 4/110 [DONE]
Fetching chunk 5/110 ...
Fetching chunk 2/110 [DONE]
Fetching chunk 6/110 ...
Fetching chunk 0/110 [DONE]
Fetching chunk 7/110 ...
Fetching chunk 6/110 [DONE]
Fetching chunk 8/110 ...
Fetching chunk 5/110 [DONE]
Fetching chunk 9/110 ...
Fetching chunk 8/110 [DONE]
Fetching chunk 9/110 [DONE]
Fetching chunk 7/110 [DONE]
0: Failed to fetch data from address
1: Failed to download file
2: Failed to decrypt data.
Location:
ant-cli/src/actions/download.rs:147
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
Successfully downloaded chunks were cached.
Please run the command again to obtain the chunks that were not retrieved and complete the download.
real 0m11.671s
user 0m3.469s
sys 0m1.836s
@qi_ma I’m still confused about the implications of this change. Can you elaborate on it a bit more, and also tell us what this means in terms of chunks when uploading a file? Is the minimum, using SE, still three or is it more?
It would be good to have a fuller explanation of what this means in terms of APIs, features etc as well.
The file was uploaded without metadata, which means that when it is downloaded, a file name is now expected to be provided so that the streaming download can flush content to disk on the fly.
When a folder (like .) is provided, the download fails when trying to flush to disk, basically an OS file access error.
I tried to download with:
ant file download ec3c02edd203c394f1b41148f9648755f9886533ab73ecf989ceb575b28806c5 ./temp.1
and the 438 MB file was fetched properly.
We may push an update later to make the error more explicit, or to flush to a randomly generated name to avoid the hassle.
Thanks! I think this shows the benefit of tarchives (or similar), as you could have, what, 12,000 1 kB files in a public tarchive and still only need 4 chunks.
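Back-of-envelope check on that claim, assuming 4 MB chunks and one extra chunk for the archive’s root datamap (my assumptions, not measured figures):

```rust
// 12,000 files of 1 kB packed into one archive blob:
// ~11.7 MB of content -> 3 content chunks, plus 1 chunk for the
// root datamap of the public archive = 4 chunks in total.
fn main() {
    let total_bytes: u64 = 12_000 * 1024;
    let chunk_size: u64 = 4 * 1024 * 1024;
    let content_chunks = total_bytes.div_ceil(chunk_size).max(3);
    let total_chunks = content_chunks + 1; // + root datamap chunk
    println!("{content_chunks} content chunks, {total_chunks} chunks in total");
}
```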
Now, after reading something David said elsewhere, I have another question, if I may.
Regarding the new datamap: David said self-encryption will always return 3 records and a datamap pointing to those 3 records, never more or fewer, and then recursively until there is only one datamap left.
So is this where the file is broken up into sections of 3 records each, self-encryption encrypts each set of 3 records plus a datamap, and then 3 datamaps produce a higher-level datamap, and so on?
So a file needing 9 records will result in
3 lots of self-encryption on records 1-3 and 4-6 and 7-9
3 datamaps from those 3
1 final datamap for the 3 intermediate datamaps
with this pattern followed for larger files
And of course extra records if an archive and/or a public upload is used
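If I’ve understood that right, a rough model of the record counts would be something like the sketch below. This is just my mental model of the recursion, not the actual self_encryption implementation.

```rust
// Model of the recursion as I understand it: every group of up to 3
// records produces 1 datamap record at the next level up, and the
// datamap records are grouped again until a single root datamap remains.
fn total_records(content_records: u64) -> u64 {
    let mut level = content_records; // records at the current level
    let mut total = content_records;
    while level > 1 {
        level = level.div_ceil(3); // 3 records -> 1 datamap record above
        total += level;
    }
    total
}

fn main() {
    // 9 content records: 3 datamaps + 1 root datamap = 13 records in total.
    println!("9 records -> {} in total", total_records(9));
    // 3 content records: just 1 root datamap on top = 4 records in total.
    println!("3 records -> {} in total", total_records(3));
}
```

Does that match what the code actually does?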