Latest Release March 20, 2025

Dunno what you are referring to here. That 50 I mentioned was just me making up a figure. That is significantly smaller than 300MB.

An average of 5MB per node is not going to make for good nodes.

I have a script that can check whether a node can give quotes. It takes a little time at the moment because requests rarely reach only the 5/7 closest nodes.

My 20 super magic nodes win fairly regularly:



1 Like

You know, if you really wanted to test the theory that it is because they have been running since TGE: reset them and start over.

You can just take a backup copy of the autonomi directory before resetting and then reinstate it a few days later to get back your privileged nodes.

2 Likes

They were older than Santa Claus, right? Never give up on them; I feel we are witnessing the birth of a legend :grinning_face_with_smiling_eyes:

2 Likes

I reset them 14 days ago and reduced them from 40 to 20 :wink: They are officially immortal super nodes!



3 Likes

I asked the team about this a week or two ago. As long as they are timeouts, rather than "not found" errors, they will probably be OK.

Not to say suffering timeouts is good. Just that data is unlikely to be lost, despite all the issues.

5 Likes

Reset as in `antctl reset`? Then delete the autonomi directory before adding them back? If not, you probably just brought them back to life.

1 Like

Yes, I cleaned everything. But it doesn’t hurt to reset them once more. What is dead may never die, @neo!



2 Likes

Noooo…! Dddoooonnntttthhhh…!

2 Likes

Another try succeeded, so it is wobbly but not broken :slight_smile:

Attempting to download from address: 7ca61972d90c00d5e2ec085c24a6c09d11eb602c27637490ec6fc9b7f7cc7351
Fetching single file data from 7ca61972d90c00d5e2ec085c24a6c09d11eb602c27637490ec6fc9b7f7cc7351...
Saving file to: "movie.mp4"
Successfully downloaded and saved single file.

I am pulling my hair out trying to get it into an archive. It doesn’t make sense why archives are such a pain in the rear; other small uploads go up fine. Is there any fundamental difference or reason why? I don’t see one.

  --- Archive Upload Attempt 27/50 ---
Uploading public archive referencing 1 files
(1/4) Chunk stored at: aaf5f4a14967207f9927f2898a05f13fbcfe5fd6e73e234236ac21e4880e9ff8 (skipping, already exists)
(2/4) Chunk stored at: 8824cba97aa005768609fec02b5609bd54e4d3167e4ad6dc3c63d2226dda9419 (skipping, already exists)
(4/4) Chunk stored at: 01b2c1dc46909ace58f2b996fb331bbc9c2ff408930fcf1597c30869d9e6a7df (skipping, already exists)
(3/4) Chunk failed to be stored at: 429aa0dbee2e45ce3a34dd9ec4d9081883c312e3dc56d8e6388641dbb8803d9e (A network error occurred.)
(1/1) Chunk failed to be stored at: 429aa0dbee2e45ce3a34dd9ec4d9081883c312e3dc56d8e6388641dbb8803d9e (A network error occurred.)
(1/1) Chunk failed to be stored at: 429aa0dbee2e45ce3a34dd9ec4d9081883c312e3dc56d8e6388641dbb8803d9e (A network error occurred.)
(1/1) Chunk failed to be stored at: 429aa0dbee2e45ce3a34dd9ec4d9081883c312e3dc56d8e6388641dbb8803d9e (A network error occurred.)
  Archive upload attempt 27 failed: A network error occurred.. Retrying in 5 seconds...
7 Likes

To prove my point, I just uploaded another 4-chunk file on the first attempt.
That archive is on attempt 27.

Configuration for after upload completes:
Download and verify the uploaded data afterwards? [y/n]: y
Create a new archive for this upload afterwards? [y/n]: n

Attempting to upload file with retries (max 50 attempts)...

--- Upload Attempt 1/50 ---
(2/4) Chunk stored at: 09ef1c88d7f2a5abcd07ead08060f486ebe9bf4b499dcc197197c2fc42db8227
(3/4) Chunk stored at: b0fe6bb47a263a74f09ea0cfe4b873048bfd4a410052c6052c29e77a7fac7b95
(1/4) Chunk stored at: 5ffe8cdfc7974389b16c56328223b343556fefba7dc40dd85f57afad9aa5e13e
(4/4) Chunk stored at: 6d05153665c855924cc57c0a95a61f655e156f6845f1888c72090e47760adbae
Upload successful on attempt 1!

Final Upload successful!
  Cost: 0.00000000000000000000081976314870 AttoTokens
  Data Address: 6d05153665c855924cc57c0a95a61f655e156f6845f1888c72090e47760adbae

Proceeding with download and verification as requested...
Downloading file using data_get_public...
Download successful! Fetched 1666945 bytes.
Verifying downloaded data...
Verification successful: Original and downloaded data match.
Saving verified file to "./ant.png"
Skipping archive creation as requested.

Upload process completed.
4 Likes

I know during test nets it was often best to upload the files separately, then upload the archive. The chunks were already present, so archive creation seemed to succeed more easily.

I wonder if a more sophisticated upload tool could help here too. It should be straightforward to manually upload chunk by chunk, storing which were successful in a temporary db/file. The app could then focus on retrying only the outstanding chunks, and finally the archive to pull it together.

Basically, a more sophisticated ant file upload tool.

Not sure how useful it would be long term, but it could certainly help in the short term.

I’m not sure whether everything needed for the above is exposed through the API (self encryption library, for example), but it should be possible with Rust.
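The idea above can be sketched without touching the real client API at all. This is a minimal illustration only: `upload_chunk` is a hypothetical stand-in for whatever call actually pushes a chunk to the network, and the manifest filename is made up. The point is just the bookkeeping, so that a rerun retries only what failed.

```python
import json, os

MANIFEST = "upload_manifest.json"  # hypothetical file recording chunk addresses that succeeded

def load_manifest():
    # Resume from a previous run if the manifest exists
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            return set(json.load(f))
    return set()

def save_manifest(done):
    with open(MANIFEST, "w") as f:
        json.dump(sorted(done), f)

def upload_all(chunk_addrs, upload_chunk):
    """Try every chunk not yet recorded; persist progress after each success.

    upload_chunk(addr) -> bool is whatever actually stores the chunk
    (hypothetical here -- plug in the real client call)."""
    done = load_manifest()
    for addr in chunk_addrs:
        if addr in done:
            continue
        if upload_chunk(addr):
            done.add(addr)
            save_manifest(done)
    return done
```

Run it once, and a second invocation skips everything already in the manifest, so a flaky chunk (like the one stuck on attempt 27 above) becomes the only thing being retried.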

6 Likes

Is the archive a different datatype? If so then there is different code somewhere for it in the node.

Then you are paying more. For each transaction there is a cost for executing the contract along with the transfer(s). This is the GAS fee.

4 Likes

Ah, so they are bundled into the one contract transaction? Are they all bundled into one or do we have a group size?

The client bundles chunks together into a transaction (max 256 at a time), and the GAS fee is made up of the transfers paying for storage plus the contract execution fee.

So by doing one chunk at a time you are then paying the GAS fee for one transaction and the GAS fee for the contract execution.

Thus for a 3-chunk file using `ant` you pay for the 5 transactions for storage and also for the contract execution.

By uploading one chunk at a time you pay (num chunks) x (transfer fee + contract fee).
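The comparison above can be put into two small formulas. This is illustrative only: the fee values are symbolic units, and the 256-chunk batch limit is taken from the post, not verified against the client source.

```python
def batched_cost(n_chunks, transfer_fee, contract_fee, batch_size=256):
    """Upload in batches: one contract execution covers up to batch_size chunks."""
    batches = -(-n_chunks // batch_size)  # ceiling division
    return n_chunks * transfer_fee + batches * contract_fee

def per_chunk_cost(n_chunks, transfer_fee, contract_fee):
    """One chunk at a time: a contract execution for every single chunk."""
    return n_chunks * (transfer_fee + contract_fee)
```

With, say, transfer_fee = 1 and contract_fee = 10, a 3-chunk file costs 13 units batched but 33 units one-at-a-time, which is why per-chunk uploading gets expensive fast.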

3 Likes

I am having pretty good upload results by not forcing archives, if @Southside wants to be brave and venture into the unknown as he usually does. See here, Willie.

4 Likes

On it, I’ll give it a go, compiling now :slight_smile:

1 Like

Start small, not with Mint :slight_smile: I am doing pretty well consistently in the under-10MB range. Downloads can take a try or two; they do not currently loop until success.
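Since the script doesn’t yet loop on downloads, the retry logic it already uses for uploads could be wrapped into a generic helper. A minimal sketch, assuming `action` is any callable that wraps the actual download (e.g. a call around `data_get_public`, which is hypothetical here) and returns truthy on success:

```python
import time

def retry_until_success(action, max_attempts=50, delay_seconds=5):
    """Call `action` until it succeeds or the attempts run out.

    Returns the attempt number that succeeded, or None if all failed."""
    for attempt in range(1, max_attempts + 1):
        if action():
            return attempt
        if attempt < max_attempts:
            time.sleep(delay_seconds)  # back off before the next try
    return None
```

This mirrors the "--- Upload Attempt N/50 ---" loop already shown in the logs above, just applied to the download side.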

2 Likes

Too late, I’d already pressed go when I read that post :laughing:

Initializing client...
Client initialized.
Setting up wallet from environment variable...
Wallet setup complete using provided private key.
Reading file: "/home/ubuntu/linuxmint-22.1-cinnamon-64bit.iso"...
Read 2980511744 bytes from file.

Configuration for after upload completes:
Download and verify the uploaded data afterwards? [y/n]: y
Create a new archive for this upload afterwards? [y/n]: n

Attempting to upload file with retries (max 50 attempts)...

--- Upload Attempt 1/50 ---

2 Likes