Is there a max file upload size for the network currently?
I have been trying to upload a 6 GB Ubuntu ISO, and it is reporting a successful upload.
But every time it completes it gives a different address, and all of them fail when downloading?
Logging to directory: "/home/.local/share/safe/autonomi/logs/log_2024-11-15_12-40-50"
autonomi client built with git version: stable / 0205c20 / 2024-11-12
Uploading data to network...
Uploading file: "ubuntu-24.04.1-desktop-amd64.iso"
Upload completed in 395.384934257s
Successfully uploaded: ubuntu-24.04.1-desktop-amd64.iso
At address: 90c6ae6ac14ac7b6970623ef4eaa9b755ea048c80d2859571189c5975126b47e
Number of chunks uploaded: 7
Total cost: 7 AttoTokens
It's only costing a few attos each time now, so I am guessing the chunks are already uploaded.
There is one limit I thought of while reading the latest replies: the self-encryption process stores a copy of all the chunks in the chunking directory, so you need an amount of free disk space equivalent to the size of the file. The chunks used to be stored under the user's home directory path, and so would use whichever storage medium that is on. I think someone said it is now in a different place (/tmp?).
But in any case, you need enough free space on the disk that holds the chunking directory.
Thus if home is on a 1 TB drive and the file is on another drive and is close to 1 TB or larger, then you cannot chunk it.
If the home disk only has 100 GB free, then you'd be limited to files less than 100 GB in size.
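A rough pre-flight check along those lines might look like this; the function and the use of the fs2 crate's `available_space` are my own illustration, not anything the client actually does:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hypothetical check: before self-encrypting, confirm the directory that
/// will hold the temporary chunks has at least as much free space as the
/// source file. `available_space` comes from the `fs2` crate.
fn enough_space_to_chunk(file: &Path, chunk_dir: &Path) -> io::Result<bool> {
    let file_size = fs::metadata(file)?.len();
    let free = fs2::available_space(chunk_dir)?;
    Ok(free >= file_size)
}

fn main() -> io::Result<()> {
    let file = Path::new("ubuntu-24.04.1-desktop-amd64.iso");
    if !enough_space_to_chunk(file, Path::new("/tmp"))? {
        eprintln!("not enough free space in /tmp to chunk {:?}", file);
    }
    Ok(())
}
```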
I seem to remember talk of it not being necessary to store all the chunks before starting to send them, and that this was an optimisation that might come later.
Yes, it is possible: you only need to read the last chunk from the file, use that to start the self-encryption process, and keep the first chunk of the file around.
Then you only need three chunks in memory at a time: last+first+second allows the first chunk to be created, then first+second+third allows the second chunk to be created, and so on until the last chunk, which needs second-last+last plus the first part of the file.
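Roughly, as a sketch of that sliding window; `read_chunk` and the key derivation here are simplified stand-ins, not the real self_encryption API:

```rust
use sha2::{Digest, Sha256};

type Hash = [u8; 32];

fn hash(chunk: &[u8]) -> Hash {
    Sha256::digest(chunk).into()
}

/// Stand-in key derivation: the real self_encryption crate derives an
/// AES key/IV from neighbouring chunk hashes; this just hashes the three
/// hashes together to show the data dependency.
fn derive_key(prev: &Hash, this: &Hash, next: &Hash) -> Hash {
    let mut h = Sha256::new();
    h.update(prev);
    h.update(this);
    h.update(next);
    h.finalize().into()
}

/// One streaming pass over a file split into `n` chunks, fetched via the
/// hypothetical `read_chunk(i)`. Only three hashes (previous, current,
/// next) are held at a time: last+first+second for the first chunk, then
/// first+second+third, and so on, wrapping back to the first chunk.
fn streaming_keys(n: usize, mut read_chunk: impl FnMut(usize) -> Vec<u8>) -> Vec<Hash> {
    assert!(n >= 3, "self-encryption needs at least 3 chunks");
    let last = hash(&read_chunk(n - 1)); // read the final chunk first
    let first = hash(&read_chunk(0));
    let (mut prev, mut this) = (last, first); // chunk 0's "previous" neighbour wraps to the last chunk
    let mut keys = Vec::with_capacity(n);
    for i in 0..n {
        let next = match i + 1 {
            j if j == n => first,    // wrap around: after the last chunk comes the first
            j if j == n - 1 => last, // already hashed up front
            j => hash(&read_chunk(j)),
        };
        keys.push(derive_key(&prev, &this, &next));
        prev = this;
        this = next;
    }
    keys
}
```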
The last I knew, they stored all chunks on disk first to save time when a chunk needs to be uploaded again.
We need the reverse for video streaming etc., but hopefully this implies that is on the way too. Unless it is in there already; I didn't check! I need both for my backup app.
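As I understand it, the data map already records every chunk's hashes, so a streaming download should only need to fetch the chunks covering the requested byte range. A minimal sketch of that index arithmetic, assuming fixed-size chunks (real self_encryption chunk sizes can vary):

```rust
use std::ops::RangeInclusive;

/// Hypothetical helper: which chunk indices cover the byte range
/// [start, start + len)? Assumes every chunk is `chunk_size` bytes.
fn chunks_for_range(start: u64, len: u64, chunk_size: u64) -> RangeInclusive<u64> {
    let first = start / chunk_size;
    let last = (start + len.saturating_sub(1)) / chunk_size;
    first..=last
}

fn main() {
    // Serving bytes 10 MiB..14 MiB of a file stored as 4 MiB chunks
    // touches chunks 2 and 3 only.
    let mib = 1024 * 1024;
    assert_eq!(chunks_for_range(10 * mib, 4 * mib, 4 * mib), 2..=3);
}
```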
Oh I see, I thought that was for encrypting. Still, it sounds good for streaming downloads more efficiently. IIRC, I butted up against that with sn_httpd streaming in the old API version.