There’s a Safe App opportunity for someone here, both as a standalone app and as a library for use by any app that stores data:
Uploads don’t happen immediately but enter a staging process where the app monitors storage price and will wait until a sensible opportunity to store each chunk arises. Lots of scope for fancy algo, sexy UI and user twiddling of knobs and settings!
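The staging idea could be sketched in a few lines of shell. Everything here is hypothetical: `current_store_cost` is a stand-in for however the app would sample the network's store cost (stubbed with a random value for demonstration), and the threshold is one example of a user-tunable knob.

```shell
# Minimal sketch of a staged-upload loop: hold each chunk in a queue and
# only upload when the (hypothetical) store cost drops below a threshold.

THRESHOLD_NANOS=2000   # user-tunable knob: max acceptable price in nano tokens per chunk

# Placeholder for however the app would query the current store cost;
# stubbed here with a pseudo-random value purely for demonstration.
current_store_cost() {
  echo $(( RANDOM % 4000 ))
}

stage_and_upload() {
  chunk="$1"
  while true; do
    cost=$(current_store_cost)
    if [ "$cost" -le "$THRESHOLD_NANOS" ]; then
      echo "uploading $chunk at cost $cost"
      # the real upload would go here, e.g.: safe files upload "$chunk"
      return 0
    fi
    echo "cost $cost too high for $chunk; waiting..."
    sleep 1   # in a real app this poll interval would be configurable
  done
}

stage_and_upload "chunk_000"
```

A real version would run many chunks concurrently and let the user tune the threshold and poll interval — that's where the fancy algo and knob-twiddling come in.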
Sigh… how? Not like either of these two ways I tried, it seems:
topi@topi-HP-ProBook-450-G5:~$ --log-output-dest upload_log
--log-output-dest: command not found
topi@topi-HP-ProBook-450-G5:~$ time safe files upload -c 50 --batch-size 20 Waterfall_slo_mo.mp4 --log-output-dest upload_log
error: unexpected argument '--log-output-dest' found
tip: to pass '--log-output-dest' as a value, use '-- --log-output-dest'
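If this build of the CLI doesn't accept `--log-output-dest`, one workaround that needs no Safe-specific flags is to capture the terminal output yourself with `tee`. The stub function below stands in for the real command so the pattern is self-contained; the commented line shows the actual invocation.

```shell
# tee writes the stream to a file while still passing it through to the
# terminal; 2>&1 folds stderr into the same stream so errors are captured too.
safe_upload_cmd() { echo "demo: pretend upload output"; }   # stand-in for the real command
safe_upload_cmd 2>&1 | tee upload_log

# For the real thing:
# time safe files upload -c 50 --batch-size 20 Waterfall_slo_mo.mp4 2>&1 | tee upload_log
```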
Then I aborted the fourth upload after one batch of 20 chunks
Then the upload failed three times in a row with:

0: Transfer Error Failed to send tokens due to Network Error Could not retrieve the record after storing it: 570596b8d4e95ea8026cbc3045cf17b9f3e1a0c43bcca2e57ad84d63e0a2d01f..
1: Failed to send tokens due to Network Error Could not retrieve the record after storing it: 570596b8d4e95ea8026cbc3045cf17b9f3e1a0c43bcca2e57ad84d63e0a2d01f.
Logs attached. Toivos_upload_failure.zip (2.8 MB)
time safe files upload testfile5
Built with git version: 26c3d70 / main / 26c3d70
Instantiating a SAFE client...
🔗 Connected to the Network Total number of chunks to be stored: 1
Transfers applied locally
Error:
0: Transfer Error Failed to send tokens due to Network Error Could not retrieve the record after storing it: ac9d2bafa0d601f7929ba664434979869f64a7c82eef57f2d6a7d32945e87ad0..
1: Failed to send tokens due to Network Error Could not retrieve the record after storing it: ac9d2bafa0d601f7929ba664434979869f64a7c82eef57f2d6a7d32945e87ad0.
Location:
sn_cli/src/subcommands/files.rs:179
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
real 1m11.469s
user 0m11.507s
sys 0m0.418s
I’ve tried switching to the Safe nodes quoted in the startup top post, although I can use mine for ‘safe wallet balance’, so I think mine must be working as a target for clients.
safe@IntolerantNetSouthside01:~/.local/share/safe$ safe files upload -c30 --batch-size 40 /usr/share/doc/gawk/
Built with git version: 26c3d70 / main / 26c3d70
Instantiating a SAFE client...
🔗 Connected to the Network Total number of chunks to be stored: 113
Killed
No. All I have are some lo-res graphs of CPU, Disk and network utilisation. @Josh has a cunning script that might do it but it only takes a snapshot every 10mins as default.
Right now I am waiting for this upload to finish with -c 100 --batch-size 5
Not wanting to say anything in case I jinx it.
OK that finished successfully
safe@IntolerantNetSouthside01:~/.local/share/safe$ safe files upload -c100 --batch-size 5 /usr/share/doc/gawk/
Built with git version: 26c3d70 / main / 26c3d70
Instantiating a SAFE client...
🔗 Connected to the Network Total number of chunks to be stored: 113
Transfers applied locally
After 15.217836169s, All transfers made for total payment of Token(8744) nano tokens for 5 chunks.
Successfully made payment of 0.000008744 for 5 chunks.
Successfully stored wallet with cached payment proofs, and new balance 99.999479419.
After 29.030794695s, uploaded 5 chunks, current progress is 5/113.
…
After 17.556563423s, All transfers made for total payment of Token(1871) nano tokens for 1 chunks.
Uploaded chunk #f8afb0.. in 22 seconds
After 706.504649052s, verified 113 chunks
16 failed chunks were found, repaid & re-uploaded.
======= Verification: 16 chunks to be checked and repayed if required =============
======= Verification Completed! All chunks have been paid and stored! =============
Uploaded all chunks in 37 minutes 53 seconds
Writing 4538 bytes to "/home/safe/.local/share/safe/client/uploaded_files/file_names_2023-09-19_21-40-41"
So hat-tip to @TylerAbeoJordan for the -c100 --batch-size 5 suggestion. Verrrry slow though: nearly 38 mins for 396 kB of gawk docs.
I ran the original command again and got a more familiar error
safe@IntolerantNetSouthside01:~/.local/share/safe$ safe files upload -c30 --batch-size 40 /usr/share/doc/gawk/
Built with git version: 26c3d70 / main / 26c3d70
Instantiating a SAFE client...
🔗 Connected to the Network Total number of chunks to be stored: 113
Error:
0: Transfer Error Failed to send tokens due to Network Error Not enough store cost quotes returned from the network to ensure a valid fee is paid..
1: Failed to send tokens due to Network Error Not enough store cost quotes returned from the network to ensure a valid fee is paid.
Location:
sn_cli/src/subcommands/files.rs:179
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
I had a quick and dirty memory logger script running, which shows memory is tight (you try running 30+ nodes on a 2 GB instance) but not critical. I will steal inspiration from @Josh to graph this and play with various values for concurrency and batch size if time permits.
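For anyone who wants the same, a quick-and-dirty memory logger along those lines might look like this on Linux: one timestamped sample per interval from `free -m`, appended to a log file. The interval and sample count are shortened here for demonstration (set the interval to 600 for 10-minute snapshots and loop forever in real use).

```shell
# Append timestamped memory-usage snapshots (total/used/free MB) to a log file.
LOGFILE=mem_usage.log
INTERVAL=1    # seconds between samples; 600 would match 10-minute snapshots
SAMPLES=3     # bounded here for demonstration; use an endless loop in real use

: > "$LOGFILE"   # start with a fresh log
for _ in $(seq "$SAMPLES"); do
  # The "Mem:" line of free -m holds: total used free shared buff/cache available
  echo "$(date '+%F %T') $(free -m | awk '/^Mem:/ {print $2, $3, $4}')" >> "$LOGFILE"
  sleep "$INTERVAL"
done
```

Graphing is then just a matter of feeding `mem_usage.log` to gnuplot or a spreadsheet.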
I’m quite surprised by the variance in success rate of uploads: my attempts were all successful. I wonder if the difference lies more in bandwidth/latency rather than cpu/memory constraints.
Below is a link to a (partial) grab of the terminal output of a ~600 MB upload. I wanted to do an analysis of the variance in upload cost of chunks (the Token(xxx) part), but I’m flat out on a bid. If someone would like to take that on, a graph showing that variance over time would be useful, I think. I can’t see how the price can vary so much between batches: these are 5-chunk batches with only a few minutes between them, and I can see some major variances.
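As a starting point for that analysis, the per-batch costs can be pulled straight out of the terminal grab with grep/tr/awk, since each payment line contains a Token(xxx) figure in the format shown in the output above. The two sample lines below are copied from this thread; point the same pipeline at the full grab instead.

```shell
# Sample of the upload output (two payment lines copied from this thread).
cat > sample_output.txt <<'EOF'
After 15.217836169s, All transfers made for total payment of Token(8744) nano tokens for 5 chunks.
After 17.556563423s, All transfers made for total payment of Token(1871) nano tokens for 1 chunks.
EOF

# Extract the nano-token cost from each "Token(NNNN)" occurrence:
# grep -o keeps only the matching text, tr -d strips the letters and parens.
grep -o 'Token([0-9]*)' sample_output.txt | tr -d 'Token()' > costs.txt
cat costs.txt

# Quick summary of the extracted costs:
awk '{s+=$1; n++} END {print "mean:", s/n}' costs.txt   # → mean: 5307.5
```

One column of numbers like `costs.txt` is enough to plot cost-per-batch over time and eyeball the variance.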
That sounds to me like you’ve aborted the command and corrupted the local wallet. So I’d assume it’s attempting to double spend for all further uploads.
If you erase your wallet, get fresh tokens from the faucet and try again, does it work then?