Error:
0: Transfer Error Failed to send tokens due to Network Error Could not retrieve the record after storing it: 86e8f5bbf9c9e25cbb0c84362283316ecf5aa1d54843c3b789f3f3440a91ca84..
1: Failed to send tokens due to Network Error Could not retrieve the record after storing it: 86e8f5bbf9c9e25cbb0c84362283316ecf5aa1d54843c3b789f3f3440a91ca84.
Luckily, trying the upload again a while later worked:
Uploaded BladeRunnerBlackLotus.mp4 to bb6a0e3d6573863962890523e0e6f9e22425eb544865132a1549fa4153a49124
time safe files download BegBlagandSteal.mp3 80744b3d25bab269cab54e8baccf4f54f1aa01615230b99171bc3576c1ca7230
Client (read all) download progress 30/31
Client (read all) download progress 31/31
Client downloaded file in 8.147327472s
Saved BegBlagandSteal.mp3 at /home/safe/.local/share/safe/client/BegBlagandSteal.mp3
real 0m9.895s
user 0m2.010s
sys 0m2.180s
EDIT: I can fairly consistently grab BegBlag from my cloud instance in ~10 secs. Occasionally the process is killed, presumably because it runs out of memory, as I also have 40 nodes running on this 2 GB / 2 vCPU instance.
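If the process is being killed on a small box, the kernel log usually says whether the OOM killer was responsible. A minimal sketch (reading kernel logs generally needs root or the adm group; journalctl is assumed to be available on systemd distros, with dmesg as fallback):

```shell
# Check whether the kernel OOM killer terminated a process recently.
check_oom() {
    { journalctl -k --no-pager 2>/dev/null; dmesg 2>/dev/null; } |
        grep -i -e "out of memory" -e "oom-kill" ||
        echo "no OOM-killer entries found (or kernel log not readable)"
}
check_oom
```

If entries show up, they name the killed process and its memory usage at the time, which would confirm the 40-nodes-on-2 GB theory.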
However, from home on a much beefier machine I consistently get errors similar to:
🔗 Connected to the Network Downloading BegBlagandSteal.mp3 from 80744b3d25bab269cab54e8baccf4f54f1aa01615230b99171bc3576c1ca7230
Client (read all) download progress 1/31
Client (read all) download progress 2/31
Client (read all) download progress 3/31
Error downloading "BegBlagandSteal.mp3": Chunks error Chunk could not be retrieved from the network: f6515a(11110110)...
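Since these chunk-retrieval failures look transient, a shell-level retry wrapper can paper over them until the client retries properly itself. This is just a workaround sketch, not part of sn_cli; the RETRY_DELAY variable and the retry function are my own names:

```shell
# Retry a flaky command up to $1 times, pausing between attempts.
retry() {
    max=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$max" ] && return 1
        echo "attempt $n failed, retrying..." >&2
        n=$((n + 1))
        sleep "${RETRY_DELAY:-5}"
    done
}

# Usage (address from this thread):
# retry 3 safe files download BegBlagandSteal.mp3 \
#     80744b3d25bab269cab54e8baccf4f54f1aa01615230b99171bc3576c1ca7230
```

The same wrapper works for uploads, though a failed upload that already sent a payment may behave differently from a clean retry.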
The cloud instance is running sn_cli 0.83.26; the home machine is on sn_cli 0.83.28.
FURTHER EDIT:
Ran safeup client on both machines, and it's much faster and more reliable on both. The cloud box is now grabbing BegBlag in ~4 secs:
Client (read all) download progress 30/31
Client (read all) download progress 31/31
Client downloaded file in 1.930989474s
Saved BegBlagandSteal.mp3 at /home/safe/.local/share/safe/client/BegBlagandSteal.mp3
real 0m4.068s
user 0m1.855s
sys 0m0.766s
safe@MemDebugNet-southside01:~$ safe -V
sn_cli 0.83.31
At home on a bog-standard ADSL line it's consistently 23-27 secs with no failures so far:
Client (read all) download progress 30/31
Client (read all) download progress 31/31
Client downloaded file in 23.364606118s
Saved BegBlagandSteal.mp3 at /home/willie/.local/share/safe/client/BegBlagandSteal.mp3
real 0m24.952s
user 0m4.971s
sys 0m1.888s
willie@gagarin:~/.local/share/safe$ safe -V
sn_cli 0.83.31
Input was split into 6211 chunks
Will now attempt to upload them...
Error:
0: Transfer Error Failed to send tokens due to Network Error Record was not found locally..
1: Failed to send tokens due to Network Error Record was not found locally.
It died after 17 minutes, somewhere around 1000 chunks I think.
I see a headache rapidly approaching as I am way out of my depth.
I see a huge number of these in both valgrind runs:
==1442== Warning: invalid file descriptor 1031 in syscall accept()
==1442== Warning: invalid file descriptor 1031 in syscall socket()
I see memory gaps; they appear to be dealt with, but new ones appear. Not sure if they accumulate over time.
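My hedged guess on those warnings: a file descriptor number like 1031 just above the common 1024 default suggests the node hit its per-process fd limit under valgrind (which also reserves a few descriptors near the top of the range for itself). A quick sketch to check and raise the soft limit before rerunning:

```shell
# Show the current fd limits; 1024 is a common soft default.
echo "soft fd limit: $(ulimit -Sn), hard fd limit: $(ulimit -Hn)"
# Raise the soft limit for this shell (only up to the hard limit).
ulimit -n 4096 2>/dev/null || echo "could not raise soft fd limit"
# Then rerun with fd tracking so still-open descriptors are listed at exit:
# valgrind --track-fds=yes ./safenode
```

If the warnings disappear with a higher limit, the nodes are simply opening more sockets than the default allows rather than leaking them.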
Going to jump out of this pond, though, and leave it to the pros (you).
josh@pc1:~$ safeup client
**************************************
*                                    *
*          Installing safe           *
*                                    *
**************************************
Installing safe for x86_64-unknown-linux-musl at /home/josh/.local/bin...
Retrieving latest version for safe...
Installing safe version 0.83.31...
[########################################] 9.02 MiB/9.02 MiB
safe 0.83.31 is now available at /home/josh/.local/bin/safe
@joshuef It failed, but again not with a hang, so atm I can't replicate the issue I had.
Chunking 129 files...
Input was split into 53451 chunks
Will now attempt to upload them...
Error:
0: Transfer Error Transfer error: Not enough balance, 0.000000000 available, 0.000056400 required.
1: Transfer error: Not enough balance, 0.000000000 available, 0.000056400 required
2: Not enough balance, 0.000000000 available, 0.000056400 required
Location:
sn_cli/src/subcommands/files.rs:210
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
real 12m17.520s
user 9m21.421s
sys 2m40.334s
mrh@plUto:~$ safe wallet balance
Logging to directory: "/home/mrh/.local/share/safe/client/logs/log_2023-10-06_18-42-34"
Using SN_LOG=all
Built with git version: 379bf7c / main / 379bf7c
321.999732525
$ safe -V
sn_cli 0.83.31
Anyone with the new version will need to clean out their client wallet.
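A minimal sketch of cleaning out the client wallet; the wallet path is an assumption based on the client data directory seen earlier in this thread (~/.local/share/safe/client/), so verify it on your machine first. It moves the directory aside rather than deleting it, in case the old balance matters later:

```shell
# Move an old client wallet aside so the new client starts clean.
clean_wallet() {
    dir=$1
    if [ -d "$dir" ]; then
        # Keep a timestamped backup rather than rm -rf.
        mv "$dir" "$dir.bak.$(date +%s)" && echo "moved old wallet aside: $dir.bak.*"
    else
        echo "no wallet directory at $dir"
    fi
}

# Assumed path, based on the client data dir seen in this thread:
clean_wallet "$HOME/.local/share/safe/client/wallet"
```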
edit: thanks @peca, that's highlighted that we're not retrying the PUT as we should be in there. As far as I can see there's no concrete reattempt if the first was not verified, for whatever reason. This would indeed make payments more brittle.
Going to head off for the evening. I hope no one is dispirited by the seemingly plentiful client errors, but I feel like we're chipping away at some things that have been hidden or just not possible to reach so far. I really don't see anything fundamentally wrong here, just us not coping with some network realities (or bugs in doing that). So I'm hopeful we'll get something more stable with these bigger files soon.
I missed that sentence, thanks for quoting it @josh.
I hope people aren’t seeing us smash these bugs in a bad light too!
What folks are witnessing here is rarely seen from the outside, and for such a ground-breaking project it's a real privilege not just to witness but to participate in.
Folks here will be able to tell their grandchildren they helped test this stuff and worked with the team that built it.
Imagine being a part of the Apollo programme and the first moon landing. That’s us, and everyone at MaidSafe.
When a few testnets aren't running smoothly, it can seem like we're heading towards a re-run of previous "close, but no cigar, and now we need a fundamental re-think" scenarios. But the stability on some of these testnets is unprecedented, and the functionality on show is so close to MVP: payments for storing, nodes earning, demand/supply based pricing, etc. It certainly feels different, and close.
It won’t work perfectly until it does… and when it does, well… what an accomplishment it will be!