I can't read that grey screen - can you paste it in here so we can read it, please?
Will do in the morning, having to hit the hay, as they say.
I'm running my script for uploading 10 1MB files of random data with their checksums. However, I'm sure the output to stdout used to display the filename and the long URL for the file. Is there a way of getting these from the files in ~/.local/share/safe/client/uploaded_files?
or is there some way of outputting them to stdout when they are uploading so I can log them?
Being able to download the files and the checksums to a folder and verifying the data is going to depend on this and is the next thing I want to write.
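In the meantime, here's a rough sketch of what I might try: just dumping whatever is in that folder into a log, assuming the entries are readable text (I haven't checked the exact format):

#!/usr/bin/env bash
# Rough sketch: dump each entry under uploaded_files into a log.
# Assumes the entries are readable text; the exact format is unverified.
UPLOADED_DIR="$HOME/.local/share/safe/client/uploaded_files"
LOG_FILE="$HOME/upload_urls.log"
for f in "$UPLOADED_DIR"/*; do
    [ -e "$f" ] || continue           # skip if the directory is empty
    echo "=== $f ===" >> "$LOG_FILE"
    cat "$f" >> "$LOG_FILE"           # whatever the client recorded for this upload
done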
This is my rough and ready script. I apologise in advance for the brain damage this will cause in those who know what they are doing. You will literally become stupider from looking at it!
safe_upload_stressor.zip (1.6 KB)
I like it
Stop apologising and stick it on GitHub (or similar)
One thing I'd change:
TEMP_DIR=$HOME/.local/share/safe/tools/safe_upload_stressor/temp
and for the logs
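i.e. something along these lines (the logs path here is a hypothetical example of the same convention, not a quote):

TEMP_DIR="$HOME/.local/share/safe/tools/safe_upload_stressor/temp"
LOG_DIR="$HOME/.local/share/safe/tools/safe_upload_stressor/logs"   # hypothetical, same convention
mkdir -p "$TEMP_DIR" "$LOG_DIR"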
We're missing the amount there. @Anselme is on the case just now, so it will be in for the next testnet.
In the meantime you'd have to query the node's wallet key via the CLI, I'm afraid!
It's setting a semaphore which is used over the low-level network PUT/GET (before any retries etc. are applied). So it controls how many of those operations we can have in flight at once, which means you should still be able to have more than the max CPU count. It's more likely you'll get through more, faster, with more CPUs though. (It probably has a tighter relationship with how long you're at max CPU, perhaps?)
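If a shell analogy helps, this is roughly the effect (illustrative only, not the client's actual code; the chunk file names are made up):

#!/usr/bin/env bash
# Shell analogy of the client's internal semaphore: at most MAX_IN_FLIGHT
# uploads run concurrently, regardless of core count.
MAX_IN_FLIGHT=4
for f in chunk_*.bin; do
    while (( $(jobs -rp | wc -l) >= MAX_IN_FLIGHT )); do
        wait -n                     # block until one in-flight job finishes (bash >= 4.3)
    done
    safe files upload "$f" &        # start another PUT
done
wait                                # let the stragglers finish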
So we think we have a fix for the store cost issue. What is a pain is that it appears not to be backwards compatible with how protocol versioning works at this point… So we're just debugging that, and if we can, we'll get it usable on this testnet for some more testing.
So I went to bed last night after starting the -c 200 --batch-size 2 round… but I'm just up now and it worked!
First round of upload completed, verifying and repaying if required…
======= Verification: 197 chunks to be checked and repayed if required =============
======= Verification Completed! All chunks have been paid and stored! =============
Uploaded all chunks in 94 minutes 44 seconds
Does that indicate anything to you @joshuef ?
What I was expecting is that with the -c 100 --batch-size 100 options, all chunk uploads should start approximately simultaneously.
But what I see is that after 1 minute of uploading, messages like Uploaded chunk ... in 2 seconds still appear.
What was that chunk doing for the remaining 58 seconds?
Oh yes. We did decide tools ought to go in .local/share/safe/tools
One thing with that though is that the ‘cleanup’ recommended in the notes for each testnet says to do:-
rm -rf ~/.local/share/safe
which will delete everyone’s meticulously curated library of tools!
Maybe that could be changed to:-
rm -rf ~/.local/share/safe/client ; rm -rf ~/.local/share/safe/node
to avoid that?
@happybeing feat: log payment amount by grumbach · Pull Request #742 · maidsafe/safe_network · GitHub
Year of the testnet continues.
Any faster and some might consider the rate intolerable.
How is this done? Is it that the client pays 50% more than the price check? Or is the client willing to pay up to 50% more if the price rises? I realise it will change, just interested.
I tried to upload a 550MB movie with the following parameters and results. When it started well, I aborted the process after 2-5 batches and started over with different parameters. It seems that once it starts failing, it keeps failing even when returning to previously successful or smaller parameters. (Sorry, no logs.)
-c 100 --batch-size 1 Mouvie.mp4 SUCCESS
-c 100 --batch-size 5 Mouvie.mp4 SUCCESS
-c 200 --batch-size 10 Mouvie.mp4 SUCCESS
-c 200 --batch-size 20 Mouvie.mp4 FAIL
-c 200 --batch-size 15 Mouvie.mp4 FAIL
-c 200 --batch-size 12 Mouvie.mp4 FAIL
-c 200 --batch-size 5 Mouvie.mp4 FAIL
-c 200 --batch-size 1 Mouvie.mp4 FAIL
-c 100 --batch-size 1 Mouvie.mp4 FAIL
-c 5 --batch-size 5 Mouvie.mp4 FAIL
The error I got was always this, but with varying record identifier numbers:
0: Transfer Error Failed to send tokens due to Network Error Could not retrieve the record after storing it: 1d530ce5b8e47cb304095c5bca2a9888bc5dd793942691f9361ba8282590a566..
1: Failed to send tokens due to Network Error Could not retrieve the record after storing it: 1d530ce5b8e47cb304095c5bca2a9888bc5dd793942691f9361ba8282590a566.
I'm wondering if any of this could be explained by faulty / underperforming nodes? I guess we don't have pruning for them in place yet?
It could be waiting to be spawned on the tokio runtime, or it could be trying to form connections to PUT the data. There are retries therein in case of connection issues (which do happen, as we've seen with the storecost stuff).
(Which, btw, @Vort, I saw your other error; it's the same class of issue (we failed to send), so hopefully it should be addressed by the upcoming fix.)
Unable to upload files. I have done a cleanup several times, and although I successfully got tokens from the faucet, I cannot upload files. Always error 13.
macOS Monterey 12.6.9
imac27@iMac-de-Imac27 safe % safe files upload /Users/imac27/Desktop/safecloset
Built with git version: 26c3d70 / main / 26c3d70
Instantiating a SAFE client…
Connected to the Network

Total number of chunks to be stored: 4
Error:
0: Transfer Error I/O error: Permission denied (os error 13).
1: I/O error: Permission denied (os error 13)
2: Permission denied (os error 13)

Location:
sn_cli/src/subcommands/files.rs:179

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
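For what it's worth, os error 13 is a filesystem permissions error, so a couple of quick checks might narrow it down (paths assumed from the defaults mentioned in this thread):

ls -ld ~/.local/share/safe ~/.local/share/safe/client   # check ownership and write permission
sudo chown -R "$USER" ~/.local/share/safe               # reclaim it if anything is owned by root, e.g. after a sudo run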
One minute of waiting is too much simply to make a connection.
I don't know exactly how resources are allocated, but just from looking at the client output it looks like some part of the algorithm has concurrency way less than 100.
It is not necessarily a problem, but it is better to understand completely what is happening.
atm it's straight up: node says price is 10, we pay 15.
If the price changes, we still pay 50% over that just now.
As mentioned above, there's a bunch more logic to come to smooth all this out.
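Concretely, with illustrative numbers only:

quote=10                        # price the node reports
payment=$(( quote * 3 / 2 ))    # we pay 150% of the quote, i.e. 15
echo "quote=$quote payment=$payment"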
It's hard to say why connections are not forming; it could be either the client or the node suffering here.
There's no fault detection yet, if that's what you mean? I'm not sure we should be pruning underperforming nodes; that's a bigger question of keeping things open to all comers.
Yeah, and just thinking that now, when we have a network that is open for everyone to join as a node, there might be poor connections, silly setups, and even outright malicious nodes too. Several years ago some testnets were taken down by bad actors, if I remember right. Maybe that's not so likely a cause of these problems now, though.
My take on things so far regarding the future network: I think there need to be tests run by the client during setup to determine CPU capability and max upload speed. Then, using those metrics, an optimal concurrency and batch size can be automatically configured.
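As a crude illustration of the idea (nothing like this exists in the client today; only the -c and --batch-size flags from this thread are real, and Movie.mp4 is a placeholder):

CORES=$(nproc 2>/dev/null || sysctl -n hw.ncpu)   # core count on Linux or macOS
safe files upload -c "$CORES" --batch-size 4 Movie.mp4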
@Josh can I get your Node11 logs? It's the one making bank according to your charts (which are super neat).