Good idea. I’ll work on it later tonight.
By the way, what’s the situation with self-encryption at the moment? Are these files we upload now encrypted?
Been testing from my cloud instance… if I stop tasting whisky I’ll try from my Sydney connection… a “luxury” Aussie 50Mb connection, third world pretty much anywhere else…
Here is a container with 2000 x 2MB files:-
“safe://hyryyryubtjfaokg9ag58a1j3jdxe4hutcehcrdikygketfmhkw7rcjptuynra?v=hhg7m7s4n9qibzce5oyidzsykpq9mqokczcr9db8o61m8i4drum6o”
To download the whole lot do:-
safe files get "safe://hyryyryubtjfaokg9ag58a1j3jdxe4hutcehcrdikygketfmhkw7rcjptuynra?v=hhg7m7s4n9qibzce5oyidzsykpq9mqokczcr9db8o61m8i4drum6o"
Then if you want to check the validity of the download you can do this:-
cd into the directory
Generate a file containing the md5sums of the downloaded files (globbing 2MB_* so the checksum files themselves aren’t included):-
md5sum 2MB_* > md5sums_downloaded
Then see if the md5sums are the same between what I uploaded and what you downloaded:-
diff md5sums md5sums_downloaded
If the md5sums of the downloaded files match the files I uploaded, diff prints nothing and just returns to the prompt. Any mismatch shows up as output naming the affected file.
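If you want it all in one go, something like this should work (a sketch, assuming the files keep the 2MB_* naming and my md5sums file downloads alongside them):

```bash
# Download the container, then verify every downloaded file against my checksums.
safe files get "safe://hyryyryubtjfaokg9ag58a1j3jdxe4hutcehcrdikygketfmhkw7rcjptuynra?v=hhg7m7s4n9qibzce5oyidzsykpq9mqokczcr9db8o61m8i4drum6o"
cd <downloaded-directory>
md5sum 2MB_* > md5sums_downloaded
diff md5sums md5sums_downloaded && echo "all checksums match"
```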
Unfortunately I didn’t keep any stats from the last test, but I have a feeling both uploads and downloads are quicker! I’ve seen others comment along the lines of “it’s nice and fast”, but are there any official figures? I know we’re focussing on reliability and seeing where it falls over, but is there any chance the work in those directions has had a nice side effect that is being measured?
Yep, all files >= 3072 bytes are, since a minimum of 3 chunks is required to do self-encryption, and there’s a min size of 1024 bytes per chunk!
(But you don’t get more than 3 chunks up to 3 MiB, since it will pack up to 1 MiB per chunk. Edit: yep, confirmed.)
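To make the size rule concrete, here’s my reading of it as a chunk-count function (a sketch of the rule as described above, not the actual self_encryption code; the ceiling division above 3 MiB is an assumption):

```bash
# <3072 bytes: too small to self-encrypt; 3072 bytes..3 MiB: exactly 3 chunks;
# above 3 MiB: chunks capped at 1 MiB each (assumed ceiling division).
chunk_count() {
  local size=$1 mib=$((1024 * 1024))
  if   (( size < 3072 ));     then echo "too small to self-encrypt"
  elif (( size <= 3 * mib )); then echo 3
  else echo $(( (size + mib - 1) / mib ))
  fi
}
chunk_count $((2 * 1000 * 1000))   # one of the 2MB test files -> 3
```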
How are the node storage levels looking?
Just wondering how far we have to go?
Nice. One file 2MB_701 is listed in your md5sum file but missing from the upload. Did you get an error?
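If anyone wants to spot gaps like that themselves, comparing just the filename columns of the two checksum lists does it; a sketch:

```bash
# Filenames listed in md5sums but absent from md5sums_downloaded.
comm -23 <(awk '{print $2}' md5sums | sort) \
         <(awk '{print $2}' md5sums_downloaded | sort)
```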
Currently:
metricbeat is happy, no excessive mem/CPU etc. I can dl all the test data, and seemingly some nodes are now exceeding 50GB stored.
It’s worth noting too that these nodes are actually running on smaller droplets than previously: 2vcpu/2GB as opposed to 2vcpu/4GB. So I’m doubly happy with this.
node-16: 29G total
node-13: 30G total
node-12: 29G total
node-15: 29G total
node-18: 29G total
node-23: 44G total
node-11: 29G total
node-22: 44G total
node-24: 29G total
node-14: 37G total
node-10: 51G total
node-3: 29G total
node-2: 29G total
node-9: 44G total
node-25: 52G total
node-7: 37G total
node-21: 44G total
node-20: 29G total
node-6: 37G total
node-5: 44G total
node-17: 37G total
node-1: 29G total
node-4: 44G total
node-8: 44G total
node-19: 54G total
What happens when I upload a folder with files but it wasn’t finished, so I closed the cmd window, and today I’m trying again with the same folder name and files?
What’s the sampling interval for total transferred @joshuef ?
I’ve upped about 13GB in three batches of various sizes.
The first batch of 1.6GB took almost three times as long as in the previous testnets. Might be poor wifi, or maybe the load from other uppers… The other batches were about as quick as in the previous run.
No errors. I haven’t downloaded anything yet.
No problems with memory or anything.
What’s the sampling interval for total transferred
last 15 minutes
the uploaded data will be deduplicated on the network. If we were paying for data, you would not have to pay again, but you may want to rerun the whole upload to be sure it’s all there.
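So re-running the same put is the easy fix; a sketch, assuming the folder was uploaded recursively in the usual way:

```bash
# Chunks already stored are deduplicated, so only the missing ones
# actually get written on the re-run.
safe files put ./myfolder --recursive
```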
I’ve updated the script. It now counts errors.
So that is for the person who originally uploaded it using the same “quote”/“payment DBC” as proof of payment.
For another person uploading the same data, the last I heard they would also pay for it even though it’s already there. The elders/nodes are still doing work to receive the data, and it also increases the revenue the network gets.
Others should first do a check to see whether the chunks already exist and save themselves some SNT. I am sure any app doing uploads would do such a check.
And it’s easy: just do a dry-run upload and you see the xorurl, then safe dog that xorurl and you see if it’s there.
Now that I think of it, it’s easy for anyone to make filters for Safe! Someone making filters could collect all the content that should go in a filter, do a dry-run upload, and add those xorurls to their filter.
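Something like this, as I understand the CLI (flag names may differ between versions):

```bash
# Dry run: compute the xorurl locally without actually uploading or paying.
safe files put ./myfolder --recursive --dry-run
# Then ask the network about that address to see if the data is already there.
safe dog "safe://<xorurl-from-the-dry-run>"
```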
Correct
aye
Great work everyone pushing this test hard and well done team for crushing so many bugs.
I’m looking forward to running a node when we get to testing the new node join fixes.
Assuming success, next test might be:
- node join but no splits
- node join with splits
Or is there still stuff to try without joins?
EDIT: I guess we may need to test pay to upload before getting back into join territory.
My script’s verify function now works! The script downloads while also uploading the next file in parallel, and verifies each upload with an md5 checksum. All is logged. Currently download errors are not caught or logged - not sure if that’s going to happen often, but I can add that in later if useful.
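For anyone curious, the core loop is roughly this (a simplified sketch, not the full script; file names, log paths, and the xorurl extraction are illustrative):

```bash
# Upload the next file in the background while downloading and verifying
# the previous one, logging each result.
prev_file="" ; prev_url=""
for f in 2MB_*; do
  safe files put "$f" > "put_$f.log" 2>&1 &   # next upload runs in parallel
  pid=$!
  if [[ -n "$prev_url" ]]; then
    safe files get "$prev_url" "dl_$prev_file"
    a=$(md5sum < "$prev_file"    | cut -d' ' -f1)
    b=$(md5sum < "dl_$prev_file" | cut -d' ' -f1)
    if [[ "$a" == "$b" ]]; then echo "OK   $prev_file" >> verify.log
    else                        echo "FAIL $prev_file" >> verify.log
    fi
  fi
  wait "$pid"
  prev_url=$(grep -o 'safe://[a-z0-9]*' "put_$f.log" | head -1)
  prev_file="$f"
done
```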