HeapNet2 [Testnet 12/10/23] [Offline]

That’s great you’ve got some now.

I have to say I still only have 12 nodes with chunks, so no improvement in 12 hours.

I think it’s time to upload some more data to see if the other nodes are broken or just haven’t been needed yet.

1 Like

If the file is working, I think we're just counting the datamap in the upload count but not in the download count, i.e. a bug in what we print out (but not in downloading itself), if that is indeed the case.

Honestly not sure on that. cc @roland, maybe you know?

:+1:

Yeh that’s fair enough.

@chriso what are your thoughts? I’d assumed the current safedir was more of a vendor location than a general one. Am I off?

Is there a better “general safe tools” location?

I wonder if it was all chunking up until then. I'll dig into the logs on Monday for a clearer view there, thanks @happybeing.

(If it was chunking all that time (27GB is a lot), we'll need to get it uploading after certain quantities of chunks are done, vs waiting on full chunking to complete before uploads start… that's probably just a better way in general, as chunking large files can take a while. cc @qi_ma, one for us to think over! :bowing_man: )
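
In the meantime, a client-side workaround is to interleave chunking and uploading at the file level by uploading one file at a time, so each file's chunks go out before the next file is chunked. A minimal sketch, assuming safe files upload accepts a single file path as well as a directory:

# Upload files one at a time so chunking and uploading alternate per file
for f in Videos/*; do
  safe files upload "$f"
done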

4 Likes

What is the tools directory being used for currently?

1 Like

I have a 4.3GB folder going. Could be done in 5 hours.

Does anyone have any ideas on how to set an ideal batch size? I'm now using 50. I'm on:

  • HP ProBook 450 G5
  • 8.0 GiB RAM
  • Intel® Core™ i5-8250U CPU @ 1.60GHz × 8
  • Ubuntu 22.04.3 LTS

Ping @Southside @happybeing @josh @TylerAbeoJordan
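
For reference, the batch size is passed per upload on the command line. A hedged example, assuming a --batch-size option as in recent sn_cli builds (worth confirming with safe files upload --help on your version):

# Assumed flag; confirm it exists with: safe files upload --help
safe files upload <path-to-folder> --batch-size 50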

1 Like

@Josh’s graphing tools: GitHub - javages/ntracking (Safe Network Nodes Stats & Tracking).

It's my fault: I helped him write a script around his clever Python and stuck it in {SAFEHOME}/tools.

It's trivial to change.

1 Like

Thanks for the clarification. I agree that we should not complicate things during testing, and should preferably keep the files in one place.
Does this mean that I should create a safehome/client directory in the default location?

However, I was thinking that if we have the option to download files either to a default location or to a directory of our choice, then that directory could be on another drive, and I wanted to test that.

Perhaps the TestNet manual should specify a default location and say that it is best not to change it for ease of testing?

1 Like

No, I use {SAFEHOME} as a catch-all for the various places that the /client and /node dirs will be created in, depending on which OS is in use.

The safe and safenode binaries will create these dirs for you.
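
On the Linux machines in this thread those defaults come out as below (paths taken from the client and node output later in the thread; macOS and Windows use their own per-user data locations):

~/.local/share/safe/client   # client data, including logs such as client/logs/log_2023-10-13_...
~/.local/share/safe/node     # one subdirectory per safenode, each with its own record_store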

1 Like

I’m not sure if you’re aware, but there’s a commit already in that repo that changes the location: changed location as Joshuef suggested · javages/ntracking@3f2a1da · GitHub

Worth bearing in mind that ~/.local/share is supposed to be for storing data related to your app, so it’s probably not the best location for supplemental tooling. It might be a reasonable location for them to output any data to, though.
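
For context, ~/.local/share is simply the default value of XDG_DATA_HOME in the XDG Base Directory convention, which is why per-user app data lands there:

echo "${XDG_DATA_HOME:-$HOME/.local/share}"   # resolves to ~/.local/share unless overridden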

5 Likes

Hey, I think the brackets around the dir/filename are not necessary there. Also, you should pass in the XorName of the file that you're trying to download. You can obtain this XorName after you've successfully uploaded a file; e.g., from your previous upload, you should copy the 9b69.. part.

Uploaded 1677654404651_0.mp4 to 9b69ce01fdd7c47848249f5effce737a19f58414a9abad8ea26b93e2f9c4e8c9

Now, your final command should look something like this,

safe files download E:\gggg\Videos\1677654404651_0.mp4 9b69ce01fdd7c47848249f5effce737a19f58414a9abad8ea26b93e2f9c4e8c9

Could you try running the above and see if it helps?
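
So the general shape, matching the commands used elsewhere in this thread, is:

safe files download <filename> <xorname>

where <filename> is the name to save the download as and <xorname> is the hex address printed at upload time.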

3 Likes

I think specifically for the location of downloaded files, an environment variable override would definitely be useful.

The default location is logical, in the sense that it’s going to the same location as other data related to the client, but it’s a bit cumbersome for something you would want very frequent access to, especially on Windows.
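
Until something like that lands, here is a rough sketch of the idea; the variable name is purely illustrative and the download path is an assumption, so check where your client actually writes files first:

# Hypothetical usage if an env-var override were added (name is illustrative only):
#   SAFE_DOWNLOAD_DIR=/mnt/storage/safe-downloads safe files download <filename> <xorname>
# Workaround that works today: symlink the client's download directory to another drive.
DOWNLOADS="$HOME/.local/share/safe/client/downloaded_files"   # assumption: adjust to your client's actual path
mkdir -p /mnt/storage/safe-downloads
mv "$DOWNLOADS" "$DOWNLOADS.bak" 2>/dev/null                  # keep any existing contents out of the way
ln -s /mnt/storage/safe-downloads "$DOWNLOADS"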

4 Likes

Prep to retry 27GB Videos/ upload with the latest client (v0.83.47) and 300 SNT in my wallet!:

[snip]
Verifying transfer with the Network...
Successfully verified transfer.
Successfully stored cash_note to wallet dir. 
Old balance: 200.000000000
New balance: 300.000000000
Successfully got tokens from faucet.

Now kicking off the upload again (with a new client version 0.83.47):

$ safe -V
sn_cli 0.83.47

$ time safe files upload Videos/
Logging to directory: "/home/user/.local/share/safe/client/logs/log_2023-10-13_20-42-52"
Using SN_LOG=all
Built with git version: 1447a42 / main / 1447a42
Instantiating a SAFE client...
[snip]
Chunking 129 files...
[00:03:40] [##>-------------------------------------] 9/129  

Chunking complete. This stage has only just started but looks much healthier than before. Maybe it helps to have SNT in my wallet :thinking:

Input was split into 53451 chunks
Will now attempt to upload them...
⠤ [00:05:50] [>---------------------------------------] 91/53451

I’m not sure what this means…

⠉ [00:13:05] [>---------------------------------------] 191/53451               
For record a34cd9a7cfb516310caa98fa6e08544034eb93c3e60093b15f98b169a3b6e49f task QueryId(488), fetch completed with non-responded expected holders {PeerId("12D3KooWLuGL2D9qbjZfuybncNDqRp5XpdL5gAwAwLuTefvrsKbj")}
⠂ [00:35:14] [>---------------------------------------] 485/53451               
For record c22f1c4dc69d9f5a4ab2dcbbccbc80358993e6ed8af26e53d6d01c09f1842e5b task QueryId(1182), fetch completed with non-responded expected holders {PeerId("12D3KooWG5Waxm5rqRVb4QiZFqAyWFfHDEAnEK1yUtjAVbh5gDse")}
⠒ [00:41:11] [>---------------------------------------] 564/53451   

11 hours on from the above and it is only at 5061/53451, though otherwise it looks OK.

This means it is proceeding much more slowly, possibly because it is actually uploading (using the latest v0.83.47 client and SNT in my wallet :man_facepalming:).

I'm seeing the “For record … fetch completed with non-responded expected holders” error quite regularly (say one per 30 mins), but otherwise it looks like it is progressing OK as far as I can tell. It will take a further ~90 hours to complete. :grimacing:

Data Usage

I had 70GB of mobile data remaining yesterday and it has used 30GB overnight, so I'll stop it now as it doesn't renew until November!

A very rough guess then is that it will take >~ 300GB to upload 27GB of files :thinking: That’s not a reliable calculation but another area to begin poking soon! Anyway, I’ve now killed it. :cold_sweat:
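
For what it's worth, the back-of-envelope behind that guess (treating the overnight 30GB and the 5,061 uploaded chunks as covering roughly the same window): 30GB / 5,061 chunks ≈ 6MB of traffic per chunk, and 53,451 chunks × 6MB ≈ 315GB, which is where the >~300GB figure comes from.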

Observations:

  • after chunking 32/129 files the log shows “Client inactivity”, so nothing else happens until all 129 files (27GB) are chunked. Is that desirable?
  • data use and upload time are areas to look into at some point
  • nodes do join eventually and are not failing (so far, after 39 hours), but I think the ones taking a long time appear not to be near any data in the address space, based on them having very few chunks (6, compared to an average of 199; earlier participants all seem to have many times more, ~150+). So I think we need to understand both why some take a long time to join and why they don't get many chunks when they do.
5 Likes

UPDATE 27 hours: I have 17/20 nodes earning and none killed/lost! :partying_face:. Earnings are 184,000 nanos with a highest earning node of 54,000 nanos and a total of 3054 PUTS. RAM use is 32MB-50MB averaging 39MB which is awesome.

14 Likes

129/160 nodes earning, no nodes dead/killed yet.

Earnings are 1,530,828 nanos all in.

Mean RAM is about 50MB across all VPSs.

11 Likes

I now have 21 nodes with records. So that's a big improvement. I don't think it can just be me uploading some files.

Regarding an optimal batch size, I don't know. I just use whatever the default is. It probably doesn't matter because I'm only uploading 11 files at a time: 10 x 1MB files and a file of less than 1KB with the md5sums.
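
For anyone who wants to reproduce that kind of test batch, something along these lines would do it (the filenames mirror the ones listed further down; the run number is arbitrary):

# Generate 10 x 1MB files of random data, plus a small file of their md5sums
for i in $(seq 1 10); do
  dd if=/dev/urandom of=Run_55_1MB_${i} bs=1M count=1
done
md5sum Run_55_1MB_* > Run_55_md5sums_10x1MB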

6 Likes

Another vdash update… v0.11.3. With this update you can select a node in the Summary list and view it in full when you switch to the Node dashboard by pressing ‘n’. When you switch to the Summary dashboard, the last viewed node will be selected in the Summary list.

11 Likes

@joshuef could you take a look at this log if you have time:

safe files download safe.log 37d06b5aa9712c6e9a1cf0b90aad05bc2b0f21b7cb3db60f516e459b8b92be26

It is from downloading this file:

safe files download AnarchyInTheSouthside.mp3 eaa0b39813183323b491e6715a0b4ea9f3cfdeede6f04a73c5b849b5210ad20d

I can download it just fine on an Oracle VPS and from a good internet connection of 200 up/down, but it's failing on my home connection.

Also, another user who is not on the forum yet, along with @Southside and @Toivo, is unable to download it at home.

A similar thing is happening with other large files that are fine on my Oracle VPS but not working from home connections.

Just curious why it's failing for some and not others.

5 Likes

Count your nodes with records:-

ls $HOME/.local/share/safe/node/ | while read f; do echo ${f} ; ls $HOME/.local/share/safe/node/${f}/record_store | wc -l ; done | grep -v 12D | grep -v '^0' | wc -l

But I’m sure there is a more elegant way of doing it.
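
One slightly tidier variant, assuming the same default node directory layout used above:

# Count the records held by each node, then tally how many nodes hold more than zero
for d in "$HOME"/.local/share/safe/node/*/record_store; do
  ls "$d" | wc -l
done | grep -cv '^0$'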

I’ve not seen this error before when uploading files:-

Input was split into 41 chunks
Will now attempt to upload them...
For record 6dd9d726ab28acca66cf033215b70bf0ee0348eaca8fee27540cd99290e09611 task QueryId(68), received a copy from an unexpected holder PeerId("12D3KooWERifuKQG7SnBd2RX7Lbymfvmih8SDs4N67EqgdpwDiSs")
For record 6dd9d726ab28acca66cf033215b70bf0ee0348eaca8fee27540cd99290e09611 task QueryId(69), received a copy from an unexpected holder PeerId("12D3KooWERifuKQG7SnBd2RX7Lbymfvmih8SDs4N67EqgdpwDiSs")

What could that possibly mean?!

4 chunks failed to upload on the first attempt; 2 of those failed again on the second attempt; 1 still failed after that but was then uploaded.

These are the files if that helps:-

Uploaded Run_55_1MB_1 to 17b7703d25aa23af2d06cc1cc1e917631d7b9377120a563d55ebf66d9365ac24
Uploaded Run_55_1MB_8 to 24883d4302e5db90083b56eb085f76da14b51e1d1b90373ed7bf46cf0133498d
Uploaded Run_55_1MB_5 to 2e686093d5f140da5c1ccaa91f2f419f1e59916d4ecbe39867694f2cf1d94eae
Uploaded Run_55_1MB_9 to 32b5c18081058f6653a8b8b54b7c5f7145d0e31a98b39c307d82833f1b184feb
Uploaded Run_55_md5sums_10x1MB to 4e25347ae1393246bde8790867ea578edcb878dcbf6c5cc2c4c4a27ca82f3c19
Uploaded Run_55_1MB_2 to 57f4b41ff89c216b261f2f8094fe91a10e15b93fbad4974ff608e59162e78271
Uploaded Run_55_1MB_6 to 5e64903f38d07213e55b927a1ba6be9c207c2a9ff89883c81c1d66159509a761
Uploaded Run_55_1MB_10 to 65017abe44deeb2aa92eb01a4cbaa9adce14f9e534d54568f8789afb3440fd2b
Uploaded Run_55_1MB_3 to bc869df42cb86810d8be70906163f410424161fd9d635ccbbaba79c6dca55dfb
Uploaded Run_55_1MB_4 to df4ba62987020e64265ca7b7d4028e9978cecb620c999c2bc6c17cfd7014a04d
Uploaded Run_55_1MB_7 to e15578a26f9a176167b328918ba8eb7df43d4fb5b571a7687e42e149206bee7e

and the log is attached in case that helps.
safe.log.zip (103.6 KB)

2 Likes

From what I have seen, upload speed is determined by HDD speed and CPU performance. You can try moving everything to an SSD, or putting the source files and /tmp on different physical drives.
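
If you want to check whether the disk really is the bottleneck, a crude sequential-write test on the drive holding the source files gives a ballpark figure (it writes and then removes a 1GiB scratch file):

dd if=/dev/zero of=./ddtest.bin bs=1M count=1024 conv=fdatasync status=progress
rm ./ddtest.bin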

3 Likes

I think it's more about your connection to the network. I use a batch size of 5 to 10; anything more and I start getting many failures. But I'm on a shared home connection, so quite limited.

2 Likes