HeapNet2 [Testnet 12/10/23] [Offline]

If possible, could I get a list of all your 40 nodes, with the 10 nodes that don't hold any records marked?
And one or two logs from among those 10?
Thank you very much.

6 Likes

Yes, I have had several successes in a row now.

DL speed is about 40 MB/s.
Speedtest gives a result of 385 MB/s.

I think the 40 MB/s is pretty good though.

6 Likes

safe 0.83.47

Client (read all) download progress 3105/4070
Error downloading "ubuntu.iso": Chunks error Chunk could not be retrieved from the network: 72a020(01110010)...

I have a decent connection and tried on a new, fairly decent laptop; I'll go give it a shot on the PC.

ubuntu.iso a5a9be00406ce5c57fc7e43fa32440db8ba2d35b97627c9f56371282b458220a

I did upload it with the OP version of safe with no issues, but could that be the problem?
(I'm using the latest version to download.)

Edit: it failed on a different chunk with the PC.

Error downloading "ubuntu.iso": Chunks error Chunk could not be retrieved from the network: b1f760(10110001)...

Starting a new upload with the latest safe to see if that is the cause.

1 Like

Yes, absolutely.

I've been lazy, though, and given you all the node logs in an 80MB .tar.gz file. But it might also be helpful to have them all, so that's my excuse.

Uploaded node_logs_202310151006.tar.gz to 3c7c5b2afd1aa66495ba72e95b99b03c7be0b56670f59fcaeb4a0ac35fd56306

Here is the list of nodes and their record count:-

Uploaded node_list_with_record_count to 4d244a3c31eece8ab9641824e81c0aacafaac3d1067f0c287858698ae2ad7e20

The ones that don't have any records yet are clearly visible, e.g.

12D3KooWFusK11hHrEGR9jWfba3R28KxG7LzecTdzs1gFYi4Bks5
0

If you can't get these files, or the 80MB file is a problem, let me know and I'll dig out just the couple of logs you asked for!

But I'm hoping that with the new stability of uploading and downloading you can get them this way, instead of the soon-to-be old-fashioned uploading to Google or Dropbox etc.

I’m now up to 32/40 with records.

6 Likes

I think you are confusing megabits (Mb) and megabytes (MB).
48 MB/s × 8 bits per byte = 384 Mbps … that download is utilizing 100% of the available speed.

8 Likes

Yeeeeaaahh… something in those results felt a “bit” off.

Great news! :beers:

4 Likes

Attempted to catch up on the recent metrics PRs and the newer endpoints exposed by the safenode process.

Just got my node up and running for the past hour or so.

A few observations:

  • Didn't set up Docker inside the LXC for the newer Prometheus and Grafana functionality that was added recently.
    • Even though Docker can be set up to run inside an LXC, it requires a few extra configuration workarounds, and I wanted to avoid all that.
  • I don't run a Prometheus server at home for a time-series DB etc., so I tried recompiling safenode with the otlp and open-metrics feature flags on.
    • I set OTEL_EXPORTER_OTLP_ENDPOINT accordingly, but was not able to trigger any data arriving at the URL set in that env variable from the safenode process. My guess is this functionality isn't fully enabled yet across all the different components. I ended up not spending much more time investigating whether the OTLP exporter route was functional or not.
  • In order to get the newer metrics exposed from http://0.0.0.0:<randomPort>/metrics, I had to parse the logs to figure out the port that safenode actually bound to (first sketch after this list).
    • Is this currently configurable, or will it be in the future, when starting safenode via the CLI?
  • I was able to parse the metrics from the /metrics endpoint (Prometheus / OpenMetrics format), then convert and store them in the format of my choice for my backend.
  • When kicking off the safenode process, I couldn't find any SAFE_PEERS instructions in the OP, and I thought I recalled reading at some point that this was maybe handled automatically by safeup and the other setup pre-reqs (I could be way off here). Either way, when started from the command line with --root-dir --ip --port --rpc --log-output-dir, my safenode process couldn't really connect to any existing peer to bootstrap. After digging around in this post I stumbled upon the URL for the network-contacts file, and was able to manually kick-start the safenode process after setting the SAFE_PEERS env var by hand (second sketch after this list). Maybe I missed something on this topic here, for a cleaner and simpler auto-bootstrap process?
  • I am not sure yet whether the “No data” shown in the Grafana dashboard above will populate or not, as I have not received any chunks in the record_store directory etc. I will have to re-verify the parser logic, as well as check whether there are incrementing counts reflecting GET or PUT requests on the /metrics endpoint, if available.
  • Added panels to show libp2p Swarm, Identity, and KAD statistics as well, from the /metrics endpoint.
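
For anyone wanting to try the same, this is roughly how I find the metrics port and pull a snapshot. It's only a sketch: the grep pattern is an assumption (match whatever line your safenode log actually prints when the metrics server starts), and the log path is just whatever you passed to --log-output-dir.

# Guess the metrics port from the node's log, then scrape a snapshot.
# The "metrics" grep pattern is an assumption - adjust it to the line your log really prints.
LOG_DIR=~/safenode-logs                                                  # hypothetical, use your --log-output-dir
PORT=$(grep -i "metrics" "$LOG_DIR/safenode.log" | grep -oE '[0-9]{4,5}' | head -n1)
curl -s "http://127.0.0.1:${PORT}/metrics" | head -n 20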
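
And this is roughly how I ended up bootstrapping manually. The network-contacts URL is a placeholder (use the one linked earlier in this thread), and all the flag values are just examples; only SAFE_PEERS and the flag names themselves come from my actual invocation.

# Fetch the network-contacts file (placeholder URL) and export a peer as SAFE_PEERS
# before starting the node.
curl -s "https://<network-contacts-url>" -o network-contacts             # placeholder URL
export SAFE_PEERS=$(head -n1 network-contacts)                           # assumes one multiaddr per line
safenode --root-dir ~/safenode-data --ip 0.0.0.0 --port 12000 \
         --rpc 127.0.0.1:12001 --log-output-dir ~/safenode-logs          # example values only
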
10 Likes

I've been switching between Wi-Fi and cable, and it seems to me that the nodes keep up through these switches, but any ongoing upload or download aborts because of it.

1 Like

Here is something new I’ve not seen before. Or if I have I’ve just not noticed.

RAM usage seems to be correlated with GETs. On this node it shot up to 146MB, but once the burst of GET activity subsided the RAM usage went right down again. Though maybe not all the way down.

It's not just this node - I can see it on a couple of others, but this is the one where it is most pronounced.

Maybe this provides an insight into the memory leak? If it still exists - things seem a lot better with this testnet dedicated to looking at RAM. Ironic, but often the way!

Here is the log in case it is useful:-

Uploaded safenode.log.gz to b45c9ddf0f639be6e8d51e327f42261a554c76b4e189c95b4e6c51a754887b3d

I’ve been having a look through the log myself.

This heavy GET work seems to start at 2023-10-15T02:57:27

This only lasts for a couple of minutes. The RAM usage jumps from 60MB right before it starts to 143MB by the time it finishes, and then subsides a bit, but not totally back to where it was before.

9 Likes

For interest, that can be due to heap fragmentation rather than a leak.

The way to investigate it is to identify the allocations leading to the increase (e.g. using heaptrack) and then examine the code to see whether there's a way that memory might not be getting freed.
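
Something like this, as a minimal sketch, assuming heaptrack is installed and you're happy to restart the node under it (the safenode flags here are just example values):

# Run the node under heaptrack so every allocation gets recorded.
heaptrack ./safenode --root-dir ~/safenode-data --log-output-dir ~/safenode-logs   # example flags only
# Let it run through a heavy GET period, stop it, then analyse the capture:
heaptrack --analyze heaptrack.safenode.*.gz   # exact file name/extension depends on your heaptrack version
# (or open the same file in heaptrack_gui to see which call stacks keep the most memory alive)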

4 Likes

Thanks, got them smoothly. :grinning:

Will have a detailed check on them.

Thanks again!

11 Likes

Almost there, guys!! :clap: This one is super smooth.
I just had a hard time getting all the chunks for this one:

Client (read all) download progress 549/1801
Error downloading “The.Graduate.1967.REMASTERED.720p.BluRay.900MB.ShAaNiG.mkv”: Chunks error Chunk could not be retrieved from the network: 7f3e5a(01111111)…

After 5 attempts, each time with a different failing chunk, I could finally download it :partying_face:

Client (read all) download progress 1801/1801
Client downloaded file in 96.179138041s

11 Likes

This one still downloaded without problems.

10 Likes

The quiet in here is deafening, which means we are finding less and less to moan about lol.

I am scraping the barrel here, but I cannot download this Kali Linux ISO at home or in the cloud. The upload was reported as successful. If anyone fancies trying a 4GB download, please give it a go and report back :slight_smile:

safe files download kali-linux-2023.3-installer-amd64.iso 1f5254f8dec87b7e4f17667cb830a043f7ac3bfc3cd2f9e1442693a3219acaf4

it is giving me this error

Client (read all) download progress 7872/8000
Client (read all) download progress 7873/8000
Error downloading "kali-linux-2023.3-installer-amd64.iso": Chunks error Chunk could not be retrieved from the network: d4823b(11010100)...

6 Likes

Firing up another 50 nodes on a fresh VPS, just because.

4 Likes

My 2GB VPS runs out of memory after a few hundred chunks on that Kali download.

2800/8000 so far from home

1 Like

I just tried it on my fresh VPS with 6GB RAM and got the same failure, so I don't think memory is the problem, unless you run out :slight_smile:

I think that Kali upload went wrong, as I can't download it on anything, but I'm not worrying too much, as that 6.4GB Ubuntu image has been going up and down like a little pink piston :slight_smile:

Let's see how it gets on in the next testnet.

3 Likes

Also, I've had to sign the faucet offenders register. I'm ready for hyperinflation.

🔗 Connected to the Network
Requesting token for wallet address: 9711da41df76a59ff1789805e2b42c9718bf3ffc8ea7608f55a8e1673e5f59b93a89090cd25a6f28d550095711bae6ee...
Successfully parsed transfer.
Verifying transfer with the Network...
Successfully verified transfer.
Successfully stored cash_note to wallet dir.
Old balance: 31798.924070228
New balance: 31898.911219206
Successfully got tokens from faucet.

Has anyone had this happen? The last chunk of a 9613-chunk upload keeps failing.

Edit: nearly 3 hours later and the same chunk is still trying.
Edit2: I killed it after nearly 6 hours.

7 Likes

I think I had something in a loop two nights ago for the last 3 chunks, but after a while it sorted itself out.

I went to bed, and in the morning it had uploaded, so I'm not sure how long that went on for.

3 Likes