ClientImprovementNet [22/09/23 Testnet] [Offline]

:thinking:

Hmmm, maybe “ClientImprovement” was about testing the ingenuity of us test clients rather than the code!

My node is still waiting for its first payment. Come on folks, get uploading!

2 Likes

try this
safe files download main.zip 48530269f54c46dc11b5f74717604c24bd5b1980da997e9e794c4e74dd82134f

Unzip it to get @Josh’s Rewards Plotting + some extras. WIP, YMMV.

4 Likes

I seem to be getting payments, as my wallet balance is going up, but they aren’t showing in vdash. vdash is shouting about records arriving, though.

3 Likes

Victory is MINE!

Node's rewards wallet balance (PeerId: 12D3KooWKrSZSMU8d2GqiZue4yZRTieNFhV4FXuthvLWpgF6DMfB): 0.000000044

5 Likes

Went for it. I think I am up and running, but I am in the rear end of Alabama where the internet still wears a mullet, so I am mostly guessing right now. Time will tell.

6 Likes

Dang! Although from @storage_guy’s report it looks like it’s really a vdash bug. Never trust your own code until you’ve tested it.

Expect an update once I’ve finished my coffee :grimacing:

7 Likes

Second batch ended in failure [ -c 20 --batch-size 20 ]

After 579.027326138s, verified 233 chunks
======= Verification: 190 chunks were not stored. Repaying them in batches. =============
Failed to fetch 20 chunks. Attempting to repay them.

and then

Error: 
   0: Chunks error Failed to get find payment for record: d3af8b2ed8770d475e5d2376adc72afb64b26245c164803963270736776b4b26.
   1: Failed to get find payment for record: d3af8b2ed8770d475e5d2376adc72afb64b26245c164803963270736776b4b26

Location:
   sn_cli/src/subcommands/files.rs:297

Will try again with more conservative -c 50 --batch-size 10
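Side note on the numbers in that verification output: 190 unstored chunks repaid at a batch size of 20 works out to about ten repayment rounds (ceiling division), which is why these retries take a while. A rough sketch of the arithmetic:

```shell
# Ceiling division: batches needed to repay 190 failed chunks at batch-size 20
FAILED=190
BATCH=20
ROUNDS=$(( (FAILED + BATCH - 1) / BATCH ))
echo "$ROUNDS repayment batches"   # -> 10 repayment batches
```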

Only because I wasn’t at home, I’m afraid… I hope to have the richest node on the network now that I have had a good head start.

Node's rewards wallet balance (PeerId: 12D3KooWFVGoBRJJoR57MwnLwPH6kdLvtbAVPrNnzsVwV2pQrTCm): 0.000000537
5 Likes

Ouch. That hurts. Your majesty.

4 Likes

So what was the role of SAFE_PEERS? Meaning, what difference is it supposed to make if I use one or the other? And was there a possibility of a traffic jam if everyone uses the same one?

There were hints about that yesterday which sounded plausible. I am using one of the bottom entries on the list in the OP.

@dirvine replied in another thread that setting SAFE_PEERS will soon be redundant, as the client will grab a value for it automatically.
Presumably there will eventually be a method to pick a random valid value for SAFE_PEERS that will be invisible to Joe User.

5 Likes

It’s just a known network node to get some other peers from. Ideally, we should have 20 or so of them up and working solidly. Clients should randomly choose one or even ask their pals for an address etc.
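Until the client does that choosing for you, one way to pick a bootstrap peer at random in your shell. The multiaddrs below are placeholders, not real peers; substitute entries from the list in the OP:

```shell
# Placeholder multiaddrs -- replace these with real entries from the OP's peer list
PEERS=(
  "/ip4/203.0.113.10/udp/12000/quic-v1/p2p/12D3KooWPlaceholderA"
  "/ip4/203.0.113.11/udp/12000/quic-v1/p2p/12D3KooWPlaceholderB"
  "/ip4/203.0.113.12/udp/12000/quic-v1/p2p/12D3KooWPlaceholderC"
)

# $RANDOM picks a different bootstrap peer each run, so not everyone
# hammers the same node
export SAFE_PEERS="${PEERS[RANDOM % ${#PEERS[@]}]}"
echo "Using SAFE_PEERS=$SAFE_PEERS"
```

This is just a bash sketch; the same spreading-the-load effect is what a built-in random picker would give Joe User automatically.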

6 Likes

Is there a way to set SAFE_PEERS to an array of valid values?

This is bugging me:

"Could not get enough peers (8)…

3 Likes

I’ve updated vdash to v0.8.12 which has minor UI tweaks and fixes the missing Earnings.

If you re-start vdash you’ll lose some earlier data but see everything that’s in the latest log and anything new. More in this topic: Vdash - Node dashboard for Safe Network - #105 by happybeing

Edit: if you restart your node (do rm -r ~/.local/share/safe/node first) and then restart vdash, it’s pretty sweet: you’ll see a pile of PUTs arriving, then eventually costs and earnings, and your totals will be accurate.

TIPS:

  • press ‘t’ and ‘T’ to cycle forward and back through the timelines.
  • press ‘o’ and ‘i’ to zoom the time scale out and back in.
9 Likes

I’m getting this on all of the 10 nodes I have running on Hetzner

2 Likes

One of my nodes got killed after receiving 20 records.
Anyone else got the boot?

3 Likes

Given that modern home routers tend to favor/promote some packets over others (e.g. video streams), can SAFE packets get any prioritization as well? I don’t know how that all works, so just curious. Also, what about when we go to hole punching and UDP? How might that affect things?


My second batch of files succeeded after several auto-retries [ -c 50 --batch-size 10 ]

Given the large number of chunk upload failures, though, and my repeated successful uploads without retries at a lower batch-size setting in the previous testnet, I think a batch size of 10 is still too large for my upload speed/bandwidth.

Next I will try a large file with [ -c 50 --batch-size 5 ]. My aim is to upload without auto-retries and get it uploaded on the first go.

1 Like

What node version are you using?

safe@ClientImprovementNet-Southside01:~/.local/share/safe$ safenode -V
safenode cli 0.90.33

You need v0.90.34
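If you want to check this from a script, here is a minimal sketch using `sort -V` for version comparison. It assumes `safenode -V` prints `safenode cli X.Y.Z` as shown above; the CURRENT value is hard-coded here for illustration:

```shell
REQUIRED="0.90.34"
CURRENT="0.90.33"   # in practice: CURRENT=$(safenode -V | awk '{print $3}')

# sort -V orders version strings numerically; if REQUIRED is not the lowest
# of the pair, then CURRENT sorts below it and is out of date
if [ "$(printf '%s\n' "$CURRENT" "$REQUIRED" | sort -V | head -n1)" != "$REQUIRED" ]; then
  echo "please upgrade to safenode $REQUIRED"
else
  echo "version OK"
fi
```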

1 Like