RewardNet [04/09/23 Testnet] [Offline]

Yes, the disk count is just ls -1 | wc -l. I don't use the logs, in the hope that we can soon turn them way down.

My record count is also just ls and wc.

Here’s a question that might provide a route into this: does one of these winning nodes win consistently over time, or does it have a big ramp-up in non-replication records being stored? If so, does it then go back to a normal growth rate?

I don’t have the metrics for that :frowning_face:

So far the same two of mine are winning. As Joshuef noted, they are very nearly full: ~2000 records each, so…

Top 10 earners, descending:

Number: 22
Node: 12D3KooWMKdayF4JUcc3uuwTprLjWeDSjro6XPsV15d2wFxawuvu
PID: 4148
Rewards balance: 0.04721954
---------
Number: 13
Node: 12D3KooWFKj21JevjpBHdaJHS7tYy8VoXWgmwB6XLXCZckargvbz
PID: 31411
Rewards balance: 0.04042011
---------
Number: 21
Node: 12D3KooWG1FAFjnMZsXyKirjq9ZZMCMFFfyczMjHrfmMt34ZukdF
PID: 4188
Rewards balance: 0.00040237
---------
Number: 6
Node: 12D3KooWPvPTVNUUHYCZXuL9yV2nJXQYxJxJTDEQvAsct7auaeUT
PID: 4180
Rewards balance: 0.00011898
---------
Number: 30
Node: 12D3KooWL1jbwhc6NrdHPDtMF8CXiyod7zwikByXm8F9iPofVjLp
PID: 2990
Rewards balance: 0.00007618
---------
Number: 15
Node: 12D3KooWK11hrQheHGMo7WczkHRu461voXpasdqRgLV5k6B9WLkE
PID: 31487
Rewards balance: 0.00002251
---------
Number: 10
Node: 12D3KooWLUG1Cm4hBF2RKJ4cy8g13QEv6ouLxukbEEZkkaLYAei9
PID: 31495
Rewards balance: 0.00000936
---------
Number: 12
Node: 12D3KooWC4GTX31hsLgo6cThDkXhEd1arhmE3Xo8qeUkUDwt4RFJ
PID: 4212
Rewards balance: 0.00000218
---------
Number: 19
Node: 12D3KooWEZ9H8h5HhtffRGgwwtpJgroqgxc6AevfffHVk9M3vfHw
PID: 31529
Rewards balance: 0.00000128
---------
Number: 16
Node: 12D3KooWNJiXPHfgtiYKbXnGgg1ayPq6HWTFFDZxzrLszZMiQrpW
PID: 2982
Rewards balance: 0.00000128
---------

This is what it looks like: 2 vs the other 91.

5 Likes

Amazing! Thanks for that. I was thinking there would have been a big jump up and then more normal growth. But that high growth looks sustained and is normal for them.

Could there be a bug in their calculation of how cheap they are to store on, so that they are selected when they shouldn’t be?

Or is there lots of data somehow being stored that happens to fall in a hash range they cover, so that all the nodes in that group will be winning?

1 Like

Yeh I’m wondering about the relevant record calc myself.

@Josh Can you grep for "Relevant records len is" on those two nodes? (With that many records I’d expect it to differ from the actual file count, I think.)

This is possible, but to such a high degree that it’s worth checking for sure.


This does make me wonder if a nice, simple way to handle this (and one incentivized by node rewards) is that on join (or just before), we query the store cost at potential neighbours to see how dense things are. High prices mean we’d want to target there. (As much as any targeting may be possible… i.e., maybe we see cheap prices so we don’t bother going there.) :brain: :cloud_with_lightning:
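
For illustration, a minimal sketch of that join-time check (Rust, with hypothetical names and units; this is not the safenode API):

// Minimal sketch, hypothetical names only (not the safenode API): before
// joining, sample the quoted store cost at a few prospective neighbours
// and only target that region of xor space if it looks dense (expensive).

/// A quoted store cost in nano tokens (assumed unit for this example).
type NanoTokens = u64;

/// Decide whether a joining node should target a neighbourhood, given the
/// store-cost quotes collected from prospective close peers.
fn worth_targeting(quotes: &[NanoTokens], threshold: NanoTokens) -> bool {
    if quotes.is_empty() {
        return false; // nothing to go on, so don't bother targeting
    }
    // Use the median quote so a single outlier node doesn't dominate.
    let mut sorted = quotes.to_vec();
    sorted.sort_unstable();
    let median = sorted[sorted.len() / 2];
    median >= threshold
}

fn main() {
    // Illustrative quotes from five prospective neighbours.
    let quotes = [25_727_464, 30_000_000, 1_250, 28_500_000, 26_000_000];
    let threshold = 10_000_000; // assumed "this area is filling up" level
    println!("target this area? {}", worth_targeting(&quotes, threshold));
}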

4 Likes
wyse1@wyse1:~/.local/share/safe/node/12D3KooWFKj21JevjpBHdaJHS7tYy8VoXWgmwB6XLXCZckargvbz/logs$ grep "Relevant records len is" safenode.log*
safenode.log:[2023-09-11T14:46:01.481543Z TRACE sn_networking::record_store] Relevant records len is 2006

-------------------------
wyse2@wyse2:~/.local/share/safe/node/12D3KooWMKdayF4JUcc3uuwTprLjWeDSjro6XPsV15d2wFxawuvu/logs$ grep "Relevant records len is" safenode.log*
safenode.log:[2023-09-11T14:46:01.487612Z TRACE sn_networking::record_store] Relevant records len is 2013

Local Timestamp: Mon Sep 11 10:40:06 EDT 2023
Global (UTC) Timestamp: Mon Sep 11 14:40:06 UTC 2023
Number: 13
Node: 12D3KooWFKj21JevjpBHdaJHS7tYy8VoXWgmwB6XLXCZckargvbz
PID: 31411
Status: running
Memory used: 134.875MB
CPU usage: 6.4%
File descriptors: 2299
Records: 2006
Disk usage: 657MB
Rewards balance: 0.056148752

----------------------------------

Local Timestamp: Mon Sep 11 10:40:10 EDT 2023
Global (UTC) Timestamp: Mon Sep 11 14:40:10 UTC 2023
Number: 22
Node: 12D3KooWMKdayF4JUcc3uuwTprLjWeDSjro6XPsV15d2wFxawuvu
PID: 4148
Status: running
Memory used: 122.359MB
CPU usage: 5.4%
File descriptors: 2105
Records: 2013
Disk usage: 659MB
Rewards balance: 0.070288208

If you look at the bottom right, we may have another one getting ready to take flight. :slight_smile:

P.S. The link I shared to this graph should update at 15 minutes past every hour, so I don’t need to spam this thread with screenshots any longer, should anyone wish to track what is happening.

@joshuef should the record_store count drop? As in, if the data is no longer relevant for that node, does it get removed from disk? I have not noticed anything like that.

4 Likes

Oookay, thanks for that! hmmm, 100% relevant records is… sus. (Also entirely possible!).

4 Likes

The little guy down in the corner is definitely picking up steam and starting to climb.

If it is not a bug causing this, I don’t think it is altogether bad.

Somewhat like bitcoin mining, throw as much as you can at it and hopefully win a reward.
Kind of…

It eliminates the desire to kill nodes that are not currently doing well, because any one of them may be the next to win.

4 Likes

When it comes to ensuring a more even node distribution in xor space, I’ve suggested “poisson disc sampling” a few times. I don’t think this means targeting a group; rather, it would naturally control which nodes are allowed to join a group/section at a particular instant. For example, if a number of nodes are waiting to join a section, the one selected would be the one that satisfies the poisson disc sampling rule. Likewise, nodes could be told to relocate if they no longer satisfy proper poisson disc sampling after churn events.

https://www.jasondavies.com/maps/random-points/
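
For illustration, a minimal sketch of the minimum-distance rule behind poisson disc sampling, in a toy 1-D xor address space (this is not the actual network code):

// Minimal sketch, not the actual network code: a "dart throwing" form of
// poisson disc sampling in a toy 1-D xor address space (u64 addresses).
// A candidate node is admitted only if its xor distance to every already
// admitted node is at least `min_dist`, which keeps nodes spread out
// instead of clumping.

fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

fn admits(existing: &[u64], candidate: u64, min_dist: u64) -> bool {
    existing.iter().all(|&n| xor_distance(n, candidate) >= min_dist)
}

fn main() {
    let mut nodes: Vec<u64> = vec![0x1000, 0x8000_0000, 0xF000_0000_0000];
    let min_dist = 0x0800; // assumed minimum spacing for the example

    for candidate in [0x1200_u64, 0x9000_0000, 0x1040] {
        if admits(&nodes, candidate, min_dist) {
            println!("{candidate:#x} admitted");
            nodes.push(candidate);
        } else {
            println!("{candidate:#x} rejected: too close to an existing node");
        }
    }
}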

Just to dig in here. Say there are no queues and nodes just join at random.

Nodes can accept connections from other nodes or not (right now they always accept a node). So, from your suggestion, do you see a pattern/distribution by which nodes can individually say: hey, this new node close to me is not good, I am not connecting to it and won’t advertise it (effectively making it non-existent)? Then all the nodes in that vicinity do the same, the node is never advertised on the network, and it is effectively invisible and can do no harm.

5 Likes

Prices are going up…

Successfully made payment of 0.025727464 for 30 records. (At a cost per record of Token(25727464).)

nodeop@RewardNet02:~$ safe files upload Rewards_plotting/
Built with git version: 8faf662 / main / 8faf662
Instantiating a SAFE client...
🔗 Connected to the Network
Preparing (chunking) files at 'Rewards_plotting/'...
Making payment for 56 Chunks that belong to 30 file/s.
Transfers applied locally
After 26.044182925s, All transfers made for total payment of Token(25727464) nano tokens. 
Successfully made payment of 0.025727464 for 30 records. (At a cost per record of Token(25727464).)
Successfully stored wallet with cached payment proofs, and new balance 99.974272536.
10 Likes

Sooo, what happens here? I have a bunch more of these full nodes. Are they essentially out of the game until they churn?

wyse1@wyse1:~/.local/share/safe/node/12D3KooWFKj21JevjpBHdaJHS7tYy8VoXWgmwB6XLXCZckargvbz/logs$ grep "Relevant records len is" safenode.log*
safenode.log.20230912T042428:[2023-09-12T08:23:51.009821Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T042428:[2023-09-12T08:23:51.013813Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T042428:[2023-09-12T08:23:51.017812Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T042428:[2023-09-12T08:23:51.058332Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T042428:[2023-09-12T08:23:52.900744Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T042428:[2023-09-12T08:23:53.104677Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T042751:[2023-09-12T08:27:01.331327Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T043757:[2023-09-12T08:37:51.702582Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T044129:[2023-09-12T08:40:24.409136Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T050024:[2023-09-12T08:58:55.241845Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T050024:[2023-09-12T08:58:55.307941Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T051548:[2023-09-12T09:15:22.127764Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T051916:[2023-09-12T09:18:02.050180Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T051916:[2023-09-12T09:19:11.502784Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T052055:[2023-09-12T09:20:30.219666Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T052940:[2023-09-12T09:29:40.544191Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T052940:[2023-09-12T09:29:40.714744Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T053319:[2023-09-12T09:33:06.060790Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T053503:[2023-09-12T09:34:14.902843Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T055940:[2023-09-12T09:59:00.207203Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T061650:[2023-09-12T10:16:03.075003Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T061650:[2023-09-12T10:16:43.351966Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T061742:[2023-09-12T10:17:27.637705Z TRACE sn_networking::record_store] Relevant records len is 2048
safenode.log.20230912T064319:[2023-09-12T10:43:03.738428Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T064504:[2023-09-12T10:44:27.880494Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T064504:[2023-09-12T10:44:30.910224Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T064654:[2023-09-12T10:45:51.743992Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T064654:[2023-09-12T10:45:54.643507Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065020:[2023-09-12T10:49:39.105653Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065020:[2023-09-12T10:49:41.965111Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065207:[2023-09-12T10:51:04.719366Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065207:[2023-09-12T10:51:05.937482Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065348:[2023-09-12T10:52:28.591876Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065348:[2023-09-12T10:52:30.249266Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065450:[2023-09-12T10:53:56.580261Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065450:[2023-09-12T10:53:59.271582Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065624:[2023-09-12T10:55:21.530202Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065624:[2023-09-12T10:55:23.005788Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065624:[2023-09-12T10:55:45.424036Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065624:[2023-09-12T10:55:47.136415Z TRACE sn_networking::record_store] Relevant records len is 2047
safenode.log.20230912T065935:[2023-09-12T10:58:12.339161Z TRACE sn_networking::record_store] Relevant records len is 2047
3 Likes

I wonder who decides to store data in them despite the high price? Perhaps that is the problem? Is there an algorithm in the client that samples the prices of random nodes and selects the cheapest storage, or does it just pay whatever the first contacted node demands?
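
For illustration, a minimal sketch of the difference between the two approaches (hypothetical types and numbers; not the actual sn_client logic):

// Minimal sketch, hypothetical and not the actual sn_client logic: given
// store-cost quotes from a few of the nodes close to a chunk's address,
// the client could pay the first quote it received, or sample several and
// pick the cheapest. (All numbers below are illustrative.)

#[derive(Debug)]
struct Quote {
    node: &'static str, // placeholder peer label for the example
    cost_nanos: u64,    // quoted store cost in nano tokens
}

/// Pick the cheapest of the collected quotes, if any.
fn cheapest(quotes: &[Quote]) -> Option<&Quote> {
    quotes.iter().min_by_key(|q| q.cost_nanos)
}

fn main() {
    let quotes = vec![
        Quote { node: "node_a", cost_nanos: 25_727_464 },
        Quote { node: "node_b", cost_nanos: 19_804_112 },
        Quote { node: "node_c", cost_nanos: 1_250 },
    ];
    // "Pays whatever the first contacted node demands":
    println!("first quote:    {:?}", quotes.first());
    // Versus sampling prices and selecting the cheapest:
    println!("cheapest quote: {:?}", cheapest(&quotes));
}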

8 Likes

I was wondering the same… and also wondering: if for some reason the address range for a certain chunk is expensive, couldn’t self-encryption just add a nonce and re-hash that chunk to get a new address to upload to, hopefully with a lower store cost?
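
For illustration, a minimal sketch of that idea (toy hashing only; the real network derives chunk addresses via self-encryption and a cryptographic hash, not std's DefaultHasher):

// Minimal sketch of the idea above, using toy hashing only: the real
// network derives a chunk's address from its content via self-encryption
// and a cryptographic hash, not std's DefaultHasher. Appending a nonce
// changes the hash and therefore the address the chunk would be stored
// at, but the stored bytes are then no longer the original chunk.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for "address = hash of content".
fn address_of(content: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let chunk = b"original chunk bytes".to_vec();

    let mut with_nonce = chunk.clone();
    with_nonce.extend_from_slice(&1u64.to_le_bytes()); // append a nonce

    println!("original address:   {:#018x}", address_of(&chunk));
    println!("with-nonce address: {:#018x}", address_of(&with_nonce));
    // Different address, so a cheaper region could be hit, but any reader
    // fetching the new record gets chunk + nonce, not the original bytes.
}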

3 Likes

You could do that in the client for sure.

This is the thing, though: we need to see the network coping with fast-filling nodes, and on a small network this will happen if it runs long enough without extra nodes being added to give enough space. So really it’s better to see this behaviour and ensure the network copes with it, as it may be the only time we will see it.

In the live network, though, the hope is that it is widely adopted, which would mean there will be a lot of spare space and plenty of nodes relocating.

Another set of considerations:

  • The nonce has to be added to the file and self-encryption run again, or just added to the record if it is not a file. This means the data is no longer the same, and whoever reads the file gets a corrupted file. The nonce cannot be added outside the file/record, since the record would then be stored at an address that does not match its content.
  • Even if you could add the nonce without the above negative consequences, it only delays the point at which one has to pay the higher costs. As said above, one hopes the network is adopted, which means nodes will be continually added by new people during the adoption process. Also, as space is needed, people are encouraged to add new/additional nodes to the network.

In summary, even if the modification to add a nonce could be done without negative consequences AND it is needed, it only delays the problem. We hope the live network is adopted by many, so the issue will not occur, since people will add nodes as needed and receive reasonable compensation for them.

4 Likes

yep!

In theory folk should be starting more nodes to get some of that $$ you’ve been earning. So there’d be churn then.

7 Likes

For us this testnet has been a good run. (Although we’ve lost some nodes to modest memory use once more, ~350MB/node, so our cramped machines cannot handle that. We’ll likely set them up less densely next time, and we have some decent memory improvements coming along too.)

We’ve seen some angles for improvement (and have a lot of them already in), and we’ve seen folk getting rewards (and more logs there to see if we have an issue or not).

I’ll likely shut this down soon enough, as folks’ nodes are starting to fill up!

Thanks everyone for getting stuck in. :bowing_man: :muscle: :bowing_man:

22 Likes

I was not able to participate as fully as I would have liked. I eventually got 16 nodes running on a Hetzner instance last night and was able to see some earnings.
Like many others, I noticed that one node out of the 16 was a very clear leader, with approx. 99% of total earnings.
I’ve been playing around with the graphing tools from @Josh, and I hope we will soon have a one-click (or close to it) installer for the next testnet.

7 Likes

So you think… evil laughter… my node is still connected, though it seems to be getting full soon. Big spike in PUTs when you pulled the plug.

4 Likes

What can we use for SAFE_PEERS to connect to @Toivo’s AlamoNet?

4 Likes