RewardNet [04/09/23 Testnet] [Offline]

So when the network needs more storage, does it only offer higher rates to new nodes? (Assuming something isn’t busted.) That raises behavioral questions I haven’t considered before :thinking:

Would you happen to know if these are new nodes, or just the most recent nodes you spun up?

I started all of them last night so they are all “new”.

I am not sure that it is a problem; I suspect being the node of the hour may be cyclical.
Surely they can’t all be getting new PUTs at the same time, so some do well today, some do well tomorrow, etc.
Do you see a flaw in that assumption? It does seem a bit uneven nonetheless.

2 Likes

No, not particularly. I was just thinking that if higher rates are offered to new nodes and not to all nodes storing data (which may not even be the case; I’ve simply never thought about it from this angle before), then older nodes might want to drop old data for new.

Obviously there is probably some punishment to prevent this, and someday archive nodes will probably be some of the most handsomely paid. So it’s not really a concern; I just had the assumption that the higher reward rate would apply to all data at that moment in time, not just new data on new nodes.

A complete block of text, but maybe you get my train of thought?

2 Likes

The way I understand it, all nodes, new and old, hold old data. New nodes don’t get rewards for taking on old (already paid for) data.
So node age, new or old, shouldn’t affect rewards, I think.

1 Like

Well, when the reward increases it’s to encourage new storage for new data, but when that happens, does the rate go up across the board when there is also a GET?

See, I think my ‘farming’ (outdated term) economics aren’t up to snuff.

So are nodes still paid on GETs for maintaining data?

Nodes are paid at the time of storage by the uploader doing the PUT; I believe I have this right.

So maybe I’m not delineating between the GET reward (do we still do that?) for old data and the higher rewards to incentivize storage space for new data.

If I’m just being confusing, feel free to ignore me :joy:
I’ll catch up eventually. Thanks Josh

2 Likes

Did they ever? Hmmm. Where is @JPL and that primer :grin:

1 Like

He’s waiting for David to complete a sentence in another thread :smile:

Yeah, we were supposed to at several different times, probably several years ago. We’ve been here a while. I was starting to think it was purgatory, but I think heavenly gifts await us now.

Just need that Primer!

1 Like

I’ve had a couple of failures uploading random files. It looks like it’s a failure of the payment, or of getting a price, because when it fails I get:-

Making payment for 41 Chunks that belong to 11 file/s.

but then not:-

Transfers applied locally

It looks like the cost per record is going up and up as well, as reported above.

I’ve started another Instance with 50 safenodes (there were only 27 of the original 50 still running on my other Instance because of the machine running out of RAM). I’m getting an error on the console that I’ve not seen before:-

dbc exists

Has anyone seen that before?

But they are getting lots of records: hundreds after only a few minutes. No rewards so far, though, which I think is because they are getting replication chunks from all the nodes that have stopped. So I think the network is struggling with all the replication going on because of the nodes that have stopped due to the memory leak, and that might explain the upload failures I’m getting, and maybe that error above.

I’ve searched through the logs and I can’t find a mention of the string ‘dbc exists’.

3 Likes

They are. And good job with the graphs @Josh! :clap:

Here is my node for comparison. No news really, other than that it has been working from home behind NAT without any problems for, what, 9 days?

Timestamp: Sun Sep 10 14:08:27 EDT 2023
Number: 1
Node: 12D3KooWCXJYagGXfHcv3jcxZmq8k8CLUDZ6qaQyreV5JkuQnAvC
PID: 7036
Memory used: 157.641MB
CPU usage: 4.6%
File descriptors: 2144
Records: 486
Disk usage: 159MB
Rewards balance: 0.000001824

10 Likes

Never in a test, but it was the early concept and it lasted for a long time. Early on it was considered that pay-per-GET was the way to go, since that more directly covered a node’s uptime and bandwidth costs, as small as they are. This was part of the model in which the “network” paid nodes and uploaders paid the “network”.

Recently (over the dev lifespan) we changed to direct payment of nodes, rather than paying the network when uploading. This involved paying both the node and the foundation reserve.

While the idea of uploaders paying the “network” and the “network” paying the node that got the chunk had very nice benefits for distributing the remaining 70% and buffering highs and lows in uploading and downloading, it is really quite complex compared to paying the nodes directly.
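
As a very rough sketch of the “pay nodes directly” shape described here (not the real client code; the reserve fraction below is a made-up figure, purely to show the split between the storing node and the foundation reserve):

// Toy sketch only: the 10% reserve share is a hypothetical figure.
struct ChunkPayment {
    to_storing_node: u64,
    to_foundation_reserve: u64,
}

fn split_payment(quoted_store_cost: u64) -> ChunkPayment {
    // Hypothetical: a fixed fraction of each payment goes to the reserve.
    let reserve_share = quoted_store_cost / 10;
    ChunkPayment {
        to_storing_node: quoted_store_cost - reserve_share,
        to_foundation_reserve: reserve_share,
    }
}

fn main() {
    let p = split_payment(1_000);
    println!("node gets {}, reserve gets {}", p.to_storing_node, p.to_foundation_reserve);
}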

7 Likes

I’ve got a lottery winner as well :slight_smile: but another node beside it has not been so lucky :frowning:

------------------------------------------
Timestamp: Mon 11 Sep 00:20:58 EDT 2023
Node: 12D3KooWQvNXyWqf1VuoTXQGgfc2ipzLmeFpkmopHY97dXZj3qrJ
PID: 2345
Memory used:
CPU usage:
ls: cannot access '/proc/2345/fd/': No such file or directory
File descriptors: 0
IO operations:
cat: /proc/2345/io: No such file or directory
ls: cannot access '/proc/2345/task/': No such file or directory
Threads: 0
Records: 1971
Disk usage: 651MB

Node wallet balance  0.016810944

------------------------------------------
Timestamp: Mon 11 Sep 00:20:58 EDT 2023
Node: 12D3KooWPvd5qv9mcn4AXoB2CsfRgCX9FZKkCPiFRijy3RjFSqLH
PID: 2380
Memory used:
CPU usage:
ls: cannot access '/proc/2380/fd/': No such file or directory
File descriptors: 0
IO operations:
cat: /proc/2380/io: No such file or directory
ls: cannot access '/proc/2380/task/': No such file or directory
Threads: 0
Records: 152
Disk usage: 50MB

Node wallet balance  0.000000028

------------------------------------------

3 Likes

I see your 0.016810944 and raise you 0.030408704. It has 1991 records, so only a few more than yours, but maybe a higher proportion of the 1971 records on yours are non-paying replication records.

This node has only been running since about 1600 yesterday, when I started 50 new ones on a new Instance, but it has a way higher balance than any of the original 50 nodes started a week ago (of which only 24 are still running).

I have a feeling there will always be comparative ‘lottery winner’ nodes, but the degree will lessen as the network increases in size. Unless someone can figure out a way to game the system.

6 Likes

Excellent. Interesting stuff!

That one being sooo far in front is interesting. They’re clearly fuller and so charging more, and therefore would get more… But they are almost full… QQ: is the record count there taken from a disk read? I’m wondering if our calculation is off and we’re not actually filtering for relevant records, perhaps :thinking:

FWIW: I don’t think it’s necessarily about one performing better than another here. Just one storing more.

(i.e. if it is only relevant records, then they may just be alone in a big xorspace. If the algo is buggy, they may just have more records from having been moved about more, e.g.)

The Qs to me are:

  • is our store cost algo definitely removing irrelevant data?
  • if so… why is this one so full!?
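
For what it’s worth, a toy sketch of the distinction in question (none of this is the actual safenode pricing; the curve, capacity and counts are made-up numbers): if the store cost is driven by everything found on disk rather than only the records the node is still responsible for, a node carrying stale replicated data would both look full and charge more.

// Toy illustration, not the real pricing algorithm.
fn store_cost(record_count: usize, max_records: usize) -> u64 {
    // Hypothetical curve: price rises steeply as the store approaches capacity.
    let fullness = record_count as f64 / max_records as f64;
    (10.0 * (1.0 + 9.0 * fullness * fullness)).round() as u64
}

fn main() {
    let max_records = 2048;
    let on_disk = 1971;   // everything ever written, including replicated records
    let relevant = 900;   // hypothetical count of records still in this node's range

    println!("cost if we count the disk:     {}", store_cost(on_disk, max_records));
    println!("cost if we filter to relevant: {}", store_cost(relevant, max_records));
}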

There was a nice post on Hacker News a few weeks ago about how a random sample really will give you a bell curve. It seems counterintuitive, but if it is random, then it makes sense that some nodes would get more than others. Here’s the link

(I’m not saying that precludes a bug, mind.)
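
A quick way to see it in simulation (a minimal sketch, assuming records are handed out uniformly at random to nodes, which only approximates XOR-address placement, and using the rand crate):

use rand::Rng;

// 2000 records assigned uniformly at random to 50 nodes: even with a
// perfectly uniform choice, the per-node counts spread out, so a few
// "lottery winner" nodes are expected.
fn main() {
    let nodes = 50;
    let records = 2000;
    let mut counts = vec![0u32; nodes];
    let mut rng = rand::thread_rng();

    for _ in 0..records {
        counts[rng.gen_range(0..nodes)] += 1;
    }

    counts.sort_unstable();
    println!("min {}  median {}  max {}", counts[0], counts[nodes / 2], counts[nodes - 1]);
}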

Indeed, though as with the above, random doesn’t necessarily mean uniform!

8 Likes

That’s why I propose to limit randomness.
Please read the rest of my message too.

How do you propose to do this in the SAFE network, and why that approach of limiting randomness? Remember, we don’t have network-wide consensus or total order (and neither should we).

2 Likes

Back years ago, when sections were all the thing, a couple of us showed that a random distribution of nodes across sections would see some sections with all their nodes from the same region. This was a concern regarding outages. Basically this is the bell curve of random distribution in action.

Of course there is also the randomness of the records being stored. The addresses of the records are not controlled by the network or the nodes but by the files the user uploads, and this will not be an absolutely even distribution. Maybe close.
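
As a back-of-the-envelope illustration of that old section argument (the region shares and section size below are made-up numbers): the chance of a randomly filled section being entirely from one region is small per section, but across many sections you still expect quite a few of them.

// Hypothetical figures: 3 regions holding 50%/30%/20% of all nodes,
// sections of 8 nodes filled by uniform random assignment.
fn main() {
    let region_share = [0.5_f64, 0.3, 0.2];
    let section_size = 8;
    let sections = 10_000;

    // P(a given section is entirely from one region) = sum over regions of share^k
    let p_single: f64 = region_share.iter().map(|f| f.powi(section_size)).sum();
    // Expected number of such sections across the whole network
    let expected = p_single * sections as f64;

    println!("P(section all one region) = {p_single:.5}");
    println!("expected all-one-region sections out of {sections}: {expected:.1}");
}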

1 Like

By generating several keys and selecting the “best” one.

To even out the load on nodes.
(Perfect results still won’t be possible, but I expect them to be better than without “key mining”.)

upd. This approach is similar to restarting a node until you get a better spot… but without restarting and wasting resources.
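
Something along these lines, perhaps (a rough sketch of the “key mining” idea only, not anything from the actual codebase, and using the rand crate; node ids are just random 32-byte values here, and “best” means farthest in XOR distance from the nearest peer the joining node can already see):

use rand::{rngs::ThreadRng, RngCore};

fn random_id(rng: &mut ThreadRng) -> [u8; 32] {
    let mut id = [0u8; 32];
    rng.fill_bytes(&mut id);
    id
}

// XOR distance between two ids; the byte arrays compare lexicographically.
fn xor_distance(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

fn main() {
    let mut rng = rand::thread_rng();

    // Stand-in for the peers a joining node can already see.
    let known_peers: Vec<[u8; 32]> = (0..20).map(|_| random_id(&mut rng)).collect();

    // "Mine" 100 candidate ids and keep the one whose closest known peer is
    // farthest away, i.e. the candidate in the emptiest visible part of xorspace.
    let candidates: Vec<[u8; 32]> = (0..100).map(|_| random_id(&mut rng)).collect();
    let best = candidates
        .iter()
        .max_by_key(|cand| {
            known_peers
                .iter()
                .map(|peer| xor_distance(cand, peer))
                .min()
                .unwrap()
        })
        .unwrap();

    println!("chosen id prefix: {:02x?}", &best[..4]);
}

The obvious caveat, as the reply below notes, is that anything letting a node influence where it lands also looks like address targeting, so the network may deliberately make this infeasible.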

3 Likes

Do you mean targeting a group? (That will very likely be made computationally infeasible in future updates with the Sybil resistance measures.)

5 Likes

Many implementations are possible with the same goal: lowering the chance of keys ending up too close to each other.
It does not matter much where exactly a node finally lands, but the chosen spot should be at least somewhat better than a completely random choice.
(I don’t know exactly how groups etc. work in SN, so I’m just giving the general idea.)