So when the network needs more storage, does it only offer higher rates to new nodes? (Assuming nothing is busted.) That raises behavioral questions I haven’t considered before.
Would you happen to know if these are new nodes, or just the most recent nodes you spun up?
I started all of them last night so they are all “new”.
I am not sure that it is a problem; I suspect being the node of the hour may be cyclical.
Surely they can’t all be getting new PUTs at the same time, so some do well today, some do well tomorrow, etc.
Do you see a flaw in that assumption? It seems a bit uneven nonetheless.
No, not particularly. I was just thinking that if you offer higher rates to new nodes and not to all storing nodes (which may not even be the case; it’s simply that I’ve never thought of it from this angle before), older nodes could maybe want to drop old data for new.
Obviously there is probably some punishment to prevent this, and someday archive nodes will probably be some of the most handsomely paid. So it’s not really a concern; I had just assumed the higher reward rate would apply to all data at that moment in time, not just new data on new nodes.
A complete block of text, but maybe you get my train of thought?
The way I understand it, all nodes, new and old, hold old data. New nodes don’t get rewards for taking on old (already paid for) data.
So, new or old, “node age” shouldn’t affect rewards, I think.
Well, when the reward increases it’s to encourage new storage for new data, but when that happens does the rate also go up across the board when there is a GET?
See, I think my ‘farming’ (outdated term) economics aren’t up to snuff.
So are nodes still paid on GETs for maintaining data?
Nodes are paid at the time of storage by the uploader doing the PUT. I believe I have this right.
So maybe I’m not delineating between the GET reward (do we still do that?) for old data and the higher rewards to incentivize storing new data.
If I’m just being confusing, feel free to ignore me
I’ll catch up eventually. Thanks Josh
He’s waiting for David to complete a sentence in another thread
Yeah, we were supposed to, at several different times, probably several years ago. We’ve been here a while. I was starting to think it was purgatory, but I think heavenly gifts await us now.
I’ve had a couple of failures uploading random files. It looks like a failure of the payment, or of getting a price, because when it fails I get:-
Making payment for 41 Chunks that belong to 11 file/s.
but then not:-
Transfers applied locally
It looks like the cost per record is going up and up as well, as reported above.
I’ve started another Instance with 50 safenodes (there were only 27 of the original 50 still running on my other Instance because of the machine running out of RAM). I’m getting an error on the console that I’ve not seen before:-
dbc exists
Has anyone seen that before?
But they are getting lots of records, hundreds after only a few minutes, though no rewards so far. I think that will be because they are getting replication chunks from all the nodes that have stopped. So the network is probably struggling with all the replication going on because of the nodes that have stopped due to the memory leak, and that might explain the upload failures I’m getting, and maybe that error above.
I’ve searched through the logs and I can’t find a mention of the string ‘dbc exists’.
Never in a test, but it was the early concept, and it lasted for a long time. Early on it was considered that pay-per-GET was the way to go, since that more directly covered a node’s uptime and bandwidth costs, as small as they are. This was part of the model where the “network” paid nodes and uploaders paid the “network”.
Recently (in dev-lifespan terms) we changed to paying nodes directly rather than paying the network when uploading. This involved paying both the node and the foundation reserve.
While the idea of uploaders paying the “network” and the “network” paying the node that got the chunk had very nice benefits for distributing the remaining 70% and buffering highs and lows in uploading and downloading, it is really quite complex compared to paying the nodes directly.
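To make the direct-payment shape concrete, here’s a minimal sketch of what “pay the nodes and the foundation reserve at PUT time” could look like. The function names, the nano-token units, and the 15% foundation rate are all my own placeholders, not the actual safe_network API:

```rust
// A sketch of the direct-payment model at PUT time. All names and
// the foundation rate are illustrative; this is not the real
// safe_network API.

/// Total an uploader pays for one chunk: the store cost quoted by
/// each node that will hold it, plus a share for the foundation
/// reserve (the 15% used below is a made-up figure).
fn payment_for_chunk(node_quotes: &[u64], foundation_rate_pct: u64) -> u64 {
    let to_nodes: u64 = node_quotes.iter().sum();
    let to_foundation = to_nodes * foundation_rate_pct / 100;
    to_nodes + to_foundation
}

fn main() {
    // Suppose the nodes closest to the chunk's address quote these
    // prices (in nano tokens).
    let quotes = [10, 12, 15];
    println!("uploader pays {} nanos total", payment_for_chunk(&quotes, 15));
}
```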
I see your 0.016810944 and I can raise you 0.030408704. It has 1991 records, so only a few more than yours, but maybe a higher proportion of the 1971 records on yours are non-paying replication records.
This node has only been running since about 1600 yesterday, when I started 50 new ones on a new Instance, but it has a far higher balance than any of my original 50 started a week ago (of which only 24 are still running).
I have a feeling there will always be comparative ‘lottery winner’ nodes, but the degree will lessen as the network increases in size. Unless someone can figure out a way to game the system.
That one being sooo far in front is interesting. They’re clearly fuller and so charging more, and therefore would get more… But they are almost full… QQ: is the record count there taken from a disk read? I’m wondering if our calculation is off and we’re not actually filtering for relevant records, perhaps.
FWIW: I don’t think it’s necessarily about one performing better than another here, just one storing more.
(ie. if it is only relevant records, then they may just be alone in a big patch of xorspace. If the algo is buggy, they may just have more records from having been moved about more, eg.)
The Qs to me are:
is our store cost algo definitely removing irrelevant data? (see the sketch after this list)
if so… why is this one so full!?
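On that first question, here’s a rough sketch of the kind of filtering being asked about: counting only records whose address falls within the node’s responsible XOR range when judging how full it is. The toy 8-bit addresses and all names are mine; the real store cost code will differ in detail:

```rust
// Sketch: only count records this node is actually responsible for,
// ignoring records picked up through churn/replication that are now
// out of range. Illustrative only; not the real safe_network logic.

/// Toy 8-bit addresses; the real network uses 256-bit XOR addresses.
fn xor_distance(a: u8, b: u8) -> u8 {
    a ^ b
}

/// Count only records within the node's responsible range.
fn relevant_record_count(node_addr: u8, records: &[u8], max_distance: u8) -> usize {
    records
        .iter()
        .filter(|&&rec_addr| xor_distance(node_addr, rec_addr) <= max_distance)
        .count()
}

fn main() {
    let node_addr = 0b1010_0000;
    // Two records are close in xorspace; two were carried in from
    // elsewhere and no longer belong to this node.
    let records = [0b1010_0001, 0b1010_1100, 0b0001_0011, 0b1110_0000];
    let count = relevant_record_count(node_addr, &records, 0b0001_1111);
    println!("{count} of {} records are relevant", records.len());
}
```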
There was a nice post on Hacker News a few weeks ago about how a random sample really will give you a bell curve. It seems counterintuitive, but if it is random, then it makes sense that some nodes would get more than others. Here’s the link
(That doesn’t preclude a bug, mind.)
Indeed, though as with the above, random doesn’t necessarily mean uniform!
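To see that spread for yourself, here’s a quick simulation, assuming records land on nodes uniformly at random (a crude stand-in for hashed addresses in xorspace). It uses the rand crate (rand = "0.8"):

```rust
// Drop records uniformly at random onto nodes and look at the
// spread: even with genuinely uniform randomness, the binomial
// distribution means some nodes end up holding noticeably more.

use rand::Rng;

fn main() {
    let nodes = 50;
    let records = 100_000;
    let mut counts = vec![0u32; nodes];

    let mut rng = rand::thread_rng();
    for _ in 0..records {
        // Each record lands on a node chosen uniformly at random.
        counts[rng.gen_range(0..nodes)] += 1;
    }

    let min = counts.iter().min().unwrap();
    let max = counts.iter().max().unwrap();
    // Expected 2000 per node, but min and max will differ visibly.
    println!("min {min}, max {max}, expected {}", records / nodes);
}
```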
How do you propose to do this in the SAFE network, and why that approach of limiting randomness? Remember, we don’t have network-wide consensus or total order (and neither should we).
Years ago, when sections were all the thing, a couple of us showed that a random distribution of nodes across sections would see some sections with all nodes from the same region. This was a concern for outages. Basically, this is the bell curve of random distribution in action.
Of course there is also the randomness of the records being stored. The addresses of the records are controlled not by the network or the nodes but by the files the user uploads, and this will not be an absolutely even distribution. Maybe close.
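For anyone following along, the reason those addresses can’t be controlled is that a chunk’s address is derived from its content. A rough sketch using plain SHA-256 (sha2 = "0.10") as a stand-in; as I understand it the real network derives addresses from self-encrypted chunk content, but the consequence is the same: placement follows the hash, close to uniform but never perfectly even:

```rust
// Content-derived addressing: neither the network nor the nodes
// choose where a record lands; the hash of its content does.

use sha2::{Digest, Sha256};

fn chunk_address(content: &[u8]) -> [u8; 32] {
    Sha256::digest(content).into()
}

fn main() {
    let addr = chunk_address(b"some chunk of an uploaded file");
    // The leading bits determine which nodes are responsible for it.
    println!("address starts 0x{:02x}{:02x}...", addr[0], addr[1]);
}
```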
Many implementations are possible with the same goal: lowering the chance of having keys too close to each other.
It does not matter much where exactly a node finally lands, but that place should be at least somewhat better than a completely random choice. (I don’t know exactly how groups etc. work in SN, so I’m just giving the general idea.)
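One generic way to be “somewhat better than completely random” is the classic power-of-two-choices trick: sample two candidate addresses and keep the one whose neighbourhood is less crowded. A sketch of that general idea, not anything SN actually does:

```rust
// "Power of two choices": of two random candidate addresses, keep
// the one farther from its nearest existing neighbour in XOR space.
// A generic smoothing technique, offered purely as illustration.
// Uses the rand crate (rand = "0.8").

use rand::Rng;

fn pick_address(existing: &[u64], rng: &mut impl Rng) -> u64 {
    let a = rng.gen::<u64>();
    let b = rng.gen::<u64>();
    // Distance from a candidate to its nearest existing neighbour.
    let nearest = |x: u64| {
        existing
            .iter()
            .map(|&e| e ^ x) // XOR distance, as in kademlia-style spaces
            .min()
            .unwrap_or(u64::MAX)
    };
    if nearest(a) >= nearest(b) { a } else { b }
}

fn main() {
    let mut rng = rand::thread_rng();
    let mut nodes: Vec<u64> = Vec::new();
    for _ in 0..100 {
        let addr = pick_address(&nodes, &mut rng);
        nodes.push(addr);
    }
    println!("placed {} nodes with reduced clustering", nodes.len());
}
```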