Scrabbling back on-topic, can some kind soul give me a couple of known good SAFE_PEERS, please?
Failing to get the latest node-launchpad working has made me somewhat irritated.
Not half as irritated as the crew of the Starliner; yet another scrub… a week's delay.
Launchpad disappointment is widespread.
My hunch is that the XOR space is so large that a random distribution would always lead to big differences in density between the densest and emptier areas… Any maths, @neo?
Actually, could it even get worse with size? With infinite space and random distribution it certainly would.
It's a random distribution of the nodes. With ideal randomness this means statistically equal distances between each node. In the real world, if you plot the distance to the nearest node for each node, it will be a bell curve, with by far most nodes having approximately the same XOR distance. This is because there were enough random events to get to “perfect” randomness. What I am saying is that the more nodes there are, the closer it gets to all nodes being an equal distance from each other in XOR space.
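A quick way to sanity-check that intuition (just a throwaway sketch, nothing from the node code) is to sample random IDs and look at each node's nearest-neighbour XOR distance at different network sizes. Comparing min, median and max at each size shows how (un)equal the neighbourhoods are; 64-bit IDs stand in for the real 256-bit addresses here.

```rust
// Rough sketch only: sample random node IDs and look at the spread of
// nearest-neighbour XOR distances as the node count grows. A tiny inline
// splitmix64 PRNG keeps it dependency-free.

fn splitmix64(state: &mut u64) -> u64 {
    *state = state.wrapping_add(0x9E37_79B9_7F4A_7C15);
    let mut z = *state;
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
    z ^ (z >> 31)
}

// For each node, the XOR distance to its closest neighbour.
fn nearest_distances(ids: &[u64]) -> Vec<u64> {
    ids.iter()
        .map(|&a| ids.iter().filter(|&&b| b != a).map(|&b| a ^ b).min().unwrap())
        .collect()
}

fn main() {
    let mut seed = 1u64;
    for &n in &[100usize, 1_000, 5_000] {
        let ids: Vec<u64> = (0..n).map(|_| splitmix64(&mut seed)).collect();
        let mut d = nearest_distances(&ids);
        d.sort_unstable();
        // Compare the emptiest and densest neighbourhoods against the median
        // to see how "equal" the nodes are at each network size.
        println!(
            "n = {:>5}  min = {:>20}  median = {:>20}  max = {:>20}",
            n, d[0], d[n / 2], d[n - 1]
        );
    }
}
```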
vdash only shows records after the node provides a quote since vdash was started (or if the current log already has one when vdash starts).
Also, I found with port-forwarding that when I forgot to allow the ports through the firewall, the nodes would not give quotes and thus did not earn or store new chunks. This only applies to port-forwarding.
What’s your view on the store cost differences when the network grows? Are we going to see more or fewer nodes with outlier prices as the network grows? Or maybe the size doesn’t matter?
I've been running 16 nodes on home-network for a week now, with good earnings. I had to remove 4 nodes and add 4 new ones because the old ones went bad. The interesting thing is that my home notebook with 16 nodes is now earning more than my VPS with 80 of them.
The outliers we see now are likely mostly due to the problems we currently see with relays etc., and they are either being fixed or will be fixed during beta. You know, interactions etc.
Then the storage use across all the nodes will likely follow a bell curve similar to the one for node XOR distance I mentioned above. Some will have more chunks than their neighbours because they happen to be in a more isolated XOR area and have to store more chunks (which are also randomly distributed).
81 million full tokens per chunk (not nanos… 81 million × 10^9 nanos)
… if the network would kick out nodes in places of very low cost (or just on a random basis, @qi_ma?) in such a case (and therefore make them rejoin the network at places of probably higher cost and better earnings), this might prevent the network from becoming stalled in an unwanted state … or we just hope for the effect of large numbers … but when some country suddenly disconnects this might still happen … it just needs bad luck and a local effect … that hopefully recovers again … but uploads are more or less impossible with such prices for certain chunks, i guess -.-" …
… but i added nodes too aggressively nonetheless … i need larger safety margins for the load peaks that happen (the limit was already at 80% average load … didn't expect that to be too little) … i kicked one server completely out of business … a pity, but i'm afraid i need to step back a bit on node count
ps: sorry @joshuef … but i thought it made more sense to run node resource requirement and control tests on this network than on beta-rewards-net … i'm trying not to be too harsh with it …
Ten cloud nodes after 34 hours. One has a ridiculously large storage cost: 308 SNT/chunk!
It is the one with the most records, but that still seems too steep a curve - @qi_ma? Another has earned but is still showing a store cost of zero, which is probably something not picked up from the logfile, but odd.
I can top that! Looking at Vdash for my 40 nodes there are a couple with a ridonculously large StoreCost: 15010 and 240834.
When I stopped 20 of them the night before last, because of suspicions from some quarters in this household that I’d ‘broken the internet’, I just stopped nodes 21 to 40.
The 15010 one is in the first 20, so it carried on running, but the 240834 one is in the second 20, so it didn’t.
Now I think about it - it would be healthier for the network, and potentially more lucrative, to stop nodes with low StoreCosts and definitely leave the ones with a high StoreCost running.
The problem is that that info is not available via the RPC API (last time I checked)… Store costs just sometimes appear in the logs when a quote is requested… So for long periods of time it’s not possible to know the current storage cost of your nodes…
That is way down the list of things to be done at the moment, along with getting info on record (chunk) storage, namely the active and inactive records. Who knows when we’ll get them. Maybe a community member versed in the node s/w and RPCs can make those available in a PR.
When my machine rebooted, I suspect it defaulted to no interval too. I haven't checked the logs, but a sensible default, or a way to change it, could be handy.
With the current formula, once records exceed 1,500, the curve starts to become very steep.
It could indeed be too steep, and it is being discussed.
The minimum cost to give is 10, and zero is given only for existing records.
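For anyone wanting to picture the shape, here is a purely made-up curve, not the real pricing formula: an exponential with a floor of 10 nanos, where the capacity and growth factor are invented numbers chosen only so it stays low with spare room and climbs hard past roughly 1,500 records.

```rust
// Purely illustrative, NOT the real pricing formula: an exponential curve
// with a floor of 10 nanos, just to show the kind of shape being discussed.
// MAX_RECORDS and the growth factor are made-up numbers for this sketch.

const MIN_COST_NANOS: f64 = 10.0; // minimum quote mentioned above
const MAX_RECORDS: f64 = 2048.0;  // assumed capacity for this illustration

fn illustrative_store_cost(stored_records: u64) -> u64 {
    let fill = stored_records as f64 / MAX_RECORDS;
    // Exponential in the fill ratio; the factor 20.0 is arbitrary and only
    // chosen so the steep part of this toy curve lands past ~1,500 records.
    (MIN_COST_NANOS * (20.0 * fill).exp()).max(MIN_COST_NANOS) as u64
}

fn main() {
    for records in [0u64, 500, 1_000, 1_500, 1_800, 2_000] {
        println!("{:>5} records -> {:>12} nanos", records, illustrative_store_cost(records));
    }
}
```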
Then the statistics show the node earned 22 but gives a 0 quote, which I'd say has a high chance of being due to log rotation?
That would be quite risky and could be abused easily.
It may bring in more trouble than benefit.
I'd expect a larger number of nodes will overcome this eventually.
My thought is that clients will be tweaked by MS or others to be choosy and avoid those high storecosts, so people will be incentivised to shut them down and restart.
That will be bad for the network and will, I think, be hard to mitigate.
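For illustration only (nothing like this is confirmed to be in any client), a "choosy" client could simply sort the quotes it receives, drop the extreme outliers and pay the cheaper nodes; all the names and the 10x threshold below are made up for the example.

```rust
// Hypothetical client-side tweak, not actual client code: given the quotes
// returned for a chunk, drop extreme outliers and prefer the cheapest nodes.

fn pick_payees(mut quotes: Vec<(String, u64)>, keep: usize) -> Vec<(String, u64)> {
    if quotes.is_empty() {
        return quotes;
    }
    // Sort by quoted price (nanos), cheapest first.
    quotes.sort_by_key(|&(_, cost)| cost);
    let median = quotes[quotes.len() / 2].1;
    // Drop anything quoting more than 10x the median, then keep the cheapest
    // `keep` quotes. The 10x threshold is an arbitrary number for the example.
    quotes.retain(|&(_, cost)| cost <= median.saturating_mul(10));
    quotes.truncate(keep);
    quotes
}

fn main() {
    let quotes = vec![
        ("node-a".to_string(), 12),
        ("node-b".to_string(), 15_010),
        ("node-c".to_string(), 10),
        ("node-d".to_string(), 240_834),
        ("node-e".to_string(), 14),
    ];
    // Prints the three cheapest, non-outlier quotes.
    println!("{:?}", pick_payees(quotes, 3));
}
```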
It isn’t due to log rotation. That would require me to have restarted vdash and that hasn’t happened.
vdash shows zero store cost until it has a figure, so this means that despite the node having earned, vdash hasn’t seen the log message with store cost in it. One explanation would be that it got paid without quoting.
EDIT: @qi_ma well this is odd. I grepped the logs for StoreCost and that node has had a range of StoreCosts. But the most recent log message has it as zero: