I’m not concerned about the lottery, but about how well the network is going to survive in the wild. The design at the moment rewards cramming as many nodes as you can onto your resources with as little empty margin as possible.
But in discussions that margin has been deemed important for network resilience in case of a sudden outage of a portion of nodes. This is not enforced or rewarded.
Nodes that do not have enough storage will be shunned when they have no more room anyhow. So there is no need for other incentives or disincentives, which only increase complexity and undermine the “bring what you have and run nodes when your computer is on” approach.
Paying more for longer-running nodes disincentivises ordinary home users, who do not run their computers 100% of the time, and Windows users, whose regular updates restart the computer. Thus the datacentre-style setup will dominate and we lose most of our node runners, who would get only a fraction of the earnings for each node.
The ordinary home user will hopefully become the majority of node runners, by a factor of hundreds to thousands, in the longer term. To favour development for the datacentre style of node running is, in the long term, to reduce decentralisation down to the large setups.
We need to favour development for home users without negatively affecting larger setups. The priority should be home users with small setups, potato routers, and average computers and internet connections.
Not sure what leads you to this statement… what are you reading into this poll? (where we don’t have a clue who answered how, or why…)
Even if we trust those numbers, this would mean 4 out of 9 people above 1k are running with 30GB+ per node (of those above 5k it’s 100%), and those 2 with <5GB per node will peel off very early on when the network fills… Same with the other categories… So where’s the damage?
Yeah, you are right, it was too bold a statement given that sort of data. There’s not enough data to draw any conclusions.
I don’t know what data was used to reach the conclusion below, but it would be good to know. Otherwise these sorts of claims become much too easy to refute, or at least to cast reasonable doubt on, which makes the project look dishonest.
Okay, and I’m officially out of the disk space discussion now. If MaidSafe feels the need to care about this “issue”, that’s their decision. I for sure wouldn’t, and I think it changes exactly nothing either way (at least for my world it doesn’t change anything… and I would rather see them invest time into stuff that matters, e.g. network performance / API).
The discussion was simply about removing the CPU threshold code that stopped antnodes, so as not to have it. After a lengthy debate it didn’t feel natural to keep. Folks are reading too much into all this…
As to the impact it will have, whether on an over-provisioned system of nodes, a sudden burst in the network that leads to a sharp rise or fall in nodes, etc. etc., time will tell how this all plays out… The CPU threshold was put in because of crashes in past testnets caused by (a large number of) over-provisioned nodes… At the end of the day it was a simple solution, but it was still decided it wasn’t the right solution, so it was removed (a rough sketch of that sort of guard is below).
No magic answers or solutions at this time to a problem that’s not clearly defined either (in my opinion).
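For anyone who never saw it, the removed guard was conceptually along these lines. This is only a hypothetical sketch, not the actual antnode code: the 90% threshold, the sample count and the `read_cpu_usage_percent` helper are made-up stand-ins so the example is self-contained.

```rust
// Hypothetical sketch of a CPU-threshold guard of the kind described above.
// Not the removed antnode code: threshold, sample count and the metric helper
// are invented for illustration so this compiles and runs on its own.
use std::{thread, time::Duration};

const CPU_THRESHOLD_PERCENT: f32 = 90.0; // assumed, not the real value
const BREACHES_BEFORE_STOP: u32 = 3;     // assumed, not the real value

// Stand-in for a real system metric read; simulates load climbing as more
// nodes are crammed onto the same host.
fn read_cpu_usage_percent(sample: u32) -> f32 {
    50.0 + sample as f32 * 15.0
}

fn main() {
    let mut breaches = 0u32;
    for sample in 0u32.. {
        let usage = read_cpu_usage_percent(sample);
        if usage > CPU_THRESHOLD_PERCENT {
            breaches += 1;
            if breaches >= BREACHES_BEFORE_STOP {
                eprintln!("CPU at {usage:.0}% for {breaches} samples in a row, stopping node");
                break; // the guard described above stopped the antnode at this point
            }
        } else {
            breaches = 0;
        }
        thread::sleep(Duration::from_millis(100));
    }
}
```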
I think once the network passes out of the tiny-network stage this issue will be less of a problem. Nodes will have a lot more records they are responsible for, and probably more chunks they hang onto simply because there are more chunks close to the node’s ID.
There will be a sort of momentum to this, in that it’ll take a lot more events to cause anything more than a ripple. It will take much larger outages, no single large whale going offline will affect things much, and so on.
Under-provisioning a node’s storage space will not work as well, as nodes will be holding closer to the “desired” 32GB of storage and the percentage by which you can under-provision will be much smaller.
My opinion is that the messaging and advice given by any managing program, and/or advertising, and/or documentation/install guide has to account for what is expected of the network long term. 32GB seems to be the current thinking on the space our 64GB (max size), 16K-record nodes should have available (rough numbers sketched below).
That way the less technical population will be able to set & forget using the advice and not worry.
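To put rough numbers on the figures above: with 16K records and an assumed 4MB max record size (which is what makes the 64GB maximum add up), the half-of-max advice lands at 32GB, and the room for getting away with under-provisioning shrinks as nodes actually fill. A quick back-of-the-envelope sketch, with those assumptions baked in:

```rust
// Back-of-the-envelope numbers behind the 32GB advice. The 4MB max record
// size and the "provision half of max" rule of thumb are assumptions used
// only to make the arithmetic concrete.
fn main() {
    const MAX_RECORDS: f64 = 16_384.0;             // "16K records" per node
    const MAX_RECORD_SIZE_GB: f64 = 4.0 / 1024.0;  // assumed 4MB per record

    let max_per_node_gb = MAX_RECORDS * MAX_RECORD_SIZE_GB;
    println!("theoretical max per node:  {max_per_node_gb:.0} GB"); // 64 GB

    let recommended_gb = max_per_node_gb / 2.0;
    println!("recommended to set aside:  {recommended_gb:.0} GB");  // 32 GB

    // How much of the recommended 32 GB a runner could skip and still not run
    // out, for different levels of actual fill. The headroom vanishes as nodes
    // approach the desired 32 GB, which is why under-provisioning stops working.
    for expected_fill_gb in [1.0_f64, 4.0, 16.0, 30.0] {
        let skippable_pct = (recommended_gb - expected_fill_gb) / recommended_gb * 100.0;
        println!("nodes holding ~{expected_fill_gb:>2} GB -> could skip ~{skippable_pct:.0}% of the 32 GB");
    }
}
```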
That was a large network a month ago; now that much gets added in less than a day. IDK, it’s just odd to me, it’s hardly like the world suddenly discovered us, or did I miss Elon sending the tweet?