Looking for feedback on whether this scenario is feasible and/or likely.
Some background info:
Currently (based on rfc0057) new nodes are only allowed to join when more than 50% of existing nodes are full [1]. If fewer than 50% are full, the existing nodes keep getting filled until the 50% threshold is crossed. Maybe the 50% number will change, maybe the disallow rule will change, but the idea of farming pools remains relevant in any situation where new farming nodes are not allowed to join the network.
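To make the rule concrete, here's a rough Rust sketch of the kind of check I mean (the names and the exact boundary condition are my own, not the actual vault code):

```rust
// Hypothetical sketch of the rfc0057-style disallow rule; names and the exact
// boundary are illustrative, not the real implementation.
struct Section {
    full_nodes: usize,
    total_nodes: usize,
}

impl Section {
    /// A new node is only allowed while more than 50% of the section's nodes
    /// are full (equivalently, the ratio of not-full nodes has dropped below 50%).
    fn allow_new_node(&self) -> bool {
        // integer comparison instead of floating point: full/total > 1/2
        self.full_nodes * 2 > self.total_nodes
    }
}
```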
Here's the farming pool idea:
I try to join the network but my node is not needed, so it's disallowed. I really want to start farming, and I come across something called a 'farming pool', a service offered by an existing node on the network. The node lets me join their pool right now and they'll give me some chunks to look after and a proportional share in the rewards (they take a small fee, but I'd prefer a small fee to not farming at all). Great, I can start farming right now! I join the pool, and in the background I'll keep trying to join the network as a real farming node. The pool lets me earn rewards while I wait.
This makes the pool operator node appear to the network as a very large node (the network has no idea it's a pooled resource). Ironically, the pool participant has reduced their chance of being able to take part as a 'real' farmer, because now it's become even harder to cross the 50% full nodes threshold.
Does this sound like a feasible situation? Would it be a problem?
A second, similar concern: if datacenters start taking part they could do the same thing, appearing as one massive node (or, more likely, as a maximally-viable number of not-full nodes). There's an incentive to do this since full nodes earn less reward (rfc0057 halves the reward for full nodes [2]), so two full nodes are rewarded less than one not-full node. The incentive to not be full is very strong.
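Just to spell out that incentive, here's the reward weighting quoted in [2] written as a tiny Rust sketch (struct and field names are illustrative only, not the real implementation):

```rust
// Sketch of the rfc0057 reward weighting quoted in [2]; names are illustrative.
struct Vault {
    age: u64,
    full: bool,
}

impl Vault {
    /// Age used to weight the reward: halved when the vault is flagged as full.
    fn reward_age(&self) -> u64 {
        if self.full { self.age / 2 } else { self.age }
    }
}

// So an operator with a fixed amount of storage is better off presenting it as
// one (or a few) never-full vaults than as many full ones.
```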
I feel like using full nodes as the measure for when and how to take action is potentially dangerous to the health of the network.
Just brainstorming here, would be interested in other views on this.
I feel like rather than using full nodes as the measure, we could use degree-of-redundancy to decide network actions (like the sacrificial chunks idea in rfc0012, but a little more flexible). There can be a fixed minimum redundancy, and any amount of extra redundancy floating above that can be used to measure how plentiful or how stressed resources are.
For example, enforce a minimum of 8 redundant chunks. If measured redundancy is 20 there are plenty of spare resources and the network can maybe start rewarding less to weed out inefficient resources. If measured redundancy is 8 the network keeps the reward where it is. If measured redundancy is 7 the network takes immediate action to bring it back up to 8 by increasing the reward.
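As a rough Rust sketch of that control loop (the constant and the three-way response are just the placeholder numbers from my example, not a proposal for actual values):

```rust
// Purely illustrative sketch of the floating-redundancy idea above.
const MIN_REDUNDANCY: u32 = 8;

enum RewardAction {
    Decrease, // plenty of spare copies: start weeding out inefficient resources
    Hold,     // sitting at the minimum: keep the reward where it is
    Increase, // below the minimum: act immediately to attract more resources
}

fn reward_action(measured_redundancy: u32) -> RewardAction {
    if measured_redundancy < MIN_REDUNDANCY {
        RewardAction::Increase
    } else if measured_redundancy == MIN_REDUNDANCY {
        RewardAction::Hold
    } else {
        RewardAction::Decrease
    }
}
```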
Allowing a floating amount of redundancy means there is no disallow rule: any node can start farming at any time (onboarding may take time due to bandwidth constraints, but no node is ever disallowed). With no disallow rule there's no incentive for pooled farming; all nodes may as well join the real network.
The disallow rule is seen as a necessary security mechanism to reduce the chance of the network being flooded with new nodes. But I feel the disallow rule also has other side effects which are potentially quite dangerous (e.g. farming pools). Is the disallow rule a net positive? Tough question...
[1] rfc0057: "if that ratio of good nodes drops or remains below 50%, vaults will ask routing to add a new node to the section."
[2] rfc0057: "if flagged as full { node's age/2 }". Age is used to weight the reward, so the effect of halving the age is not exactly to halve the portion of reward, but it is close to that.