I use a dynamic script that starts nodes to use all allocated resources. If there is enough CPU, RAM, bandwidth and disk space, it starts more nodes; when any resource runs low, it does the reverse.
I made a less sophisticated script for starting nodes (e.g. checking the load average and waiting for it to go down).
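A minimal sketch of that simpler approach, in Python. The `antnode start` command is a placeholder for however you launch one more node, and the thresholds are illustrative, not anything the network prescribes:

```python
import os
import subprocess
import time

MAX_LOAD_PER_CORE = 0.4   # illustrative: aim for roughly 40% average utilisation
CHECK_INTERVAL = 60       # seconds between checks

def load_is_low() -> bool:
    """Return True when the 5-minute load average per core is under the cap."""
    _, load5, _ = os.getloadavg()
    return load5 / os.cpu_count() < MAX_LOAD_PER_CORE

def run() -> None:
    while True:
        if load_is_low():
            # Placeholder command: however you start one more node on your box.
            subprocess.run(["antnode", "start"], check=False)
        # Otherwise just wait for the load average to come back down.
        time.sleep(CHECK_INTERVAL)
```

The key property is that it only ever adds a node when the box is demonstrably idle, and simply waits otherwise.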
I get the impression that there isn’t much penalty for taking nodes offline (permanently), even though perhaps there ought to be. One thing that concerns me about Autonomi is what happened during the testnet: the potential for another chain reaction, where overprovisioned nodes cause others to pick up the slack (and contribute to the overload). I had a dedicated bare-metal server with many dozens of nodes that crashed during those stampedes; CPU was the bottleneck (I had aimed for around 40% average CPU usage).
For my home computer, disk space and CPU aren’t the bottleneck; the network is. I’d like to keep my upload bandwidth utilization at around 1/3 of capacity. But for now the number of nodes out there (over 2 million now) is growing much faster than the amount of uploaded data, so people who aren’t keeping up with provisioning more nodes are earning fewer rewards by the day.
Maybe that’s because there is no data? You can’t be punished for not delivering data if there is nothing to deliver in the first place.
EDIT: But I also noticed this during the testnet. I could switch my nodes off for more than 12 hours, and when I switched them back on, they just continued like nothing had happened. No shunning or anything.
I don’t think that’s such a bad thing if it happens now. But it should not happen once the network is considered stable.
This is one of the key reasons my node runner script stops nodes when resources are low. It uses the principle of last in, first out.
In short, my boxes can provide maximum available resources at any time, but not more than that. As we can’t predict network loads etc, this seems a decent strategy.
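A sketch of that last-in-first-out idea, with hypothetical thresholds and an injectable low-resource check so the stop logic is easy to exercise; none of these names come from any real node-manager API:

```python
import os
import shutil

# Illustrative thresholds: stop nodes when any resource runs low.
MIN_FREE_DISK_GB = 10
MAX_LOAD_PER_CORE = 0.8

started_nodes: list[str] = []   # node IDs, appended in start order

def resources_low(path: str = "/") -> bool:
    """True when free disk or CPU load crosses a threshold."""
    free_gb = shutil.disk_usage(path).free / 1e9
    load = os.getloadavg()[1] / os.cpu_count()
    return free_gb < MIN_FREE_DISK_GB or load > MAX_LOAD_PER_CORE

def maybe_stop_one(stop_fn, low_check=resources_low) -> None:
    """Last in, first out: stop the most recently started node."""
    if started_nodes and low_check():
        stop_fn(started_nodes.pop())   # pop() takes the newest node
```

Because the newest node holds the least data, stopping it first should cause the least churn for its close group.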
This is used in energy supply all the time; it’s called (energy) peak shaving, with what they call spinning reserve contracts to bring ‘peaker’ plants online in a timely fashion. It’s what the Independent Power Producers do when selling Power Purchase Agreements, so this is really storage-demand peak shaving.
The rhetorical question is
“how does one accurately determine when the Autonomi Network’s available resources are low in aggregate, in a way that also takes into account the close-group resource levels and the local system resource levels?”
The likely answer is: “a lot of observable state metrics collected in real time from multiple resource sources, processed against a set of rules that triggers the appropriate action, within some time-adjusted sampling window (epoch).”
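The sampled-metrics-plus-rules idea above could be sketched like this; the metric names, thresholds and actions are all made up for illustration:

```python
from collections import deque
from statistics import mean

WINDOW = 30  # samples kept per metric in the rolling window (the "epoch")

class MetricWindow:
    """Rolling window of samples for one observable metric."""
    def __init__(self, size: int = WINDOW):
        self.samples = deque(maxlen=size)

    def add(self, value: float) -> None:
        self.samples.append(value)

    def average(self) -> float:
        return mean(self.samples) if self.samples else 0.0

# Illustrative rule set: (metric name, threshold, action when exceeded).
RULES = [
    ("cpu", 0.8, "stop_node"),
    ("disk", 0.9, "stop_node"),
    ("bandwidth", 0.33, "hold"),
]

def evaluate(windows: dict[str, MetricWindow]) -> list[str]:
    """Return the actions whose metric's window average exceeds its threshold."""
    return [action for name, threshold, action in RULES
            if name in windows and windows[name].average() > threshold]
```

Averaging over a window rather than acting on single samples is what keeps the rules from overreacting to momentary spikes.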
This is something some work has been done on; one PR has already fixed a bad comparison operator. It is certainly not desired behaviour.
But while nodes restart with the same peerID, perhaps it’s expected. This is supposed to change so that a node gets a new peerID on restart, in a future update when they have time.
I doubt 50% of the network disappearing now would cause much trouble. There isn’t enough data stored to cause much churn.