Well, using 64GB in storage capacity calculations is a double-edged sword.
While technically true, in practice if one node is at 50GB then there will almost certainly be one or more outlier nodes at 64GB (i.e. totally full), and a network that is failing in some regards. Not failed, but failing, although recoverable.
The 32GB figure arose in the times prior to ERC20, when on average half the chunks were tiny, holding only the chunk store transaction. So while 16K records at a max of 4MB each could theoretically reach 64GB, in practice that averaged out to about 32GB.
That was considered the average size a node could get to.
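To make the arithmetic explicit, here's the back-of-envelope calculation in Python. The 16K record count and 4MB max record size come from the paragraph above; the halving is the "half the chunks were tiny" assumption:

```python
# Back-of-envelope arithmetic behind the 32GB figure.
MAX_RECORDS = 16 * 1024          # 16K records per node
MAX_RECORD_SIZE_GB = 4 / 1024    # 4MB expressed in GB

raw_max = MAX_RECORDS * MAX_RECORD_SIZE_GB  # 64GB if every record were full size
effective = raw_max / 2                     # ~half the records were tiny, so ~32GB in practice

print(f"theoretical max: {raw_max:.0f}GB, practical average: {effective:.0f}GB")
# theoretical max: 64GB, practical average: 32GB
```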
Now marry that with the charging algorithm from long ago, which charged near the maximum amount at 60% of records stored. I see no reason why that wasn't baked into the smart contract too, with an adjustment made to factor in the market price. Basically, if the market price of the token stays the same, the price will be near maximum at around the 60% mark.
This means that at 32GB, the desired size of a node, the price will be about half way to max. A sweet zone. Go much higher and the price jumps, and in theory people will be incentivised to add more nodes, or to start running nodes.
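The exact charging curve isn't spelled out here, but a sigmoid-style curve matches that description: about half of max at 50% full, near max at 60%. A minimal sketch, where `MIDPOINT`, `STEEPNESS` and `market_adjust` are hypothetical illustration parameters, not the real contract values:

```python
import math

def store_price(fill_fraction: float, max_price: float, market_adjust: float = 1.0) -> float:
    """Hypothetical sigmoid pricing: ~half of max at 50% full, ~95% of max at 60% full.

    fill_fraction  -- records stored / max records (0.0 .. 1.0)
    max_price      -- ceiling price in token units (placeholder value)
    market_adjust  -- multiplier to factor in the market price of the token
    """
    MIDPOINT = 0.5    # fill level where the price is half of max
    STEEPNESS = 30.0  # chosen so the curve is ~95% of max at 60% full
    return max_price * market_adjust / (1.0 + math.exp(-STEEPNESS * (fill_fraction - MIDPOINT)))

for fill in (0.3, 0.5, 0.6, 0.8):
    print(f"{fill:.0%} full -> {store_price(fill, max_price=1.0):.3f} of max")
# 30% full -> 0.002 of max
# 50% full -> 0.500 of max
# 60% full -> 0.953 of max
# 80% full -> 1.000 of max
```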
Thus, for better honesty on this one point, 32GB of storage is a better figure for the size of a node than 64GB.
There is a risk that people run nodes assuming 32GB and end up using 95% of their disk. Yes, but hopefully they and the OS will be warning them long before they get to 95%. Maybe launchpad should be keeping an eye on that too.
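For what it's worth, the kind of check launchpad could run is only a few lines, e.g. using Python's standard `shutil.disk_usage`. The 80% threshold here is just the "keep 20% free" rule of thumb mentioned below:

```python
import shutil

def check_disk_headroom(path: str = "/", warn_at: float = 0.80) -> None:
    """Warn when disk usage crosses the 'keep 20% free for the FS and OS' threshold.

    A sketch of the kind of periodic check launchpad could run.
    """
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction >= warn_at:
        print(f"WARNING: disk {used_fraction:.0%} full -- consider stopping some nodes")
    else:
        print(f"OK: disk {used_fraction:.0%} full")

check_disk_headroom()
```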
@Toivo can I suggest you start a new topic for an anonymous poll on this. Maybe ranges like: less than 5GB, 5 to less than 10GB, 10 to less than 15GB, and so on.
Make sure you tell people not to include the 20% of the disk they must always keep free for the FS and OS to utilise.
i.e. (disk size * 0.8 - used space) / number of nodes.

E.g. a 2TB drive with 200GB already used and 100 nodes:

space per node = ((2000 * 0.8) - 200) / 100 = 14GB per node allowed for
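Or as a quick Python helper, using the same worked example:

```python
def space_per_node_gb(disk_gb: float, used_gb: float, node_count: int) -> float:
    """Usable space per node, keeping 20% of the disk free for the FS and OS."""
    return (disk_gb * 0.8 - used_gb) / node_count

# The worked example above: 2TB drive, 200GB already used, 100 nodes.
print(space_per_node_gb(2000, 200, 100))  # 14.0 GB per node
```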