Nice question
This kind of problem is closely related to having vaults of differing sizes etc.: we lose quorum. When we lose quorum then (as with any decentralised system) the system fails.
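To make that concrete, here is a minimal sketch (not MaidSafe's implementation; the group size and threshold are hypothetical numbers) of why losing too many group members halts decisions:

```rust
// Hypothetical values for illustration only.
const GROUP_SIZE: usize = 8; // members responsible for a piece of data
const QUORUM: usize = 5;     // votes needed for a group decision

/// A group decision only stands while enough members survive to vote.
fn has_quorum(surviving_members: usize) -> bool {
    surviving_members >= QUORUM
}

fn main() {
    // Losing a few nodes is fine...
    assert!(has_quorum(6));
    // ...but once survivors drop below the threshold, the group can no
    // longer agree on anything, and classically the system fails.
    assert!(!has_quorum(4));
    println!("quorum lost below {} of {} members", QUORUM, GROUP_SIZE);
}
```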
So I think we have a really good approach here in this community: Data Chains and XOR consensus, two pretty powerful things. This essentially takes the problem of going below quorum and reduces it to: the system survives as long as at least one member of a group survives [echoes of Robert the Bruce]. This is true in a great many cases.
So in SAFE we have a Kademlia-type network, but without a refresh time (to agree data etc.); agreement is continuous in SAFE (a big difference from normal KAD, and a hugely important one).
Then we have consensus based on XOR distance from network addresses, so no leader-election paradigms like Raft/Paxos etc. Again different: our leader is mathematically chosen.
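A sketch of what "mathematically chosen" means (illustrative only: real SAFE addresses are 256-bit XorNames, not the small integers used here). The node XOR-closest to a data address is the natural authority for it, and every honest node computes the same answer with no election rounds:

```rust
// XOR distance: the Kademlia metric. Smaller result = "closer" address.
fn xor_distance(a: u64, b: u64) -> u64 {
    a ^ b
}

/// The "leader" for a piece of data is simply the node whose address is
/// XOR-closest to the data's address; deterministic, no voting needed.
fn leader_for(data_addr: u64, nodes: &[u64]) -> u64 {
    *nodes
        .iter()
        .min_by_key(|&&n| xor_distance(n, data_addr))
        .expect("non-empty node list")
}

fn main() {
    let nodes = [0b0001, 0b0110, 0b1011, 0b1100];
    // Data at 0b0111 is XOR-closest to node 0b0110 (distance 1),
    // so every node independently agrees on the same leader.
    assert_eq!(leader_for(0b0111, &nodes), 0b0110);
}
```

Note the metric is unidirectional and symmetric, which is why all nodes converge on one answer without exchanging votes.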
Then we have data chains, where we can prove network data; not only that, we have a record of data that should exist even if we do not hold it (important in segmentation/partitions etc.). So we cannot update this data in a broken network if the block already exists, regardless of whether we can reach the data (and remember XOR partitions are not linear, they are non-Euclidean distances, so a split happens across groups, not between groups).
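A toy sketch of that idea (this is not the actual Data Chains RFC format): a chain entry records the identity of data the group agreed exists, not the data itself, so a section can refuse mutations during a partition even when it cannot reach the data:

```rust
// Toy block: records that data with this name/hash was agreed to exist.
struct Block {
    data_name: u64, // XOR address of the data
    data_hash: u64, // hash of the content the group signed off on
}

/// In a partitioned ("broken") network we refuse to mutate anything the
/// chain already records: the block proves the data should exist, even
/// if this section of the network cannot currently reach it.
fn update_allowed(chain: &[Block], data_name: u64) -> bool {
    !chain.iter().any(|b| b.data_name == data_name)
}

fn main() {
    let chain = vec![Block { data_name: 42, data_hash: 0xdead_beef }];
    assert!(!update_allowed(&chain, 42)); // block exists: no update while split
    assert!(update_allowed(&chain, 7));   // nothing recorded: nothing to protect
    println!("block for {} pins hash {:#x}", chain[0].data_name, chain[0].data_hash);
}
```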
So taking all this into account, I think we are probably well on the way to something that challenges long-held beliefs, but I suppose we know that: as a community we are innovating, not mimicking.
Given the above, having huge numbers of nodes for security/cleanup/removing the burden of consensus decisions all becomes much more real and available. It increases availability, consistency and more.
As we iterate faster, the core algorithms will emerge from the complexity that exists today. Then we can mathematically model much of this (remember, it's all probability-based decision trees) and improve it even further to remove all magic numbers. I think, therefore, that perhaps some "well known" or perceived thought experiments can bear fruit, because if the current RFCs are in place then perhaps even losing a large % of the network at once may not do as much harm as we think.
The good thing is we can try this by killing 80% of a testnet we host (like Alpha) and measuring it (we already do this kind of thing in a less formal way). When the RFCs are in place these results will be very interesting.