Thought experiments by those who do complicated thinking seem to indicate it should theoretically be just fine.
Our job will be to prove them right or wrong.
It intuitively seems like it should be sufficient, as long as the mechanism to replicate when one host goes offline is quick & robust. The chances of 5 effectively random nodes all disappearing in quick succession seem incredibly small, assuming a decent-sized network.
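Just to put a rough number on that intuition: here's a back-of-envelope sketch (my own toy model, not the network's actual failure model), assuming each of the 5 replica-holding nodes independently goes offline within one replication window with some probability `p`. The function name and `p` values are illustrative, not from the source.

```python
REPLICAS = 5  # replica count from the discussion above

def all_replicas_lost(p: float, replicas: int = REPLICAS) -> float:
    """Probability that every replica disappears in the same window,
    assuming independent node failures with per-node probability p."""
    return p ** replicas

# Even with a pessimistic 1-in-10 chance of any given node dropping
# within a window, losing all five before re-replication kicks in
# comes out around 1 in 100,000.
print(all_replicas_lost(0.1))
```

Independence is doing a lot of work here (correlated outages like a datacentre or ISP failure would be worse), but with effectively random node placement it supports the "incredibly small" reading.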
Haha, the missing context here is that we retry afterwards. Basically it looks like the odds of a retry being successful in this verification step are slim (normally the price has changed), so let's carry on with the rest and then retry/repay in a later step.
It’s actually good to have a pause to run some OS and router updates, for example. Sort out some test data as well. And maybe do a bit of gaming that I’ve been neglecting!
The Year of the Testnets was pulled over by the development Cops and fined for exceeding the testnet speed limits. Once the required number of PRs have been paid, testnets will be allowed to continue as long as the fast lane is used in future.
Fixing some mem issues we see in larger testnets w/ the royalties (progress coming along nicely there). If we get that sorted before the PUT simplification, that’ll be the one.
Otherwise we’ll see some testing of the PUT simplifications once that’s made it into main.
ORrrrrr if folk are really bored we could try out the latest patched libp2p and see how mem leaks are looking. That’s not that exciting from my POV and we can passively assess that via the other tests.
I vote against. Boredom is a problem, but exhaustion is too. I’d rather see us hungry until the more useful test cases are ready, and then go at it with vigour again.