We’re looking at a data loss issue right now which would explain the login failures, and there seems to have been a handful of vaults with connection problems. These caused high levels of churn in one section of the network, and data couldn’t be relocated during this phase.
It would be really helpful if anyone who has logs for nodes e3d8ea, e3a00a, e3207d or e29327 could share a copy. The logfile(s) will be in the same folder as the safe_vault executable and will be named “Node-<timestamp>.log”, where <timestamp> is a ten-digit number.
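For anyone unsure which files to grab, here’s a rough sketch of how you could pick out matching logs. The directory path is an assumption (adjust it to wherever your safe_vault executable lives), and the filename pattern is just the one described above:

```python
import re
from pathlib import Path

# Matches "Node-<ten-digit timestamp>.log", per the naming scheme above.
LOG_PATTERN = re.compile(r"^Node-\d{10}\.log$")

def find_vault_logs(vault_dir: str) -> list[str]:
    """Return the names of vault log files in vault_dir.

    vault_dir is assumed to be the folder containing the safe_vault
    executable; change it to match your own install.
    """
    return sorted(
        p.name for p in Path(vault_dir).iterdir()
        if p.is_file() and LOG_PATTERN.match(p.name)
    )

# Example: find_vault_logs(".") lists files like "Node-1478563200.log"
# while ignoring anything that doesn't fit the pattern.
```

This is only a convenience sketch; eyeballing the folder works just as well.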
A small network will naturally be prone to all sorts of edge cases. If someone starts and then stops a few nodes, that’s a large fraction of the network being disturbed, which doesn’t help when other nodes are busy turning over data and handling requests. One node can only do so much; many nodes spread out all kinds of stresses. It’s a wonder it’s stable with what I guess is only a couple of hundred nodes.
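To put some rough numbers on why a few nodes matter so much in a small network: the same handful of restarts is a big fraction of a couple of hundred nodes but negligible at scale. The figures below are illustrative only (the network size is my guess from above, not a measured value):

```python
def churn_fraction(nodes_cycled: int, network_size: int) -> float:
    """Fraction of the network disturbed when nodes_cycled nodes leave or join."""
    return nodes_cycled / network_size

# Ten nodes cycling in a ~200-node test network disturbs 5% of it...
small = churn_fraction(10, 200)    # 0.05

# ...while the same ten nodes in a 100,000-node network is 0.01%.
large = churn_fraction(10, 100_000)  # 0.0001

print(f"small network: {small:.2%}, large network: {large:.2%}")
```

So the stress a few start/stop cycles puts on a small test network really is hundreds of times what the same behaviour would cost a mature one.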
MaidSafe does churn tests specifically to catch these issues. Churn is something that’s expected and designed for. I think it’s more likely down to some change they made between 15 and 16.