What about a catastrophic event that wipes out millions of nodes?

Yes, this is true. Some of my posts above were based on misunderstanding/ignorance, and thinking that SAFE was one thing, when it is in fact something else. I’ve had some more time to educate myself since then, but I still support and am thankful for MaidSafe’s choice of GPL. I hope they stick to their guns. It just seems to me that SAFE as an autonomous network is most powerful/resilient as a communication tool when all nodes are running the same code set (i.e. KISS), and that GPL will make that most likely to occur. I know there are various perspectives…

Awesome.

SAFE@Home: A little random brainstorming

The idea of being able to set up 4 to 8 (or 32?) nodes in a private SAFE cloud on my home LAN, holding only redundant copies of my own data ready to be served at high speed (e.g. IP over InfiniBand) and low latency, is rather attractive. “Backing up” my redundant home network with the scale of the planetary network (a SAFE PLAN?) makes me feel even safer in case of a flood, fire, or lightning strike at my house. I’m not sure how well you could actually set up a local SAFE network like that now, though. I’ve read that security improvements have led to a procedure whereby a node’s data is discarded when it goes offline/churns, and refilled when it rejoins. Consider this:

I can see the robustness, security, and simplicity this offers to the network, but the same policy seems hard to reconcile with small-scale local SAFE networks, just my files on my LAN or your special files on your LAN and similar “micro cloud” use cases. Maybe it doesn’t need to accommodate those uses, which is fine, but doesn’t SAFE’s power also come from being scale-agnostic beyond the minimum size needed to maintain consensus? It’s also likely that the MaidSafe team has already accounted for these use cases and I just haven’t read enough to find it, so my apologies if this consideration seems foolish. Anyhow, none of it is an issue if you just plan to restart your freshly reconstructed LAN by pulling data down from the global/planetary SAFE network, essentially treating the local network as a high-performance but volatile cache (see the sketch below). It’s just that under this scenario you might be without local access to your data for an extended period if your internet connection were down, or your ISP or meshnet was having long-term connectivity problems.
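To make the “volatile cache” idea concrete, here’s a toy Rust sketch of the policy I have in mind. To be clear, every type and method name in it is invented for illustration; the real SAFE client APIs are different. The point is only the arrangement: the LAN cluster acts as a read-through/write-through cache, and the planetary network stays the source of truth.

```rust
// Toy sketch only: all names here are hypothetical, not real SAFE APIs.
use std::collections::HashMap;

type XorName = [u8; 32]; // SAFE-style content address (hash of the chunk)

// Stand-in for a chunk store; used for both the LAN cluster and the
// global network below.
#[derive(Default)]
struct ChunkStore {
    chunks: HashMap<XorName, Vec<u8>>,
}

impl ChunkStore {
    fn get(&self, name: &XorName) -> Option<Vec<u8>> {
        self.chunks.get(name).cloned()
    }
    fn put(&mut self, name: XorName, data: Vec<u8>) {
        self.chunks.insert(name, data);
    }
}

struct CachingClient {
    lan: ChunkStore,    // fast but volatile: lost in a flood/fire/lightning strike
    global: ChunkStore, // slower but durable: the planetary SAFE network
}

impl CachingClient {
    /// Read-through: serve from the LAN when we can, otherwise fetch from
    /// the planetary network and repopulate the local copy for next time.
    fn fetch(&mut self, name: &XorName) -> Option<Vec<u8>> {
        if let Some(data) = self.lan.get(name) {
            return Some(data);
        }
        let data = self.global.get(name)?;
        self.lan.put(*name, data.clone());
        Some(data)
    }

    /// Write-through: every put lands in both places, so losing the house
    /// only costs the low-latency copy, never the data itself.
    fn store(&mut self, name: XorName, data: Vec<u8>) {
        self.lan.put(name, data.clone());
        self.global.put(name, data);
    }
}

fn main() {
    let mut client = CachingClient {
        lan: ChunkStore::default(),
        global: ChunkStore::default(),
    };
    let name: XorName = [0u8; 32];
    client.store(name, b"cat video".to_vec());
    client.lan = ChunkStore::default(); // the house floods: LAN copies are gone
    // Read-through quietly repopulates the LAN from the planetary network.
    assert_eq!(client.fetch(&name).as_deref(), Some(&b"cat video"[..]));
}
```

The weakness is exactly the one noted above: after the flood, `fetch` only works while the internet connection does.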

In regards to the main question posed in this thread, I would say that the microcosm of the local SAFE@home or SAFE cache mentioned above, using a planetary/global SAFE network as backup so that data survives a localised flood/fire/lightning strike, is analogous to humanity’s use of a global SAFE network surviving a solar flare, EMP, etc. by way of backups in an interplanetary SAFE network on the Moon, Mars, Europa, or somewhere else “off-site”. Since that isn’t feasible in the near to mid term, at least until Elon meets his goals, an ideal solution to the problem of a global reset may be found by considering the local problem under the constraint that off-site backup is not an option. I’m not saying I have a solution, but rather that the microscale perspective may be a good way to attack the macroscopic issue. Mr. Irvine mentioned that archive nodes are likely the means by which the system could knit itself back together and survive a global reboot, and I don’t doubt him. However, it might also be good to build in other, more localised redundancies (which appear to have been included in earlier versions of the network design?) to ensure a backup plan for the backup plan. Perhaps that is exactly what he means by reintegrating the data from isolated peers via data chains (a toy sketch of which follows below). Fascinating stuff, and I’m eager to learn more.
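Just to pin down what “reintegrating isolated peers” might mean mechanically, here’s a deliberately oversimplified Rust sketch of a quorum check on a single chain link. The real data chains design (cryptographic signatures, section splits and merges, actual chained links) is far richer; the types, the `QUORUM` threshold, and the function name are all made up for illustration.

```rust
// Toy sketch only: a signer's node id stands in for its cryptographic
// signature, and one link stands in for a whole chain.
use std::collections::HashSet;

/// A (simplified) chain link: a record that members of a section vouched
/// for a chunk before the outage.
struct ChainLink {
    chunk_name: u64,        // stand-in for the chunk's XOR address
    vouchers: HashSet<u64>, // node ids that signed off on the chunk
}

/// Minimum number of pre-outage section members that must have vouched
/// for a chunk before the rebooting network re-accepts it from an
/// isolated peer (e.g. a majority of an 8-node section).
const QUORUM: usize = 5;

fn accept_after_reboot(link: &ChainLink, old_section: &HashSet<u64>) -> bool {
    // Only count vouchers that were genuine members of the old section;
    // an isolated peer can't forge a quorum it never had.
    link.vouchers.intersection(old_section).count() >= QUORUM
}

fn main() {
    let old_section: HashSet<u64> = (1..=8).collect();
    let link = ChainLink {
        chunk_name: 42,
        vouchers: HashSet::from([1, 2, 3, 4, 5]),
    };
    if accept_after_reboot(&link, &old_section) {
        println!("chunk {} re-accepted after reboot", link.chunk_name);
    }
}
```

If something like this is the mechanism, then even a handful of surviving home LANs could collectively prove their data was legitimate and help reseed the wider network, which is what makes the local and global problems feel like the same problem to me.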

TEOTWAWKI vs. Intel vs. Pop Culture

Also, I don’t think one needs to concoct dire edge-case scenarios to consider these issues. Perhaps some unknown CPU hardware exploit takes out 90% of all nodes, say those running on any Intel chip manufactured in the past 25 years (I know, ultra low probability, and just a spectre of the conspiracy theorists’ imagination…). Or consider things from a marketing/user perspective. Say some famous celebrity on Twitter asks, “What happens to my selfies and cat videos in the SAFE network if the entire internet and/or all nodes have no electricity or battery backup for a day?” (Though I’ll admit that an unscripted pop star discussing network battery backup requirements may be less likely than a global power outage.) Imagine MaidSafe marketing being able to respond with something like “Nothing.”, or, “As soon as you get your internet access back, the SAFE network and all your data will be there waiting for you.”, rather than, “In that scenario you have bigger problems to worry about.”, which may be the truest statement but is also the least cheerful and awe-inspiring. Most human endeavors can get away with that type of reasoning, but SAFE has set a higher standard for itself, has it not?

Solving less dire “dark minute” or “dark day” black swan scenarios would likely yield a solution to more severe but very-low-probability crises as well. That said, I don’t think any of us expect MaidSafe to be focussing on these edge cases at this point, nor would I want them to take their focus off the current tasks at hand. Theorising on topics like these is what they have us forum users for, right? :slight_smile:

p.s.

I found a few other old forum threads that are related to this discussion and might be what the OP was referring to:
