**Edited post for clarity, hopefully.**
**Edit:** This scenario is framed with the following in mind:
SAFEnet aims to rid the world of servers within 10 years, an ambitious goal for sure. If this goes according to plan, the SAFE network would be fast enough that we'd be running apps straight off the network over fibre. Wireless and mobile would be way better, battery life would be great, and every device with storage would have a vault, etc.
So it's on the way to that sort of environment that I'm placing this scenario. Currently there's a lot of hype around the Private/Public hybrid cloud (files)… but the next move (for conservative enterprises) might be some kind of Private/Public SAFE network setup… that's how this scenario is framed. The mindset of storing 'files' is gone… it's all chunks now.
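To make the "chunks, not files" mindset concrete, here's a minimal sketch (Python, and not SAFE's actual self_encryption code, which also obfuscates and encrypts each chunk): a file becomes an ordered data map of hashes plus a pile of content-addressed chunks, and only the data map knows how to put them back together.

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk (illustrative, not SAFE's real sizing)

def chunk_file(data: bytes):
    """Split raw bytes into content-addressed chunks.

    Returns (data_map, chunk_store): data_map is the ordered list of chunk
    hashes needed to rebuild the file, chunk_store maps each hash to its
    chunk. SAFE's self-encryption additionally encrypts every chunk before
    it leaves the machine; that step is omitted here.
    """
    data_map, chunk_store = [], {}
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        name = hashlib.sha256(chunk).hexdigest()  # chunk addressed by its hash
        data_map.append(name)
        chunk_store[name] = chunk
    return data_map, chunk_store

def rebuild(data_map, chunk_store) -> bytes:
    """Reassemble the original bytes from the data map."""
    return b"".join(chunk_store[name] for name in data_map)
```

The point is simply that what the network holds is anonymous-looking chunks; the data map is the only thing that ties them back to a "file".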
Larger organisations are rightfully wary of trusting their data to public cloud infrastructure, and so they use the same open-source software to build their own private clouds, sometimes going hybrid with a public cloud.
Fast forward, say, 5 years from now: the business has recognised the benefits of SAFE and run a test network on their client machines. They want to go 100% internal SAFE, but how do they now get redundancy?
Running a private SAFE network over one large site provides no redundancy if that site is wiped out. With several sites it wouldn't be a problem.
Conversely, if you ran a single-site business entirely on the public SAFE network and the communications link goes down, you're relying on the local vaults having all the chunks.
I'm wondering: could the public SAFE network provide redundancy for their private SAFE network (in a single-site scenario), similar to how hybrid private/public cloud works now?
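Purely as a thought experiment (none of this API exists in SAFE today), the hybrid idea could look something like a chunk store that writes each already-encrypted chunk to the private network and mirrors it to the public one, falling back to the public copy if the site is lost. The `private_put`/`public_put`/`private_get`/`public_get` callables below are hypothetical stand-ins for whatever interface a private/public SAFE bridge would actually expose.

```python
class HybridChunkStore:
    """Hypothetical sketch of private-primary / public-mirror chunk storage.

    Chunks are assumed to be encrypted before they reach this layer, so
    mirroring them to the public network adds off-site redundancy without
    extra exposure. This wrapper and its callables are illustrative only.
    """

    def __init__(self, private_put, public_put, private_get, public_get):
        self._private_put = private_put
        self._public_put = public_put
        self._private_get = private_get
        self._public_get = public_get

    def put(self, name: str, chunk: bytes) -> None:
        self._private_put(name, chunk)   # primary copy on the private network
        self._public_put(name, chunk)    # mirror to public SAFE as disaster recovery

    def get(self, name: str) -> bytes:
        try:
            return self._private_get(name)   # fast path: on-site private vaults
        except KeyError:
            return self._public_get(name)    # site lost or comms down: public copy
```

In that picture, if the single site is wiped out, the data maps plus the public mirror would be enough to rebuild everything elsewhere, which is essentially what hybrid private/public cloud backup does for files today.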