What about a catastrophic event that wipes out millions of nodes?

I’m not entirely French but I also vote for replication. I’ve never really liked dealing with anything other than RAID 0 or 1, although RAID 1 doesn’t really offer much data security, since it mirrors your troubles just as fast. :sweat_smile: One of the things that attracted me to SAFE was what I perceive as its ability to do away with all the LUKS, RAID, LVM, snapshot, rsync-hardlink, ZFS nonsense.

The point where storage capacity and related technology increase faster than the generation rate of human information is an interesting scenario. Until we get to this holonomic inflection point where each node has the capacity to store all information contained in the whole network, I’ve often wondered how difficult it would be to offer the user manual control over the number of non-sacrificial replications per chunk. (Does the network still deal in sacrificial vs. non-sacrificial copies?) IMHO the only things blockchains do well are immutability and redundancy. SAFE wins on immutable generic data support, so it would be interesting to look at the cost/benefit of allowing individuals to specify the redundancy setting either on a per-file or per-account basis… potentially allowing them to store a file across all existing network vaults to achieve the maximum redundancy. (Ex. A 1MB file per vault for ~10Billion vaults for a cost of ? $ per chunk = ? $)

I see why you might not want this functionality in the network layer, and understand the design decisions behind fixing the count globally, so it makes for the possibility of an interesting app or utility that would automate the more manual approach to increasing the non-sacrificial count (i.e. pre-encrypting the file in multiple different ways or saving the same data in other formats). From a psychological perspective it would be reassuring to know that for a little extra safecoin one can ensure that 64 or 128 non-sacrificial copies of a birth certificate or deed would always be maintained, even if the algorithm tests show that one doesn’t really “need” more than the network’s default 4 or 8 non-sacrificials to ensure safeness. Humans will be human.
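The intuition that a handful of copies already goes a long way can be sketched numerically. This is only a back-of-the-envelope illustration: it assumes each replica of a chunk is lost independently, and the per-replica loss probability `q = 0.01` is a made-up number, not a measured network figure:

```python
# Probability that every one of k independent replicas of a chunk is
# lost over some period, given a per-replica loss probability q.
# q = 0.01 is an illustrative assumption, not a real network statistic.

def p_chunk_lost(q: float, k: int) -> float:
    """All k independent copies of a chunk lost in the same period."""
    return q ** k

if __name__ == "__main__":
    q = 0.01
    for k in (4, 8, 64, 128):
        print(f"{k:3d} replicas -> chunk loss probability {p_chunk_lost(q, k):.3e}")
```

Even at a default of 4 or 8 copies the odds of losing a chunk are already vanishingly small under these assumptions; 64 or 128 copies buy reassurance more than measurable extra safety.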


I get your point that having it in the app layer would lead to “needless” duplication that reduces network efficiency, but this will only be the case if people use such an app to re-upload a lot of common public data… and I’m not sure there is an incentive for that. Private unique data is unique by definition, so not really a factor. To a certain degree, isn’t manual duplication of public data via an app-level utility essentially a free-market vote on the data’s relative importance? If you do try an implementation in the network layer, it seems like it would lead to a variable redundancy setting for each chunk… which may allow for some interesting optimizations related to upload popularity and caching… but probably not feasible from a simplicity standpoint or worth the bother… maybe it is… don’t be afraid to brainstorm. I’d say brainstorming always saves dev time, as long as we give reasons why someone should not waste time on something. :grin:


My point isn’t about it being in the app layer, just that it should be somewhere and on by default.

Many file uploads will be duplicates, such as backups of an operating system. Users won’t want to sort their files into public and private; they’ll just upload everything. I think most uploads will be unique, but why not save users a bit of money by letting them upload their OS for free? :)

This has been a goal of the network from the start. Not to let them upload as such, but to have immutable copies of OSes that are security audited and updated properly.

Taking it further though, microkernel OSes where on login you “boot” into a secured OS from any device are interesting. There is a lot to it, but a secured, decentralised, non-owned network that has immutability built in is a great way to skip any need for virus checks etc., at least for OS-related files. So we can squash the attack vector a bit and remove a swathe of attacks on people. I won’t go into it all again, but there is a stonkingly good project that has many of the bits already in place when we launch. Secure boot against data that is received and hash-checked is great. It’s not the full story by any means (the kernel you boot from can be attacked so it reads bad hashes as OK, and all that), but it’s a great start. In any case we provide a new mechanism that is well worth investigating here.


Yes, but we could define a target. Something like: “We accept a 50% chance that one of a trillion 1 GB files (that is, any one of their 1024 chunks each) is lost over 250 years.” The target itself is not as important as being able to reason about the probability of data loss under different circumstances.
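Working backwards from such a target gives the per-chunk reliability the network would have to deliver. A quick sketch, assuming chunk losses are independent (which is optimistic, since correlated failures are exactly the catastrophic case discussed above):

```python
import math

# Target: P(at least one chunk lost over 250 years) = 0.5,
# for 10**12 files of 1024 chunks (1 MB each) apiece.
files = 10**12
chunks_per_file = 1024
n = files * chunks_per_file

# (1 - p)**n = 0.5  =>  p = 1 - 0.5**(1/n) ≈ ln(2)/n
p_250y = -math.expm1(math.log(0.5) / n)
p_per_year = p_250y / 250

print(f"allowed per-chunk loss probability over 250 years: {p_250y:.3e}")
print(f"allowed per-chunk loss probability per year:       {p_per_year:.3e}")
```

So the stated target translates to a per-chunk loss probability of roughly 7e-16 over 250 years, a number one could then compare against replication counts and churn models.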

Could you at least name it? This got me very curious.


Sorry, I did not mean that one was in existence; I meant this is a fantastic project waiting to happen. Sorry for the confusion.


It’s as if SAFE is a viral hologram. It starts off as a viral crystal. It’s an Indra’s Net.

There might be an easy way to do it following the same procedures used for diskless network booting. Basic description here:

You could use rolling hashes to split the chunks (IPFS does that). Say you add a byte to the front of a file: currently all subsequent chunks would also change, but with rolling hashes just one chunk would change => more deduplication.
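A minimal sketch of that idea (content-defined chunking). The window size, mask, and hash constants below are illustrative choices, not IPFS’s actual parameters; the point is only that chunk boundaries depend on a small sliding window of content, so after an insertion the boundaries resynchronise:

```python
# Content-defined chunking with a polynomial rolling hash.
# WINDOW, MASK, BASE and MOD are illustrative, not from any real chunker.

WINDOW = 16           # bytes in the rolling window (also the minimum chunk size)
MASK = (1 << 10) - 1  # boundary when the low 10 bits are zero -> ~1 KiB average chunks
BASE = 257
MOD = (1 << 31) - 1

def chunk(data: bytes) -> list[bytes]:
    """Split data at positions where the hash of the last WINDOW bytes hits the mask."""
    chunks, start, h = [], 0, 0
    pow_out = pow(BASE, WINDOW - 1, MOD)  # factor to drop the outgoing byte
    for i, b in enumerate(data):
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * pow_out) % MOD  # remove byte leaving the window
        h = (h * BASE + b) % MOD                        # add the incoming byte
        if i + 1 - start >= WINDOW and (h & MASK) == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because a boundary depends only on the last WINDOW bytes, chunking `data` and `b"X" + data` produces chunk lists that differ in at most the leading chunk(s); a store that deduplicates by chunk hash would keep just the new leading chunk rather than re-storing the whole file.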


This does sound like a promising idea.

But I mean more along the lines of preserving files from existing OSes, such as the millions of Windows 7 installs that people keep around on their disks. OSes are just an example; I’m sure there are many cases where users might want to upload content that is mostly duplicated.

Wikipedia was mentioned …

Did come across a solution somebody devised some time ago:


At Wikipedia itself they have a page “Terminal Event Management Policy”:

Wikipedia - Committal of articles to non-electronic media

Following the implementation of the level 2 warning, editors are expected to commence the transfer of the encyclopedia to other media. As an immediate measure, it is suggested that editors print as many articles as possible, with due regard to any personal safety concerns that may be faced in these extraordinary events.

In order to assist subsequent collation initiatives, it is recommended that editors utilize paper sizes most commonly used in their localities – Letter in the United States, Canada and Mexico or A4 in most other jurisdictions.


Attention should be paid to the manner of storage of articles once printed. Over the medium-term, copies of articles can be stored in suitable air-tight containers in a temperature controlled environment.


You forgot the banner:

No, I did not :sunglasses:

For a short while I considered starting a novelty business where popular books, scientific papers, patents, poems, personal writings or artwork could be laser etched or CNC cut into semi-flexible sheets of metal, ceramic and/or stone laminates and arranged into a ruggedized book. The idea was to “let your ideas last forever”. Inspired by the Sumerians.

I thought it might be nice as a large public work to also print out a rolling release of Wikipedia once every few decades in this format (expensive, yes; ~1M pages). The other option was some kind of laser etched tempered/Gorilla Glass sheet that you could use with a projector to cast the image onto a surface (a fancy slide projector). For more durability I thought that a combination of polarization and/or lenslets might offer multiple layers of viewable information, so a book could be etched into a single polycarbonate, ceramic, or tempered glass tile, so to speak. And then there was a modern Stonehenge where all of Wikipedia (small print) could be engraved on the surface of the stones (yes, you would need a lot of big stones).

I was told by friends and family there was no market, that all of this was just too impractical when people have the internet as a reference, and that no one cares about things lasting that long. I still think it would be a fun research project to try. I suppose it is more practical to store binary at the micro or nanoscale, but it is nice to have the option of secure access to information that is ultra durable and doesn’t require the use of a computer other than the one in your head and a few simple hand tools. Maybe only a civilization’s most “important” data could be stored this way, along with the information that would give someone enough knowledge to build a computer able to unlock the rest of the knowledge stored on fancy M-Discs or super quantum SSDs found lying around gathering dust or rust.
After SAFE launches, maybe there will be more of a market for that kind of thing, or at least more interest from people in attempting similar sub-projects. A lot of other things to do first I suppose.


haha I thought that was a Passport with extra travel pages :laughing:

If you’re passionate about this, check out: https://archmission.org/

They have similar ideas: storing data on the same medium in different ways, accessible at different technological levels. Readable with a magnifying glass, with a microscope, with a laser, and so on. Each layer can contain more information.

They are looking for contributors and more.


Very interesting, thanks for sending this my way… they weren’t around circa 2006 when I was toying with some of these ideas. Their 5D optical quartz is a bit more sophisticated than what I had in mind, but rather applicable to SAFE. I still think there is some value in non-binary data preservation though…


I found them through Falcon Heavy :slight_smile:

Just saw your edit, very interesting.

From their FAQ:

Our approach in designing the Arch Libraries is to include different layers of information, for different audiences, with different technological capabilities, where each layer teaches what is necessary to access the knowledge on the next, higher-resolution layer of data. The very first layer of data has to be visible to the naked eye. The next layer, to an optical microscope, then next to a recipient with a laser and a computer, etc.
