Poll: What is the role of optical storage media and how can it possibly compete on the SAFE Network?

Write-once optical storage media offer certain advantages for low-power, long-term, durable storage of data that is immune to electromagnetic interference (though perhaps not the drives that read and write the media). Examples include Blu-ray, M-DISC, Sony’s Archival Disc, and the 5D quartz storage used by the Arch Mission Foundation. The challenges with these technologies include higher cost, higher latency, slower read and write speeds, and (currently) lower data density.

Given the competitive nature of chunk retrieval, is there any way for this technology to compete with traditional HDD and SSD technology?

Should anything be done to encourage optical storage technology by incentivising its use via the SAFE farming algorithm?

Edit: Food for thought…



There was a good post by @mav a few weeks back about data storage stats. One interesting fact was that most data is stored, immediately read, then only accessed infrequently afterwards. For this sort of data, robust write-once storage options seem almost ideal.

I suspect an ideal setup would keep hot copies of new data on solid state before migrating them to long-term cold storage. Essentially, a cache in front of these longer-term, cheaper storage media. The cache can then be freed up for new data, which can expect some subsequent read access.
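To make the idea concrete, here is a minimal sketch of that tiered model: an LRU hot cache (think SSD) in front of a cold store (think write-once optical media), with eviction migrating chunks down and reads "thawing" them back up. All names and the eviction policy are illustrative assumptions, not anything from the SAFE codebase.

```python
from collections import OrderedDict

class TieredChunkStore:
    """Hypothetical sketch of hot-cache-over-cold-storage for chunks."""

    def __init__(self, hot_capacity: int):
        self.hot = OrderedDict()   # chunk_id -> data, in LRU order (SSD stand-in)
        self.cold = {}             # stand-in for write-once optical media
        self.hot_capacity = hot_capacity

    def put(self, chunk_id, data):
        # New chunks land hot first; most reads happen soon after the write.
        self.hot[chunk_id] = data
        self.hot.move_to_end(chunk_id)
        while len(self.hot) > self.hot_capacity:
            evicted_id, evicted = self.hot.popitem(last=False)
            self.cold[evicted_id] = evicted   # migrate coldest chunk down

    def get(self, chunk_id):
        if chunk_id in self.hot:
            self.hot.move_to_end(chunk_id)    # refresh LRU position
            return self.hot[chunk_id]
        data = self.cold[chunk_id]            # slow fetch from cold storage
        self.put(chunk_id, data)              # thaw: promote back to hot
        return data
```

A chunk fetched from cold storage stays on the optical media too (it is write-once after all); only the hot copy is managed.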

Given that we have appendable data types, the SAFE Network seems to fit this model well. Subsequent changes would be stored on hot, fast-access media, with related data being pulled from cold storage to hot storage too.

I suspect the economics of such an architecture would encourage its adoption, as storing rarely accessed data on expensive, fast media would seem uneconomical.

@mav link here: Object storage prior art and lit review


That’s how it would seem, but I’m not so sure it’s that simple unless write-once optical media can surpass the storage density of HDDs by one to three orders of magnitude. For optical discs to be economical, you need them stored cold on a shelf, not 1:1 in an optical drive. Typical latency for a robotic optical archive fetch could be about 30 seconds to a minute (Edit: this product claims 2s to 6s). The infrastructure for that is rather expensive. Competing with 8 other farmers for the same chunk means you always lose if just one of them has the chunk tucked away somewhere on a big old HDD. Also, SSDs consume much less power than an optical drive in standby/idle, so the SAFE Network economics may favor a giant, fast 100TB SSD.

For the benefits of optical WORM storage to be taken advantage of and supported, I think the farming algorithm will need to be adjusted to reward more participants, rather than just the fastest to return a chunk. How long a chunk has been in storage should also be considered, to incentivise keeping old data around in perpetuity.


For a proper backup, you can’t really get around optical media. Virtual tape libraries (big boxes of disks pretending to be a tape drive) have been around since forever, but they’re complex and expensive and usually can’t really hold up to their promises.

Also, if you think tape is slow… tape drives are the only devices that can actually saturate a storage switch (Fibre Channel switches come with 16/32 Gbit ports nowadays).

I suspect otherwise. The network surely needs to verify periodically that people are still holding the data they are supposed to. It is too late if the infrequently accessed data is requested and the farmers supposedly responsible for it all reply, “gee, I didn’t think anybody was going to request that one again, so I decided it wasn’t worth keeping around”.


IMO, until we have holographic storage, SSDs are going to be king of the hill.

It depends on how slowly they respond. It may be that all copies of the requested data are in cold storage. Having one respond within 30 seconds may be acceptable for the data in question.

As the network will be autonomous, vault operators will explore all sorts of techniques to maximise efficiency. Indeed, the network’s long-term health and viability will depend on it.


They also have terrible seek times. Great for streaming entire volumes, but awful for retrieving random blocks of data.


Truly stale data may only exist on cool/cold storage, so they may be competing with other slow sources anyway.

For some old/stale data, availability may be far more important than actual raw performance. Obviously, it is ideal to have both, but thawed data will become warm again while it remains in immediate demand.

Every chunk stored could have block-number (equivalent) metadata attached to it. The farming algo could then give proportionally higher rewards for serving earlier block numbers.

Not sure that specific approach would be desired, though. Even with caching, it raises the incentive for fake GETs. It does seem necessary that keeping old chunks pays off (in a simple way), though.
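A toy version of the block-number idea might look like the following: scale the reward by how far back the chunk’s block number sits relative to the current one. The function name, the cap, and the linear ramp are all made-up assumptions for illustration.

```python
def reward_multiplier(chunk_block: int, current_block: int, max_mult: float = 5.0) -> float:
    """Hypothetical: earlier (older) block numbers earn a higher multiplier.

    A chunk from block 0 earns max_mult; a brand-new chunk earns 1.0,
    ramping linearly in between and never exceeding the cap.
    """
    age_fraction = (current_block - chunk_block) / max(current_block, 1)
    return 1.0 + (max_mult - 1.0) * age_fraction
```

As the caveat above notes, any scheme like this amplifies the payoff of fake GETs against old chunks, so it would need to be paired with request auditing or caching that starves repeat requests of rewards.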


So very true :smiley:


I don’t think SAFE can allow any truly cold data. All data will need to stay warm, since vaults need to be constantly probed to make sure they actually hold the data they are supposed to. This means bandwidth consumption is maxed out 24/7 and chunks are always flying. A transmitted chunk would either fulfil a client request or an audit request. Maybe MaidSafe has some cleverer ideas on how to do this well. I think it’s a pretty critical feature, required for the survival of the network.
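One common way such probing can work (to be clear, this is a generic technique, not necessarily MaidSafe’s design) is a salted hash challenge: the auditor sends a fresh random nonce, and the vault must hash the nonce together with the full chunk. Because the nonce is unpredictable, the vault can’t precompute answers and must actually hold the bytes. A minimal sketch:

```python
import hashlib
import os

def make_challenge() -> bytes:
    # Fresh random nonce per audit, so responses can't be cached or precomputed.
    return os.urandom(16)

def vault_response(chunk_data: bytes, nonce: bytes) -> str:
    # The vault proves possession by hashing nonce + the full chunk contents.
    return hashlib.sha256(nonce + chunk_data).hexdigest()

def audit(expected_chunk: bytes, nonce: bytes, response: str) -> bool:
    # Auditor (or any node holding a replica) recomputes and compares.
    return response == hashlib.sha256(nonce + expected_chunk).hexdigest()
```

Note that each audit only costs the nonce and a hash on the wire, not the whole chunk, so verification traffic can be much lighter than the “chunks always flying” picture above suggests.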