SAFE Storage economics - one-time fee, forever service

Ultimately, storage space is orders of magnitude cheaper than bandwidth. The initial storage cost is likely to be insignificant compared to the cost of serving the data. Therefore unpopular data will place very little burden on the network.

Popular data will be served frequently, but farmers are rewarded for this and caching will help here too.

Ultimately, the average PUT cost needs to cover the average cost of serving the data. We will have to see how the dynamics of this play out, but there is an attractiveness to a one-off fee, as long as it isn’t prohibitively high.
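A quick sketch of why a one-off fee can, in principle, cover perpetual serving: if the yearly cost of serving a chunk falls at a steady rate, the lifetime cost is a convergent geometric series. The numbers below are purely illustrative, not actual SAFE parameters.

```python
# Sketch: can a one-off PUT fee cover serving a chunk "forever"?
# Illustrative numbers only, not actual SAFE parameters.

def lifetime_serving_cost(first_year_cost, annual_decline):
    """Total cost of serving a chunk forever, assuming the yearly cost
    falls by `annual_decline` (0.3 = 30% cheaper each year).
    Geometric series: c + c*(1-d) + c*(1-d)^2 + ... = c / d."""
    if annual_decline <= 0:
        return float("inf")  # costs never fall: no finite fee covers eternity
    return first_year_cost / annual_decline

# Serving costs $0.001 in year one, dropping 30%/year: lifetime cost converges.
print(lifetime_serving_cost(0.001, 0.3))  # ~0.0033
# No cost decline: the lifetime cost diverges and a one-off fee cannot work.
print(lifetime_serving_cost(0.001, 0.0))  # inf
```

The whole debate later in this thread hinges on whether that decline rate holds up over the long term.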

I also suspect that services will pop up which charge a rental fee instead, should the market demand them. These services will hedge the fixed storage costs and charge a monthly fee instead. With multiple signature keys, this could be done securely too.

Edit: RE rental, remember that data centres are stocked with fixed hardware too. They balance fixed costs against monthly requirements constantly.

2 Likes

If we get in fundamental trouble with the economic algorithms, the solution is not a rental model, but a pay-per-GET model (Yes, I know this’d be bad for adoption). The network is very efficient in data storage, but quite inefficient in terms of bandwidth (due to extra hops).

The idea that “old” data will be an overwhelming burden seems very improbable to me, again due to the exponential rate at which humanity creates new data. The data created from the dawn of civilisation until a bunch of years ago is just a fraction of the data we created from a bunch of years ago until now. Add to that de-duplication… Maintaining and serving “old” data will always be a relatively small cost.

Bandwidth usage by clients, on the other hand, is not economically limited by SAFE, and many home farmers will have asymmetric connections, so upload bandwidth is precious. The market for data storage is also a lot healthier than the ISP market. We should also not forget that most ISPs have a “fair use” policy that will give home farmers trouble if they suddenly use lots of bandwidth compared to the average home.

2 Likes

I think the problem with pay-per-GET is that these costs have previously been borne by the data owners. If we make data users pay for this, they will not be impressed and will be unlikely to embrace the SAFE Network.

With the SAFE Network, the data owner hands off hosting costs to farmers, which is why the economics must change to reflect this, IMO.

Out of sheer interest, do you have any links or numbers to back up this point?

A common database load would also involve many more reads than writes.

I’m not sure if the 1%-rule can be translated 1:1 to SAFE, as the SAFE network is supposed to hold private data too.

One problem I see is that web sites stop being accessible to the general public, who have yet to fully adopt the SAFE concept. Pay-per-PUT allows non-SAFE users to simply install the launcher to follow links (or see content) stored on the SAFE Network.

Adoption is going to be difficult enough without making it almost guaranteed not to be followed by the ordinary internet user, who gets “free” usage from their social media, Google etc.

There are also stats showing that most files written to disk are never read again. I will see if I can dig out the link later.

Not so fresh info, but might help ;-).
Not sure if the time argument will be successfully passed.
Please, scroll to the 17th minute.

1 Like

I know, I was talking about a hypothetical scenario where pay-per-PUT would turn out to not work in practice due to an overload of GETs by all the clients that vaults can’t deal with in terms of bandwidth. I’m not a fan of pay-per-GET, just saying that it might turn out to be a necessity.

But those costs were/are paid for by ads at the cost of the end-user’s privacy, so they were still paying for it in the end.

2 Likes

I think this is why other networks “require” resource sharing during connection. Since new users already have resources (storage space, bandwidth, processing) but no “coin”… it makes sense.

I do recall a discussion about having every client also run a basic vault. If that is the case, then maybe we don’t need pay-per-GET?

New users still get paid Safecoin, which might encourage them to stay connected and contribute more resources.

1 Like

But think of those who only surfed SAFE for those ahem cat vids and got a coin or two while doing so, but then throw away that account. Coins lost on a regular basis :wink:

1 Like

I’m almost certain “lost coins” will happen, especially with those not familiar with crypto currency.
Many will explore the SAFE Network’s internet 2.0 protocol… to view uhh… cat videos.
But over time, Safecoin will eventually become common knowledge.


An alternative solution could be a ZERO Vault. This is a node (vault) with ZERO storage. Since it does not store chunks, it cannot earn Safecoin from GET requests. But it does all the other vault functions.

The ZERO Vault contributes bandwidth in exchange for free GETs. This reduces the Network’s bandwidth burden.

Users curious about SAFE run the Client with ZERO Vault on their “temporary” account. This allows them to explore and test the Network before committing storage.

Ideally, new users start with a ZERO Vault and then upgrade to a STORAGE Vault by allocating storage. Hopefully, they realize this earns them Safecoin and don’t lose the account.

7 Likes

I guess my thought was those who visit to view a certain type of material and have no other interest in SAFE. It is feasible that they could number a hell of a lot, and that would be a significant number of coins after ten years :slight_smile: I doubt there is any plan to change to that model, so it’s just musing on the effects.

Yes, maybe the ZERO Vault would be a potential solution. Supply all the node functionality, including caching, without any payments possible. Good idea

3 Likes

Agreed,

Farmers are basically Users who upgraded to STORAGE vaults, and remain online longer.

6 Likes

I like this idea. A ZERO Vault may also reduce network churn due to non-committed farmers going on and offline.

EDIT: maybe this deserves its own thread for further discussion.

4 Likes

How would Safecoin’s perpetual storage model work with stagnant data stored on finite storage mediums? Would the data be copied to a new vault once an older one goes offline? I’m still trying to wrap my head around how this would work and be profitable for the farmers.

Maybe consider a real-life example of mass storage by people, one that has been around for ages and was actually designed initially as a relatively decentralised system.

It was designed for universities, organisations and others to run a server holding the information, with no one server in control. It was also designed to store all the information on each of the servers. Obviously it was designed very early on, and SAFE uses a different system. It’s called NNTP (Network News Transfer Protocol).

Today it sees a small number of very large servers holding all the binary data, which it was not originally designed to do. It is still heavily used and holds all the files that are being torrented, with some servers retaining the files for many years. They do respect take-down notices, which is why they survive the copyright trolls.

All those files currently amount to 5-7TB per day of storage, and each of the companies is adding storage to keep up, with some never deleting anymore (except for take-down notices). Oh, and the 5-7TB per day includes all those spamming/scamming or trying to take down the servers by overloading them, or trying to use up their storage faster than it can be added. And to boot, files can be added to NNTP at no cost other than an internet connection, which is at most a few dollars a month.

Now we expect that the number of people and the size of their vaults will far exceed what any one company can provide. So if we take the proportion of SAFE users to worldwide internet users, we would expect SAFE’s storage requirements to be far less early on. People will be adding 1TB vaults, and if 1% of internet users use SAFE and 1 in 10 of those adds a drive, then that is over 1 million vaults averaging (when needed) 1TB. SAFE will dwarf NNTP, and that is while SAFE is small compared to the available audience of NNTP.

SAFE would only need ten 1TB vaults added per day to exceed NNTP, and maybe even all the “dropboxes” as well. Remember that (exact) duplication will not exist, so all those duplicate copies of vids etc. will only take the space needed for one copy. That is something NNTP and the “dropboxes” do not (widely) do currently.
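For what it’s worth, the arithmetic above can be checked in a few lines. The figures (roughly 3.5 billion internet users, NNTP ingesting about 7TB/day) are the assumptions made in this post, not measured data.

```python
# Checking the arithmetic above. Assumed figures from this post:
# ~3.5 billion internet users, NNTP growing by ~7 TB/day.
internet_users = 3.5e9
safe_share = 0.01      # 1% of internet users try SAFE
vault_share = 0.10     # 1 in 10 of those adds a drive
vault_size_tb = 1.0

vaults = internet_users * safe_share * vault_share
print(f"{vaults:,.0f} vaults, {vaults * vault_size_tb:,.0f} TB potential capacity")
# → roughly 3,500,000 vaults, well over the "1 million" claimed above

# Daily new vaults needed just to match NNTP's ingest rate:
nntp_tb_per_day = 7
print(nntp_tb_per_day / vault_size_tb)  # 7.0 new 1 TB vaults per day
```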

Now as to “stagnant” data, David has said that it will “gravitate” to archive nodes. These are nodes that either are set up by MAIDSAFE or have shown that they remain online almost permanently, and code (datachains?) can be used to validate them when they come back online, reducing the churning of their stored data. As long as 4 copies exist, no data copying is needed when one node goes offline. In other words, if the data existed in only 4 nodes and one archive node went offline, then that data would be copied. Say the node then came back online, so there are now 5 copies. If another archive node that happened to hold a copy of that data went offline, no copying of that data would be needed, since 4 copies still exist.

tl;dr

Eventually, stagnant data will exist in more than 4 archive nodes, and the occasional downtime of an archive node will not cause significant copying of data in the network, because previous occurrences of downtime will already have caused its data to exist in more than 4 (archive) nodes.
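A toy model of the replication rule described above, with made-up node names (the real network logic is of course more involved): the network only makes a new copy when fewer than 4 live replicas remain.

```python
# Toy model of the replication rule described above. Node names and the
# function are made up for illustration; the real network logic differs.

MIN_COPIES = 4

def copies_needed(holders, online):
    """New copies the network must make so that at least MIN_COPIES of
    the chunk's holders are currently online."""
    live = sum(1 for node in holders if node in online)
    return max(0, MIN_COPIES - live)

holders = {"A", "B", "C", "D"}                        # exactly 4 copies
print(copies_needed(holders, {"A", "B", "C", "D"}))   # 0: all online
print(copies_needed(holders, {"A", "B", "C"}))        # 1: D offline, re-copy

# After D returns there are 5 holders, so one node going offline is free:
holders = {"A", "B", "C", "D", "E"}
print(copies_needed(holders, {"A", "B", "C", "D"}))   # 0: still 4 live copies
```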

One would expect there to be many such archive nodes, and as tech improves their number will increase.

Also, storage technology is still progressing very fast, at least as fast as we can fill up drives.

When I was a young engineer we used 128KB floppy disks and filled them up quickly, then 360KB, 720KB, 1.44MB etc. They were all filled quickly by the average user. Then came 5, 10, 20, 60, 200MB drives, which were also filled by the average user, but took a lot longer to do so.

Then, when we got to the GB range of drives, it very often took the life of the computer to fill the drive. So while data use was increasing, drive sizes were increasing faster than the average user could use them. One reason being that CD drives held a lot of the duplicate data, and a lot of the data increase was research data and corporate data, not that of the ordinary user/worker.

Now that we have TB drives, I find it rare to see any average home user even using half their drive over the whole life of the computer. In other words, there has been both an increase in de-duplication because of the internet and DVDs etc., and the rate of drive size increase still exceeds the increase in data storage requirements.

This is good news for SAFE, since farming storage can exceed user requirements without any excessive addition of storage space, simply by users allocating half of their 2TB drives to a vault.

11 Likes

One payment, store for eternity absolutely will not work and will eventually yield to a time-to-live, rental model. This has nothing to do with how fast storage is getting cheaper, because no matter how cheap it gets, it is still finite, whereas eternity is infinite, and the return on farming must diminish towards zero.

So what will happen if they continue with the one-payment architecture is that the network will be forked to a time-to-live architecture. It will be forked by farmers who want a better rate of return, and the one-payment network will be abandoned by most farmers (except for ideologues or the delusional). The most freeloading users will stick with the one-payment network, but the users who want a better-performing and more reliable network will go over to the time-to-live network. So the one-payment network will have too many users for the number of farmers and its performance will go through the floor, while the time-to-live network will achieve an equilibrium of acceptable (for most users) performance.

Whether it is forked also will have nothing to do with the commercial, or any, licence. If the code and the incentives are there, it will be forked. So the devs should take these realities on board as soon as possible. Economic incentives will trump airy-fairy notions of “freedom forever.”*

* Not to mention that there is probably some confusion on the part of the one payment people between free as in “free beer” and free as in “free as a bird.” They really want the second kind but the means is to attempt an approximation of the first kind (because a one-time payment divided by eternity is zero).

2 Likes

Someone did some analysis of this a while ago and produced a reasonable argument that it can work. One point to remember is also that if two people upload the same content, both pay full price but only one copy is ever stored.

But I do agree that when we can no longer have a 10x increase in data storage every 5 years, as has been the case since magnetic storage came into use, we may see the need to delete (unused) data. But some write-once storage systems are breaking the 10-fold increase model with 100-1,000-fold increases.
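As a quick sanity check on the growth rate mentioned: a 10x capacity increase every 5 years compounds to roughly 58% per year.

```python
# Sanity check on the growth rate above: 10x capacity every 5 years.
annual_factor = 10 ** (1 / 5)
print(round(annual_factor, 3))       # ~1.585, i.e. ~58% growth per year

# Sustained for 20 years, that rate multiplies capacity 10,000-fold:
print(round(annual_factor ** 20))    # 10000
```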

We shall see, and the analysis done elsewhere indicates that the need to delete is a long way off. But yes, we need to be aware of the issue and plan for it before it becomes a problem.

2 Likes