It doesn’t make any sense, because only a tiny fraction of the total BTC is sold on a given day. Similarly, on any given day in SAFE only a tiny fraction of the total SafeCoin in existence will be used on data storage. Just like you get a lot more USD for a BTC than in my example, a SafeCoin will get you a lot more MB of storage than in your example.
Blame the messenger all you want. It may not make any sense to you, but the way it works is not disputed.
Fractional-reserve banking ordinarily functions smoothly. Relatively few depositors demand payment at any given time, and banks maintain a buffer of reserves to cover depositors’ cash withdrawals and other demands for funds.
I didn’t say anything against the way space reservation works, I just said that is how it works, and I think that’s one of the points DYamanaka mentioned a long time ago (didn’t you notice and/or understand it back then?).
I don’t think your USD/BTC comparison holds, because BTC isn’t a liability of the USD issuer (and USD isn’t a liability of Bitcoin miners), so neither has to be fully “backed” by the other. For that matter, even SAFE capacity doesn’t have to be fully backed by Safecoin; that’s just one possible approach.
This is unfortunately true. The fact is there are 430 million Safecoin waiting to claim X storage on Day 1. As @janitor said, it’s possible that too many SC will start chasing too few resources at the beginning.
However, I doubt 100% of holders will use SC to buy storage. Maybe 10% will test the waters until a killer app arrives. Also, deduplication allows the Network to sell more storage than it actually has. That will be interesting to see!
Related subjects, somewhat off-topic. This is just food for thought.
Farming Behavior
If Network storage fills quickly… the cost (SC per X storage) will rise until it becomes too expensive in fiat terms (more than $0.10 per GB). New farmers may arrive for the profitable payouts during this time, driving the Network cost back down.
If Network storage remains empty… the cost (SC per X storage) will decrease until it becomes insanely cheap in fiat terms (less than $0.05 per GB). Old farmers may leave due to low profitability, driving the Network cost back up.
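A toy sketch of this feedback loop, in case it helps (the $0.10 and $0.05 per GB thresholds are the ones above; the demand level, entry/exit rates, and starting capacity are entirely made up, and this is not the actual network algorithm):

```python
# Toy model of the farming feedback loop described above. All numbers hypothetical.
EXPENSIVE = 0.10   # $/GB above which farming looks profitable and attracts farmers
CHEAP = 0.05       # $/GB below which farming looks unprofitable and farmers leave
DEMAND = 20.0      # constant fiat demand for storage, in arbitrary units

farmers = 100.0    # arbitrary starting amount of farming capacity
for round_ in range(12):
    price = DEMAND / farmers          # fixed demand spread over current capacity
    if price > EXPENSIVE:
        farmers *= 1.10               # profitable payouts: new farmers join
    elif price < CHEAP:
        farmers *= 0.90               # low profitability: old farmers leave
    print(f"round {round_:2d}: ${price:.3f}/GB, {farmers:6.1f} capacity units")
```

Under these made-up numbers the price drifts into the $0.05–$0.10 band and settles there, which is exactly the stabilizing push-back described above.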
Blockchain miners use http://www.coinwarz.com/cryptocurrency to determine their cost/payout.
A similar site could emerge showing (Storage Resources per SC), and the fiat profit/loss ratio. Maybe they will include cloud farming.
If blockchain profit-mining psychology carries over… SAFE farmers are likely to flood in at the beginning and keep farming until there is a price spike, then cash out. Then non-profit farmers will take over.
Bubble Behavior due to Fiat Valuation
Will speculators drive the fiat price based on SC Network Value?
If that is true, it becomes a double-edged sword, both on the way up and on the way down.
If SC keeps buying more storage over time, the market drives up the fiat price, leading to more farmers joining, like a super bubble… until it pops. It pops when enough SC holders decide to take advantage of the super low cost and dump libraries of data onto the Network.
Now SC buys less storage. So the market drives down the fiat price, causing some farmers to leave, which makes SC buy even less storage, until it crashes.
People who farm for personal consumption help stabilize this equation. We’ll see how it turns out.
As an investor, I say bring on the Rabbit ride! As a consumer, I say give me a Turtle alternative please.
The Incentive Pendulum
The best time to farm is when the Network is nearly full… because the SC payout is high.
The best time to sell/spend SC is when the Network is nearly empty… because SC buys the most storage.
That’s a good point, and it’s not only that cloud storage is always temporary - current cloud services do not deliver data the way SAFE is meant to provide it (i.e. secure data that cannot be corrupted by a malicious host), so comparing numbers here is kind of like comparing apples and oranges.
Maybe 100 KB of SAFE data IS worth $1 - it really depends on the use.
I agree, however, that for most end users that price is not going to cut it, but time will tell how the price changes.
What are the reasons that lend credence to the view that there are no ongoing maintenance costs for data? i.e. that data can be stored forever for a fixed fee.
Is this due to the belief that deduplication and a continued reduction in the price per TB of storage will counter maintenance costs? Or are there other reasons?
If only the first two reasons exist - I will point out that these are just beliefs, not facts. So, what if these beliefs don’t pan out? Can the network build in maintenance fees at a later time, or would a fork be required?
Exponential cost reduction. If the price of an MB of cloud storage halves every 2 years, then as long as this holds, the total price never reaches two times the price of the first 2 years. So if 1 MB costs 1 SafeCoin for the first 2 years, the total cost never reaches exactly 2 SafeCoins: 1 SC for the first 2 years, plus 0.5 SC for the next 2, plus 0.25 SC, plus 0.125 SC…
And so on; the sum never reaches exactly 2. Of course, the exponential reduction in cost isn’t constant like that in practice, but if you charge for example 3 times the price of the first halving period, then reality is allowed some flexibility.
Anyway, the idea of a fixed cost for “infinite” storage is mathematically feasible, as long as there is an exponential reduction in cost.
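A quick numeric sanity check of that claim (a minimal sketch; the 1 SC per 2-year period is just the example above):

```python
# Partial sums of 1 + 1/2 + 1/4 + ...: the lifetime cost of storing 1 MB
# if the per-period price halves every 2 years, as in the example above.
cost_per_period = 1.0   # SafeCoins for the first 2-year period
total = 0.0
for period in range(10):
    total += cost_per_period
    print(f"after {2 * (period + 1):2d} years: {total:.6f} SC")
    cost_per_period /= 2
# The running total approaches, but never reaches, 2 SC:
# it's a geometric series whose infinite sum is exactly 2.
```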
Hm… actually it’s kind of problematic if things rely on Moore’s-law-like assumptions. Indefinite exponential growth over linear time is highly unlikely, and even though this has always been debated, the numbers show that things shouldn’t rely on exponential cost reduction, because they may fall to pieces if that logic no longer applies (which it already has at times).
I recommend reading the section “Moore’s Law Was Not the First, but the Fifth Paradigm To Provide Exponential Growth of Computing” in: The Law of Accelerating Returns « Kurzweil
One could accuse Kurzweil of being a dreamer, but his historical data analysis is spot on. I think SAFE’s major operational cost will be bandwidth, so we should probably look at progress in that area more than at Moore’s law. (Edit: Ninja’d by @anon40790172, the bastard!)
But you are right that we can’t rely on stable exponential growth in the design of SAFE, there will be at the very least slowdowns and hiccups for sure. That’s why I am wary of issuing all the 2^32 SafeCoins by default, SAFE would economically be a lot more secure if it keeps a significant amount (say, half of them) in reserve for when the economic landscape undergoes major shifts. Any longer term shortages in PUT income can then be filled by expansion of the supply. Yeah, inflation sucks, but it’s better than not being able to pay farmers and a subsequent collapse of the network.
A couple more arguments (at least for public data):
Data begets data. Think comment replies, and meme/counter-meme creations. Software upgrades and sequels happen.
Content is consumed. It is impossible to view content for the first time twice.
Entropy always increases. Content becomes stale quickly. Arguments evolve. New research data is correlated. The 80’s ended.
Also for private data, the point has been argued that most is PUT, then viewed (a couple of times, maybe) soon after, and then hardly ever (if ever) accessed again. Think of big insurance companies storing files on particular claims: used a lot during the claims process, but then kept in an archive on the off chance that they need to be reviewed someday for some reason.
…and that’s the main problem here. One can see it in Kurzweil’s graph, but how was the data selected? What is the criterion by which Kurzweil selects different technologies, and how does one value the qualitative difference? Kurzweil deduces from his theory of exponential growth and selects those technologies that suit his thesis. I can’t see how exactly one can measure the relation between the integrated circuit and the opposable thumb - can you?
That’s also problematic because qualities are apparently shifted around arbitrarily. For instance: the separation of information and meta-information is one step towards making data more efficient without raising the space used across the network. By saving one and the same file billions of times we are “wasting” space - and once we move to SAFE we may not do that anymore - allowing the exponential curve to become more likely. So I may agree that hybridization can make things more efficient without everything being reduced to advances in storage technology - however, it does affect the autonomy of participating actors. The ideal form (“singularity” or some other kind of “final” idea of where we are going) may include people not making choices themselves (because that’s inefficient). This means that from the perspective of a Singularity, exponential growth may work, but without taking into account human valuation systems (like money). So economically I think it remains critical - which doesn’t mean that one shouldn’t try.
Another consideration is that all data is not equal in SAFEnetwork. There are currently at least three classes of data:
popular - gets cached
default
stale - gets archived
The costs of each vary, and over time it will be interesting to see not just how these costs vary, but also the balance between them.
Exponential growth can apply to demand as well as capacity/$ so @TylerAbeoJordan raises a valid question. As with the PtP debate I’d like to see what happens! Obviously this “test” will be over a longer timescale, but I accept that this is a big experiment, and not all of it will be right first time. Maybe SAFEnetwork will be able to evolve - that’s David’s plan certainly - or maybe it will show the way for something better.
It’s high risk with or without this, so don’t bet more than you can lose on it, or any other single outcome for that matter. I’m 100% behind SAFEnetwork, but I won’t bet my future on it all the same.
We have to take risks to find what works, and to do something genuinely new and make a difference. I don’t want to compromise our ambition for fear of failure. And I’m actually a cautious fellow - as I said, don’t make yourself dependent on the success of a very risky venture.
I actually made a mistake this time (but not in the original post linked from my post above): 1 MAID is currently $0.012714. Let’s make that $0.01.
So, at the initial rate of 1 Safecoin per MB, 1 GB on SAFE Network would currently have to cost 1,000 MB * $0.01/MB = $10.
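Spelled out (assuming, as elsewhere in this thread, the initial rate of 1 Safecoin per MB and 1 MAID ≈ 1 Safecoin):

```python
# Back-of-the-envelope Day 1 cost, using the figures above (not network constants).
maid_usd = 0.01          # rounded MAID price from above
mb_per_safecoin = 1      # assumed initial rate: 1 Safecoin buys 1 MB
usd_per_gb = (1000 / mb_per_safecoin) * maid_usd
print(f"${usd_per_gb:.2f} per GB")  # -> $10.00 per GB
```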
I came back here because two days ago my result seemed quite odd, and I remembered that I had gotten a much higher cost of capacity when I calculated it a few months ago, so today I came back to review.
This is why I said in the “Day One Scenario” post that the price will have to dramatically fall or the amount of storage will have to dramatically (by 1 or 2 orders of magnitude) increase. Or some combo thereof.
As I said in the original post linked above, if someone buys 100 MAID at $0.01 apiece today, converts them to Safecoin on Day 1, and discovers he has to pay 100 Safecoins (that is, $1) for 100 MB, he’ll probably try to sell those 100 Safecoins and buy 100 MB for $0.20 someplace else. Of course he won’t be the first one with that idea.
Some may wait for the available capacity to increase so that they can get more storage for their Safecoin, but the problem is why would it increase if everyone is waiting. In theory at least, rewards should be falling, not going up. The exchange rate and amount of free storage may very well meet somewhere in between (after a large drop in Safecoin exchange rate).
What’s interesting is that, except for DY and maybe two other people, everyone refuses to consider these scenarios. To them things are somehow supposed to work out, although the possible mechanisms remain unknown (well, my scenario above is known, but not acknowledged).
Vaults have a farming rate (FR)
Vaults can query the total number of client (NC) accounts (active, i.e. have stored data, possibly paid)
Vaults are aware of GROUP_SIZE
The calculation therefore becomes a simple one (for version 1.0)
StoreCost = FR * NC / GROUP_SIZE
Therefore a safecoin will purchase an amount of storage equivalent to the amount of data stored (and active) and the current number of vaults and users on the network.
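A minimal sketch of that formula, taking FR, NC, and GROUP_SIZE as defined in the quote (the GROUP_SIZE value and the example numbers are hypothetical, not taken from the spec):

```python
GROUP_SIZE = 32  # hypothetical value; the quote only says vaults know this constant

def store_cost(farming_rate: float, num_clients: int) -> float:
    """StoreCost = FR * NC / GROUP_SIZE, per the quoted version 1.0 formula."""
    return farming_rate * num_clients / GROUP_SIZE

# Hypothetical example: FR = 0.25 with 1,000 active client accounts.
print(store_cost(0.25, 1_000))  # -> 7.8125
```

Note that StoreCost rises with both the farming rate and the number of active client accounts, so what one safecoin buys shrinks as those grow.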
Change of plans on PUT cost algorithm. But even in the old system, the initial value of 1 SC == 1 MB was expected to change very quickly to find a supply/demand balance.
Edit: I haven’t analysed this new plan yet, so no idea how this works out. I must say the PUT price algorithm is in my mind one of the most important ones to get right.
What is the point you’re trying to make? I see you’re trying to prove or point something out but it’s not really clear to me.
This is the first PUT only (the network has to start somewhere), and I want to highlight this statement so people won’t start thinking that $10 for 1 GB is the standard/“current” price.
I didn’t say that’s something that makes the approach unfeasible. I said after the first PUT the price and supply will have to converge somewhere and I think that will happen primarily in the exchange price of Safecoin.
It doesn’t have to be that way, but there’s very little discussion on this topic of Day One which was originally raised by DYamanaka (predominantly from the capacity/farming rate perspective).
For some significant capacity-consuming apps (backup, video), you’ll have people looking at how much capacity they can get for a unit of fiat money. It’s going to take a while for personal Web sites with cat photos to consume enough storage (1 PB, say) to significantly impact the algo (rewards, etc.).
If 1GB costs $10, people who need more than 100 MB will think twice.