I don’t think it affects the technical performance. Unaccessed data, by definition, won’t be stressing the network much performance-wise; I think you mean cost-wise. But in that case, just forking the network over and over would get you a lower cost at the start, so no pruning technology for old data is needed to achieve a lower-cost network. The issue for users, though, is access to the public data that’s on the first network. After all, how many times do you want to copy data off the old network just to upload it to a new one?
So there is a first-mover advantage here for an effective permanent data storage network, which would become a de facto standard.
When nodes drop out, other nodes need to replicate the data, which requires data transfers. Its performance impact will be lower, I agree, but it won’t disappear.
And when I talk about performance, I’m looking at the economy of this network. If there is the constant burden of inaccessible data to keep track of, this economy (Safe Network) is at a disadvantage compared to other economies/technologies.
@cryptoidiot I took 20TB as an average between now and 2100. But the calculation above doesn’t require these numbers to be spot on. If 20TB is unrealistic and 2TB would be more reasonable, it affects both variables and the ratio stays the same.
I don’t disagree that this data is a burden; I think we are in accord on that. My point is that there is a cost to switching to a cheaper network as well: copying over your data, having to repeatedly pay for data that somehow expires if not accessed frequently enough, losing the network effect of others on Safe that you may be coordinating with, and risking your data on a new network. All up, there are many costs, and being the first successful data network would give us a powerful first-mover advantage. Hence people would be willing to bear some degree of extra financial cost per chunk in order to avoid the other potential costs of switching.
Ah now I understand your remark better. Thanks for clarifying.
In regard to this problem, a network update mechanism used to be part of the design. I’m not sure whether that is still the case?
Data stored on the network could initially be flagged as “stored indefinitely”, while after a future network update newly uploaded data could carry different flags, such as “private data, can be deleted after I die (auto-delete after 120 years)”. That way we have a means of keeping this burden from becoming too large to carry.
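To make the idea concrete, here is a purely hypothetical sketch of what such a flag could look like. Nothing like this exists in the current design; all names and types are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical illustration only -- not part of the actual network.
class RetentionPolicy(Enum):
    PERMANENT = auto()            # "stored indefinitely" (today's behaviour)
    EXPIRES_AFTER_YEARS = auto()  # e.g. private data, auto-delete after N years

@dataclass
class UploadFlags:
    policy: RetentionPolicy = RetentionPolicy.PERMANENT
    expiry_years: Optional[int] = None  # only meaningful with EXPIRES_AFTER_YEARS

# Private data flagged to become deletable after 120 years
flags = UploadFlags(RetentionPolicy.EXPIRES_AFTER_YEARS, expiry_years=120)
```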
Yes, there is an upgrade mechanism. They’ve been testing it too. It’s very important for the network.
I’m not sure what solution if any is possible here, but I can’t say that it’s impossible either.
As it stands now, though, there is no network consensus on time, so for now there is no means of agreeing on which data should go based on age or access rate over time.
I believe though that the developers will do all they can to keep the network alive, so I would expect solutions to be created if there are obvious threats to the model down the track.
The “trick” is that the algorithm has to account for data lasting a long time.
It is along the lines of the frog that jumps half the previous distance on each hop, for an infinite number of hops. A very old puzzle (well over 100 years old); the intuitive answer is that the frog will cover an infinite distance, but in fact the total is 2 (with the first hop taken as 1).
The same applies to data storage, using the current rate of a 10x capacity increase every 5 years: the “data” jumps 1/10 of the distance every 5 years, i.e. the cost of keeping it drops to 1/10 each period (with SSDs it appears to be even less than 1/10).
So the “trick” is to include the cost needed to cover the total “distance” of the jumps (i.e. much less than 2x the cost to store the data today).
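Spelling out the arithmetic behind that “trick”, assuming the cost to hold the same data drops to 1/10 every 5 years, and writing C_0 for today’s cost to store it:

```latex
\text{total cost} = C_0 \sum_{k=0}^{\infty} \left(\tfrac{1}{10}\right)^{k}
                  = \frac{C_0}{1 - \tfrac{1}{10}}
                  = \tfrac{10}{9}\, C_0 \approx 1.11\, C_0
```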
In 100 years individual storage units will be something like 100,000,000,000,000,000,000 (10^20) times their current size, and with the growing number of people owning PCs, and owning multiple devices each, total capacity will grow by even more than that 10^20 factor.
Quantum computing will not only bring a massive increase in compute but will also cause a quantum leap in storage capacity as the technology develops. Sometime in the next 20 years, quantum technology could bring a 10^10 or 10^20 times increase within a decade of that point, and continue from there as units larger in volume than a cubic centimetre (CC) are developed.
I agree with @walletjew on this topic. Since I first learned about and invested in Safe Network many years ago, the issue of “pay once, store data forever” has always seemed counter-intuitive to my economic understanding. There is no doubt that there is a cost associated with storing data, even if that cost diminishes by orders of magnitude over time. If there is a continuous cost associated with keeping up a service (storage of data), someone has to put up an incentive for others to participate and provide that service (= pay for it). In my opinion this is a severe economic flaw in the design.
Yes, there are other factors which make this effect somewhat smaller, notably:
Growth: the influx of new uploads pays existing nodes for replication (but this payout gets smaller and smaller over time, while I doubt that the cost associated with storing gets cheaper at the same rate)
Active users: instead of “farmers”, users provide their own storage node in exchange for using the network. Although this might be the case, I doubt that the network can support itself from “normal” users alone.
Token price economics: yes, storage scarcity will be regulated via the price of tokens, but that assumes the price of “storing forever” somehow gets factored into the token price at the time of upload. That would mean the token price would have to rise indefinitely.
Unless I have misunderstood something here, I am quite concerned about this topic. Even if we build the “new internet”, I doubt the fundamental rules of economics, and the incentives for actors within an economy, will magically change with the introduction of Safe Network.
Sorry if this sounds quite critical. I have been on board with the project for many years (invested in MAID, forum lurker) and I would really like it to succeed.
This follows on from my post above. The example assumes there are plenty of nodes, so that the disk in the example actually gets used.
today I pay $100 for “X” TB
Cost to me is X/500 * 100 per chunk
5 years on, the replacement disk costs me less for 10 * “X” TB, and the network will repopulate it
the original “X” TB now costs 1/10 of today’s cost to store
10 years on, the replacement costs less again for 100 * “X” TB of new disk
the original “X” TB now costs 1/100 of today’s cost to store
We see that the total cost to store the original “X” TB is (1 + 1/10 + 1/100 + 1/1000 + …) * today’s cost.
The limit of the real cost approaches a fixed value (1.1111…). As long as the algorithm uses this as its basis, the future costs are accounted for.
Finance accounts for future costs and knows how to handle future recurring costs that are reducing, staying the same, or increasing.
Since this is a massively reducing-cost scenario, not much needs to be added on top of today’s cost to account for future recurring costs.
Look up the jumping frog puzzle from well over 100 years ago. The solution has a limit approaching 2 (1 + 1/2 + 1/4 + 1/8 …), so even if storage media slowed down massively, to only doubling in capacity every 5 years, the total of current + future costs still approaches a finite limit.
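A quick numeric check of both limits (the 10x-per-period storage case and the halving “frog” case), as a purely illustrative sketch in Python:

```python
# Partial sums of the geometric series 1 + r + r^2 + ... for two decay rates:
# r = 1/10 models storage getting 10x cheaper each 5-year period,
# r = 1/2 is the "jumping frog" case (each hop half the previous one).
def cost_multiplier(r: float, periods: int) -> float:
    """Total cost relative to today's cost after `periods` replacement cycles."""
    return sum(r ** k for k in range(periods + 1))

for r, label in [(0.1, "10x cheaper per period"), (0.5, "2x cheaper per period")]:
    partial = cost_multiplier(r, 20)   # 20 periods = 100 years at 5-year steps
    limit = 1 / (1 - r)                # closed-form limit of the infinite series
    print(f"{label}: after 20 periods {partial:.6f}, limit {limit:.6f}")
# The 1/10 case converges to ~1.1111 (about an 11% premium), the 1/2 case to 2.0
```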
I get where you’re coming from and your points are totally valid. However, we shouldn’t underestimate how much cheaper storage and bandwidth are getting over time. This trend can really help make the “pay once, store data forever” idea more realistic than it might seem. Plus, technology keeps getting better at saving space and using less power, which could also lower costs.
We should add the falling cost of storage and bandwidth to your points, and also the ever-increasing energy efficiency of the devices storing the data.
Appreciate your reply and understand your view of diminishing costs.
@neo the example of a geometric series converging to a fixed value is valid, but I am not sure whether the effect can keep up with network growth and the growth in storage needs.
I can’t see how the market can “predict” future storage cost, storage needs, network growth and all the other relevant parameters at the time of upload. Only if the upload cost is “fair” under all these parameters does the infinite storage model work.
If the increase in storage capacity slows down, then for a period the fiat price of the token would reflect that, because farmers would leave if they cannot realise enough rewards in fiat.
The network would then see the SNT store cost increase as the remaining nodes fill up each time nodes leave, and the fiat price increase as the overall drop in farming rewards takes effect (which lags the farmers leaving). Then rewards in SNT bounce back up and farmers are attracted back.
Then, with a higher SNT reward, farmers get more.
BUT if we saw a massive drop in the rate of storage increase, then it’s a case of updating the algorithm to change the limit value used.
In over 70 years we have not seen a major deviation from the 10x-every-(approximately)-5-years trend, and current technology, especially 3D SSDs, suggests it will continue for a long time at least, while newer technology indicates this rate of storage increase is not going to change much, except maybe upwards.
Until then, pay-once with roughly 12% added for future costs is looking secure. Mind you, I’d suggest using more than 12% to account for the fluctuations we’ve seen from time to time when magnetic storage techniques hit their limits, then bounce back to the average once new technology becomes reliable.
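To put some numbers on that buffer question, here is a rough sensitivity sketch under my own simplifying assumption that the recurring cost shrinks by the same factor g that capacity-per-dollar grows every 5 years:

```python
# Upfront premium over today's storage cost needed to cover all future
# replacements, if the recurring cost shrinks by a factor g every 5 years.
for g in (10, 5, 2, 1.5):
    total = g / (g - 1)          # limit of 1 + 1/g + 1/g^2 + ...
    premium = (total - 1) * 100
    print(f"{g:>4}x growth per 5 years -> pay {total:.2f}x today's cost "
          f"({premium:.0f}% premium)")
# 10x -> ~11%, 5x -> 25%, 2x -> 100%, 1.5x -> 200%
```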
Sounds good! Have you (or someone else) run a simulation of the token economics with assumptions about network growth, storage cost, and data growth, with rational actors as users/farmers etc.? It would be interesting to see where the price of SNT converges and how the economics react to disturbances (e.g. many farmers leaving, a sudden increase in uploads, etc.).
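Purely to illustrate the kind of toy model I have in mind (every parameter and the farmer-response rule below are invented for the sketch, not taken from the actual network design):

```python
import math

# Toy economy, NOT the real network algorithm: an S-curve drives uploads,
# uploads pay once at upload time, holding data is a recurring cost that
# gets 10x cheaper every 5-year period, and farmers join or leave
# depending on whether running a node is profitable.
PERIODS = 20                  # 20 x 5 years = 100 years
nodes = 1_000
stored_tb = 10_000.0
cost_per_tb = 1.0             # recurring cost to hold 1 TB for one period (arbitrary units)
price_per_tb = 2.0            # one-off upload price charged by the network (arbitrary units)

for t in range(PERIODS):
    adoption = 1 / (1 + math.exp(8 - t))      # crude S-curve for network popularity
    new_uploads = 100_000 * adoption          # TB uploaded this period
    stored_tb += new_uploads
    revenue = new_uploads * price_per_tb      # paid once, at upload time
    expenses = stored_tb * cost_per_tb        # recurring cost of holding everything
    reward_per_node = (revenue - expenses) / nodes
    # Farmers respond to profitability (made-up 5% elasticity per period).
    nodes = max(100, round(nodes * (1.05 if reward_per_node > 0 else 0.95)))
    cost_per_tb /= 10                         # storage keeps getting 10x cheaper
    print(f"period {t:2d}: stored {stored_tb:12.0f} TB, nodes {nodes:6d}, "
          f"reward/node {reward_per_node:10.2f}")
```

Plugging in different S-curves, pricing rules and farmer elasticities would show where (or whether) the rewards stabilise, and how the system reacts to shocks like farmers leaving en masse.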
I am hoping the upcoming white paper in a few days will address this.
The original had an “S” curve as the growth path, which is reasonable given the history of technology adoption following the “S” curve. Basically, human nature drives the “S” curve for useful technology.
I agree with your economic concerns @esquilax. I’m not concerned about drive storage space, as that will certainly continue to grow and keep pace, but I am much more concerned about network popularity and growth rate.
The fundamentals of the network don’t work if the market continually contracts. For example, a burst of early data populates the network for a few years, but then the network doesn’t really ‘catch on’ with the public and slowly starts to contract. If the technology cost to nodes of storing data doesn’t drop at the same rate as the decline in the rate of new data being stored (and eventually it wouldn’t), then the cost to store will start moving up (as there is no way to delete old unused data). In a declining network, with a higher and higher cost to store, this would accelerate toward collapse. This is the fundamental problem with making a contract that lasts forever: market forces can’t calculate with an infinity sign in the equation.
So all that horror said, let me redeem the network.
As previously mentioned, the network has an upgrade mechanism and this will allow adjustments/alterations to the network in the future. If we get stuck in a collapsing network, then changes will certainly be made to keep it alive, including, in the end, finding some mechanism to allow nodes to delete data.
That’s all worst case. If the network continues to grow, then as others have pointed out, we will be fine.
Reading further into this, I found that the “fundamental economics” question has been discussed previously in this forum, with the same concerns as in this thread, notably here, here and here.
Sorry for not digging deeper into the older discussion before adding to this thread.
However, these discussions were based on older versions of RFC 0057 and possibly also on old implementation concepts that have since changed.
I think it would be really great to include a simulation (like the one by @oetyng in the thread above), based on the latest parameters we have learned from the testnets, in one of the new whitepapers, as @neo suggested. Looking forward to that!
Another part I’m hoping will be covered well in the whitepaper is the consensus algorithm. To be honest, I’ve lost track over the years of how this has evolved. I know Node Aging plays a key role within a section to control attack vectors, but it seems the new design also requires inter-section consensus. I can still remember the launch of the PARSEC whitepaper, and later I noticed some discussions around the new Safecoin design including Digital Bearer Certificates, but I couldn’t find many resources on these. I’ve read both updates, here and here. I understand each section acts as a kind of mint for the DBCs, and these are connected by a DAG.
I’m semi-knowledgeable on DAGs, but it seems to me that each independent section must have knowledge of the whole DAG, which sounds hard to accomplish since this is presumably stored in a decentralised way? The other question is: how is consensus reached between the sections on which certificate should be added to the DAG?
When reading about DAGs, most people will think of IOTA and the fact that it has not yet achieved decentralisation, so it might make sense to address this concern, and perhaps the questions above in particular. But maybe my knowledge here is just too shallow.