I think it would be easier and more productive to discuss this issue by first defining, and ideally agreeing on, the terms 'forever' and 'perpetual'.
Saying that nothing lasts forever is an obvious truth (though not in every context), and nobody should expect the network itself to last forever.
So arguments in this area can't mean much until the terminology is clearer.
Personally, I think the point we can most easily agree on is that there are risks, as well as enormous benefits, in attempting to store data for a one-off fee (which is really what I take 'perpetual storage' to mean, rather than storage that lasts forever).
I think most here think the risk is worth taking, not just because of the benefits, but also because without this, the network will lose its most unique and valuable reason for being built in the first place. I think it’s probably the single most important feature David sought to achieve.
Given all that, the main value from the conversations is probably to identify risks and discuss mitigations.
BTW, I do not think a collapse will be the end of this. It may be the end of Autonomi, and of most value in the token, but even that is not certain IMO.
So I don’t see this as a bad risk, but an extremely important venture. Call it a moonshot if you like?
Yeah, I agree with this pov too. It’s a key and unique feature. Very much a USP for the network.
I’d be interested in exploring how the network can gracefully decay if caught in a doom loop though. Unwinding in the least damaging way gives the best chance of recovery.
I wonder where all the freenetproject-experienced and acquainted users are; quite a few projects with similar aims to maidsafe/safenet/autonomi have come into existence since its conception and during its evolution. My understanding of the basic problem of near-eternal data storage was always that someone needs to maintain the data valuable to them, by showing interest and demand in those chunks (and the files and bigger objects they constitute). When interest dies down and nobody maintains or cares for the chunks, freenetproject dismisses and decays those blocks and they fall out of the distributed datastore, and thus out of the whole network. Pay once, nearly never come back, with nobody else showing interest, doesn't work as far as my experience goes.

I always wanted the maidsafe perpetual concept to be something like: I bring in X terabytes (or whatever storage amount), am only allowed to use a small fraction of that amount on the network, constantly maintain my interest in at least my own stored objects, and offer the rest of the datastore capacity to others for the mutual goal and common good. The nodes and accounts of the network need to keep track of the chunks they care about and constantly (regularly) maintain them, so that they don't fall out and decay.

All the coins and money, the monetary reality of the outer real world, and the payment stuff and ideas that came along with the whole maidsafe lifecycle so far didn't really help, in my understanding. At least that's my gut feeling; I mostly see complexity after so many years, maidsafe not being my first major interest in distributed data architectures.
For that scenario the specific level isn't important: if the price is pushed high enough to reduce upload demand, and that in turn means more nodes exit, the spiral could continue.
You might find that doubling the price to store data causes more than a halving in the demand to store data. In a shrinking network, this would mean earnings to nodes could drop dramatically at the same time as costs grow: as nodes exit, the remaining nodes need to add storage to stay in the game while receiving less compensation.
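To make that concrete, here's a toy sketch of the spiral. All the numbers (elasticity, exit rate, starting figures) are made up purely for illustration; this is not Autonomi's actual pricing algorithm, just the shape of the problem.

```python
# Toy doom-loop sketch with made-up numbers: if demand falls faster than price
# rises (elasticity > 1), total income to nodes shrinks while the per-node
# storage burden grows, because old data never leaves the network.

nodes = 10_000            # active nodes
price = 1.0               # price per upload (arbitrary units)
demand = 100_000          # uploads per period
stored = 5_000_000        # chunks already stored and still to be held
elasticity = 2.0          # assumed: demand falls ~2% for every 1% price rise

for period in range(5):
    total_income = price * demand
    print(f"period {period}: nodes={nodes:>6} price={price:5.2f} "
          f"income={total_income:10.0f} stored/node={stored / nodes:7.0f}")
    nodes = int(nodes * 0.9)               # 10% of nodes give up and exit
    price_ratio = 1 / 0.9                  # fewer nodes -> price pushed up ~11%
    price *= price_ratio
    demand *= price_ratio ** -elasticity   # demand drops faster than price rises
    stored += demand                       # new uploads add to the permanent load
```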
I agree that at some point this won't matter, e.g. if the network becomes obsolete for some reason. But the situation could be very serious if it were to occur before then, so mitigations that allow a 'way out' of short-lived spirals would build network resilience, and would also make the network's potential 'end of life' smoother for users.
I can try to look up my previous explanation, but @neo covered the gist of my thoughts on the matter.
I don’t believe I’ve claimed it would be either simple or easy. My thinking is that it’s not an intractable problem and that it could be done without any major reworking/forking of the network in the near future (within the first couple of years of launch), if the team chooses to do so.
I wonder how data can be stored for free permanently.
This means time is involved and data will be accumulating on the nodes. You need to buy new computers/storage every so many years and pay for the energy and bandwidth that will be used when the file is accessed/downloaded.
It may have to do with Moore’s Law, but still, it will forever cost money if you keep the files permanently online.
Maybe I am missing something, please prove me wrong.
Maybe the point is that nodes do not themselves necessarily store the chunks they hold today “forever”. Nodes only need to store the records they are currently responsible for, and as nodes join, the set of records each node is responsible for changes. What the network will be looking for is increasing adoption, rather like all useful technology, where a sigmoid curve typically models the adoption rate. As computers themselves increase in use cases and adoption, the curve for Autonomi will potentially extend beyond the life of Autonomi as envisioned today; before full adoption one expects a new version (concept) of the network will be introduced with a different set of parameters.
Also remember that on average drives are replaced every (approx) 5 years, and by then the drives are something like 4 to 10 times larger for the same power, etc. No need to do anything special for “use what you have” people, which is the design of the network.
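As a rough illustration of why a one-off fee can cover an open-ended obligation if storage keeps getting cheaper: assume (conservatively, versus the 4-10x figure above) that the cost of holding a fixed chunk halves each 5-year replacement cycle. The future costs then sum to a finite number. Made-up numbers, just to show the shape of the argument.

```python
# Geometric-series sketch: holding a chunk "forever" has a finite total cost
# if each 5-year period costs a fixed fraction of the previous one.

cost_first_period = 1.0     # cost of holding the chunk for the first 5 years
shrink = 0.5                # assumed: each subsequent 5-year period costs half as much

# 1 + 0.5 + 0.25 + ... = 1 / (1 - shrink)
total_forever = cost_first_period / (1 - shrink)
print(total_forever)        # 2.0 -> "forever" costs only twice the first period
```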
I was in favor of it in theory - that’s one of the reasons I became involved. After many years of contemplating it, though, I realized that it may not work out well in practice and could risk the very existence of the network itself - as I have explained in this thread.
The least I’m hoping for is that time-based data can be added as an alternative option, to expand the network’s potential service offering and to help alleviate the threat of a cascade collapse.
could risk the very existence of the network itself
There is no shortage of temporary data storage options, many of which are free. Without permanent storage, there won’t be a network, because that’s its “killer app”.
The least I’m hoping for is that time-based data can be added as an alternative option
Not only has this been discussed before, it looks like it’s already a feature.
I’m unclear what that feature is … I suspect it may just be local shared storage, so not capable of being persistent if offline. I would like to know the specifics though – @dirvine if you catch a moment David, would you please briefly elaborate on the utility of this feature? Cheers
It has, but I’ve only recently (past few months) come up with the concept of the cascade collapse. So far as I know that possibility hadn’t been raised before - which is why I’ve been working to surface this issue.
This site puts Arbitrum at a competitive 0.09 cents.
Many of the others in that chart you gave are not L2s for Eth - in fact, I’m not sure that any of them are.
It’s important for the community to use an Eth L2, as we don’t have our own exchange and Eth DeFi is the most popular. Solana would perhaps be a viable alternative for us, but most of us are already familiar with Eth, so it would be a jump for us to convert. It would perhaps also be technically harder to convert existing eMAID over to the new token if we had to switch to a new blockchain. IDK that for sure, just a gut feel.
Ideally we’d have our own exchange on our own network, and I believe that will come in the future one way or another - at which point we won’t need any of these DeFi platforms for our token, which can be a fully native token by then.
I’ve been told that native is coming 18 months after launch … of course who knows, but not a big issue I think to put up with slightly higher fees on Arbitrum L2 versus further delays in attempting to convert everyone over to Solana or similar.
I appreciate your comments, both here and in the thread you made in marketing.
I feel like there’s obvious value in hosting unencrypted files (Internet Archive, BitTorrent, websites, etc.) where the decision on whether they are valuable enough to stick around can be made rationally by the users. But for private encrypted files this is impossible to tell, and the social function is also more vague. Why would we need a private file to be available forever, especially once the keys to decrypt it are gone?
And re: your other thread,
Is the marketing angle that this is to store all your private files, durably in case of hard drive failure, and permit secure sharing, all without trusting a third party? I think this is a valuable use case but the lack of temporary files or deletion rules it out.
Is the marketing angle to be a store of interesting information for the world? If so, how is it better than BitTorrent, or IPFS, or Dat/Hypercore?
Is the goal to share encrypted files between a small number of people? If so, why not email them, or otherwise send them directly? That’s already permanent until you lose your hard drive, and when you do, you can ask for them back.
I’ve read the docs and whitepaper and I’m still left somewhat confused about what this project is for, and I share your concerns that storing large amounts of data indefinitely will eventually lead to collapse.
I feel like, as a file store, the price should match the alternative, that being Amazon S3.
In that, there are fees to upload and download (bandwidth), fees to store (hard drives and maintenance), and fees to move between storage types (archival, hot storage, etc.).
Ignoring any of those means the network can be stressed for ‘free’. What’s the plan for DDoS resistance if all reads are free? BitTorrent has the concept of reciprocity, especially for private trackers.
And similarly: are there fees for better speeds? Or is everyone incentivised to use the slowest, cheapest storage possible?
I can’t see the project ever adopting it but I feel like charging for reads based on the time since the last read would be a neat system.
It combines storage and reading into the same price so there are no ongoing costs. Old files can get dropped like junk debt when operators feel there’s not likely to be another read (or stick around in slower storage in the hope of a big payday).
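A minimal sketch of what I mean, with entirely made-up fee constants and a simple linear growth curve (not anything Autonomi does today):

```python
# Hypothetical read-pricing sketch: the longer a chunk has gone unread, the more
# the next read costs, so "cold" data pays for its own retention on the way back in.

BASE_FEE = 0.0001        # fee for a chunk read again within a day (arbitrary units)
GROWTH_PER_DAY = 0.001   # extra fee per day since the chunk was last read

def read_fee(seconds_since_last_read: float) -> float:
    idle_days = max(0.0, seconds_since_last_read / 86_400)
    return BASE_FEE + GROWTH_PER_DAY * idle_days

# A chunk untouched for a year costs far more to read than a hot one:
print(read_fee(365 * 86_400))   # ~0.3651
print(read_fee(3_600))          # ~0.00014
```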
You’d have to build the network for different tiers of everything though: speed, how long chunks survive, etc.
As to private files, how can the network know whether two people uploaded the same “private” file? For all anyone knows, that “private” file may be known to two or more people who each uploaded it. A whistleblower uploads a company document for safekeeping, “just in case”. Two years later, seeing events unfold, they decide to delete it. But wait: another whistleblower uploaded it too, and with dedup it is still the one copy on Autonomi. Autonomi does not keep tabs on who uploaded what, so how could deletion be allowed when it would allow deleting other people’s files and/or chunks? The white paper shows how the network will survive without deleting, which many who have been in the industry for decades already knew.
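To illustrate the dedup point, here is a stripped-down sketch of content addressing (the real network self-encrypts files first; this just shows why deletion by one uploader is unsafe):

```python
import hashlib

# Content addressing: a chunk's address is a hash of its bytes, so identical
# uploads from different people land on the same single record.
store: dict[str, bytes] = {}

def put(chunk: bytes) -> str:
    addr = hashlib.sha256(chunk).hexdigest()
    store[addr] = chunk          # a second upload of identical bytes is a no-op
    return addr

doc = b"internal company report"
addr_a = put(doc)                # whistleblower A uploads it
addr_b = put(doc)                # whistleblower B uploads the same bytes

assert addr_a == addr_b and len(store) == 1
# The network holds one copy and no record of who uploaded it, so letting A
# "delete" it would also destroy B's copy -- which is the point made above.
```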
There is in my opinion a place for temp data, but it’d have to be a special data type with owner embedded so that no issues can arise in future.
The chunks are currently held by up to 70 nodes, and which nodes depends on each individual chunk. Basically a DDoS is on one chunk, or on a per-chunk basis. It becomes a case of diminishing returns for the attacker. DDoS is not free for the attacker; they have to pay $$$ for the bot farm.
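Rough arithmetic only (the file size and chunk count are made up, and it ignores that one node holds many chunks):

```python
# Back-of-envelope: per-chunk replication multiplies the attacker's targets.
replicas_per_chunk = 70                    # up to ~70 nodes hold each chunk
chunks_in_file = 1_000                     # e.g. a few GB split into ~4 MB chunks
node_targets = replicas_per_chunk * chunks_in_file
print(node_targets)                        # 70,000 node-targets to suppress one file
```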
Yes, there are. There are fees for that apple you buy at the markets, but you do not give the grocer 50 cents for the apple and 5 cents for the rent they pay and 1 cent for electricity and 1 cent for cleaning the shop front, etc.
It’s to be covered by the cost to store the record.
And seeing as the network is specifically designed for “use what you have”, most of these overheads are already being paid for by the person, since they are using the computer for their own stuff anyhow. Yes, a node runner will have to be mindful if their internet plan has a quota, but that is becoming less of a problem.
If anyone wants to run special equipment or run in a datacentre, they will have to do the sums to see if it’s worth it to them. The network is not going to give them extra for their increased overheads and for effectively creating a centralising force. It’s centralising because, if all nodes were in datacentres, whole sections of the network could be cut off by the decision of one person in the management of a datacentre.