Looks good. You only need an email and 0.001 ETH on Arbitrum mainnet. It cost me less than $5 to get it all done, and more than it should have because I didn't send enough in the first transaction. Story of my life, trying to be cheap.
You are just moving something like $1.50 to another address and paying $2 or so in fees.
That does not sound as good when I read it
People have countered them logically in previous topics… maybe you should make a thread about this specific topic, as I think the discussion is dotted around in various places, and it’d be good to have a single place where the main arguments are clear.
While I don’t see your scenario as likely, I also think it’s worth exploring the possibility of this threat, as even if it seems unlikely to me and many others, we don’t want to end up with a data equivalent of UST / Terra Luna (somehow the threat wasn’t obvious to many with that one, despite the obvious death-spiral risk).
I agree, this needs a separate topic. Like the one we had for 1 nano for 1TB (careful lads!), because it’s worthy of discussion and has grown bigger than the general update comments.
@TylerAbeoJordan you said ‘Well obviously I don’t think it’s too late.’ to redesign for not all data being temporary. I’m sure it isn’t too late technically speaking, but it kind of is if partners have been engaged on the basis that they are buying permanent data.
It is a huge selling point for a few reasons:
No ongoing cost - you don’t have to worry about forgetting to renew.
In a network where deletion of data just isn’t possible, there can be no accidental, malicious, or buggy deletion of data.
Information that nobody owns or is responsible for, but that someone was willing to pay to upload, will be available forever. That was the whole point as I understand it.
There is one thing that makes me uneasy: it is much less attractive for saving data that you know isn’t needed long term. I have a database that I back up nightly and want to keep for at least 30 days. If I have to do a restore I’ll want the latest backup that worked, but if there was corruption I’ll take one from a few days back. I hope I’d notice something is wrong within 30 days, but I keep 90 just in case. I don’t want to pay permanent-data prices for that, and I’d feel silly uploading 110MB each day that I pretty much know I will never download. So I’ll just keep uploading it to AWS S3 with a lifecycle (LCM) policy that expires it. I’m sure there will be others who don’t use Autonomi for this same reason.
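For anyone curious, the S3 lifecycle setup I mean is just a single expiration rule. A minimal boto3 sketch (bucket name and key prefix are placeholders, not anything real) would look roughly like this:

```python
# Minimal sketch of an S3 lifecycle rule that expires nightly backups after 90 days.
# Bucket name and key prefix are placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-db-backups",                        # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-nightly-dumps",
                "Filter": {"Prefix": "nightly/"},  # only touch the backup prefix
                "Status": "Enabled",
                "Expiration": {"Days": 90},        # keep 90 days, then delete
            }
        ]
    },
)
```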
But this is just one quibble, and not worth complaining about or compromising the ideal for.
This one (well, the negative of it) is glossed over by the ‘let’s have all data die off unless renewed’ argument.
How many times have people put up a ton of very useful data (data sheets, tutorials, blogs, and so on) and then years later have no clue what the login credentials were, or can’t even remember everything they put up? Much of it is very useful information to some or many people, even if it’s just explanations in forums.
Imagine the thousands of future forums that will just rot away because people forget every single post in every single forum they posted in. Imagine all the useful information, all the topics/threads that will become unreadable. Someone posts the fix for a motherboard bug/issue, something solved by doing one simple thing, but that post is never renewed because the person moved on, died, forgot, etc., and now some newbie getting a hand-me-down can’t get it going. They read through the topic, see missing posts, and the one they needed is gone.
It’s not just about being able to delete files, it’s about the utter loss of information. At least now, with the web, the data is still on drives even if the web page is removed, and the owner of the site/host can restore it or keep the site up for a long time. But with a 1, 2 or 3 year renewal period we’d just let the web rot away. (If it’s more than 3 years, then why have deletion at all?)
People simply do not keep notes of all the posts they make in all the forums, nor of all the files they uploaded under each account. If you think you’ll only ever use one account, think again about how many accounts you’ve had in the last 5 years. And if not you, then enough other people to make the web unusable. Imagine the thousands of forum posts you’d need to renew, the number of chunks that involves, and who would sort through thousands of their own posts to work out which few are the important ones to keep?
It’s not as simple as saying ‘renew it or let the file die’.
tl;dr
If this were simply a file-uploading service like a cloud, then yes, it’s workable to renew data or let it die. BUT this is a network designed from the ground up to be a perpetual knowledge network: not just perpetual files, but all the links and posts and blogs that go together to make a perpetual knowledge network.
If you want a cloud network where it’s just files going up, then build that network. This is not that, even if it looks like it in the beginning, until apps change it completely into the new internet.
Thanks @Josh, I am grabbing 0.001 every 12 hours thanks to you.
I recommend this route folks.
I’d never done swaps before, or handled all these side-chains, so it has been a maze, but somehow I got through and I don’t think I lost any crypto. Although it was all so complicated I can’t be sure.
Excuse my ignorance here, but do I need to have a Coinbase wallet that I allow it to access? What do you need to give Coinbase to make one of their wallets?
It sounds like you’re mainly interested in a mechanism for triage. When the system is stressed, a clear way to do triage and recover from that stress (by removing the oldest, ‘most expired’ data) would be useful for everyone, both nodes and clients.
Triage is an important idea to explore, but I really strongly feel that temporary data is not a good way to implement it.
To oversimplify a little, the benefits of temporary data go to the nodes, but the costs go to the clients. There are many reasons that temporary data is not a good idea, but this is probably the main one.
If you want a storage network that doesn’t do permanent data you should probably look elsewhere (there are many options for you). Permanent data is a very core part of this project, and for good reason. But I do agree with you, temporary data is more ‘natural’ and the economics are simpler in many ways. But the value of permanent data is enormous and if this project were to abandon it then it would no longer be the same project.
I’d love to hear more ideas about various ways to implement triage, especially in ways that align the costs/benefits and incentives.
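To make ‘removing the oldest, most expired data’ concrete, here’s a toy sketch of one possible triage policy (nothing like this exists in the node code; the chunk fields are hypothetical): when a stressed node is over capacity it drops the longest-expired chunks first and never touches anything still paid up.

```python
import time
from dataclasses import dataclass

@dataclass
class Chunk:
    address: str
    size: int          # bytes
    paid_until: float  # hypothetical: unix time when the paid-for period ends

def chunks_to_evict(chunks: list[Chunk], capacity: int) -> list[Chunk]:
    """Pick chunks to drop when a stressed node is over capacity,
    evicting the most-expired chunks first, never anything still paid up."""
    now = time.time()
    total = sum(c.size for c in chunks)
    evict = []
    # "Most expired" = earliest paid_until; only already-expired chunks qualify.
    for chunk in sorted((c for c in chunks if c.paid_until < now),
                        key=lambda c: c.paid_until):
        if total <= capacity:
            break
        evict.append(chunk)
        total -= chunk.size
    return evict
```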
For me, one way to keep permanent/persistent data yet allow “deletion” of old data is to implement archive nodes: rather than deleting, it’s shifting the burden of keeping those chunks to this type of node. In my opinion we cannot have them paid for via some sort of staking or any other similar scheme. When royalties were a thing, some sort of payment to archive nodes could have been made, which would have made them a lot more viable, and people could have allocated, say, a portion of their multi-TB NAS to store records.
Obviously many of these would be needed, as the amount of storage would be immense. But since they’d be storing records no longer held in any numbers by normal nodes, it’s not duplicating the storage; it’s the old data, which is typically much smaller than the more recent or more frequently used data. Each year the data expands a lot, so for every 100TB uploaded today there was perhaps only 1TB being put on the network 5 years ago.
Archive nodes would retrieve records for the network, and I presume those chunks would then go back into the normal nodes to be held.
It’d take a bit of work to get the flow of records to/from these archive nodes right, but it would be one way to help.
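Purely to illustrate the flow I have in mind (none of these types or calls exist in the real code, and the age threshold is made up): cold records drift to archive nodes, and come back to normal nodes when someone fetches them.

```python
import time

ARCHIVE_AGE = 5 * 365 * 24 * 3600  # hypothetical: records untouched for ~5 years go cold

class ArchiveNode:
    """Bulk, cheap storage for cold records (e.g. a slice of a multi-TB NAS)."""
    def __init__(self):
        self.cold = {}

    def store(self, addr, record):
        self.cold[addr] = record

    def retrieve(self, addr):
        # Hand the record back; normal nodes hold it again from here.
        return self.cold.pop(addr)

class NormalNode:
    def __init__(self, archive: ArchiveNode):
        self.store = {}              # addr -> (record, last_access_time)
        self.archive = archive

    def offload_cold_records(self):
        """Shift records nobody has touched in a long time to the archive tier."""
        now = time.time()
        for addr, (record, last_access) in list(self.store.items()):
            if now - last_access > ARCHIVE_AGE:
                self.archive.store(addr, record)
                del self.store[addr]

    def get(self, addr):
        """Serve locally if possible, otherwise pull from archive and re-hold it."""
        if addr in self.store:
            record, _ = self.store[addr]
        else:
            record = self.archive.retrieve(addr)
        self.store[addr] = (record, time.time())   # the record is 'hot' again
        return record
```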
THIS ASSUMES that there is even a problem in the first place
Then I see no difference from the current situation, where old data sits on normal nodes together with new data. You bring back the nodes, the data comes back.
But the problem is overprovisioning: when you CANNOT bring back the nodes you ran, because you have too few resources.
The obvious answer to what you said concerns people who aren’t educated enough to know how to handle that.
Thus simpler forms of resource checking when starting nodes are one way to solve this issue, to a large enough extent that it won’t bring down the network.
Those who bypass the checks are either smart enough to stop their nodes or few enough in number not to matter.
We now have the first network that checks one of the major resource factors: CPU usage.
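By way of illustration, the kind of pre-flight check I mean could be as simple as the sketch below (this is not what the node software does; psutil and all the thresholds are my own placeholders):

```python
import shutil
import psutil

# Hypothetical minimums per extra node; real values would need tuning.
MIN_FREE_DISK_GB = 35
MAX_CPU_PERCENT = 80
MIN_FREE_RAM_MB = 512

def ok_to_start_node(data_dir: str = ".") -> bool:
    """Refuse to start another node if the machine is already short on resources."""
    free_disk_gb = shutil.disk_usage(data_dir).free / 1e9
    cpu_percent = psutil.cpu_percent(interval=1)       # sample CPU over 1 second
    free_ram_mb = psutil.virtual_memory().available / 1e6
    return (free_disk_gb >= MIN_FREE_DISK_GB
            and cpu_percent <= MAX_CPU_PERCENT
            and free_ram_mb >= MIN_FREE_RAM_MB)

if __name__ == "__main__":
    print("OK to start a node" if ok_to_start_node() else "machine too loaded, skip")
```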
I don’t think a token price dip would be likely to initiate the problem here, because if the token price dips, it’ll mean that for a time uploads get cheaper in fiat terms, and demand for uploads will therefore increase until there’s a balance again between supply and demand.
But if a large number of nodes start to leave for other reasons (e.g. legal issues for node operators?) before any other price / supply / demand shocks, it could cause problems as:
1) Many nodes leave for some/any reason
2) Data from lost nodes is redistributed to remaining nodes
3) Remaining node fullness increases, which raises store cost
4) Demand for uploads falls due to the increasing cost
5) Node revenue falls while node overhead rises, causing more nodes to exit, and then loop back to 2) for a possible death spiral.
I guess 1) could be caused by a sustained reduction in demand to upload new data to the network, e.g. if a competing system offers better / faster / cheaper storage, but as long as Autonomi is competitive, I don’t imagine this will happen… has there ever been a time when cloud storage providers & datacentres have generally had to scale back operations due to a lull in demand for using & storing data on Internet services?
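To show how steps 1)–5) could compound, here’s a toy numerical sketch; every coefficient is made up and the relationships are deliberately crude, so treat it as an illustration of the feedback, not a model of the real economics.

```python
# Toy sketch of the loop in steps 1)-5). All numbers are invented; the point is
# only to show how a node exodus can feed on itself once it starts.

CAPACITY = 100.0   # storage units each node offers (hypothetical)
OVERHEAD = 5.0     # cost of running a node per step (hypothetical)

nodes = 10_000.0
data = nodes * CAPACITY * 0.5            # network starts half full
nodes *= 0.7                             # 1) many nodes leave for some external reason

for step in range(6):
    fullness = min(data / (nodes * CAPACITY), 0.99)   # 2)-3) data spreads, fullness rises
    store_cost = 1.0 / (1.0 - fullness)               # 3) store cost climbs with fullness
    demand = max(0.0, 1_000.0 * (4.0 - store_cost))   # 4) uploads fall as cost rises
    revenue_per_node = demand * store_cost / nodes
    if revenue_per_node < OVERHEAD:                   # 5) unprofitable operators exit...
        nodes *= 0.9                                  # ...and the loop repeats from 2)
    data += demand                                    # whatever is still uploaded persists
    print(f"step {step}: nodes={nodes:,.0f}  fullness={fullness:.2f}  "
          f"store_cost={store_cost:.2f}  revenue/node={revenue_per_node:.3f}")
```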
I think this would be a great option: it’d reduce the overhead on normal nodes and help keep the network competitive, which makes the above scenario less likely.
It could work nicely if nodes & archive nodes have a market between them; nodes pay archive nodes a small amount to offload old data, which allows the nodes to accept more new data, which they get paid a higher price for.
If something archived becomes too popular again, an archive node could pay to offload it back to standard nodes, if needed.
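Sketched crudely (all function names and thresholds below are hypothetical), the decision each side of that market makes might look like:

```python
# Crude sketch of the node <-> archive-node market described above.
# Every function name and number is hypothetical.

def node_should_offload(archive_fee: float, chunk_size: int,
                        expected_earning_per_byte: float) -> bool:
    """A normal node offloads an old chunk when the fee paid to the archive node
    is less than what it expects to earn by filling that space with new data."""
    return archive_fee < chunk_size * expected_earning_per_byte

def archive_should_restore(fetches_per_day: float, serving_cost_per_fetch: float,
                           restore_fee: float, horizon_days: int = 30) -> bool:
    """An archive node pays to push a chunk back to standard nodes once serving it
    repeatedly is expected to cost more than the one-off restore fee."""
    return fetches_per_day * serving_cost_per_fetch * horizon_days > restore_fee
```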