[quote="jreighley, post:40, topic:5544, full:true"]People will want to use SAFE because they want the security from it. They want their files to be backed up. They want their files to be in a place where hackers cannot get to them. They want confidence that they can reach their files from anywhere … It’s a Cooperative – It should break even in the end.
[/quote]
Hey mate, you’re reading my mind. That is exactly how I want it to be. But for it to become reality I feel it necessary to consider all possible cases of system abuse first.
Indeed, my models above are for the relationship with external clients. If I manage to build a model for internal clients I’ll post it here as well.
Right now I can think of one scenario of internal abuse:
- I’m a mad photographer.
- I farm, abide by the rules, and do whatever it takes to earn an amount of safecoins.
- Once I have them, I immediately use them to upload lots of rubbish into the network.
- Then I destroy my vault and never farm again.
I think there may be a problem here. Whatever good I did for the network was for a limited time. However, the obligation the network has taken on – to keep my files – is forever… There is an imbalance here.
My gut feeling is that it needs to be apples for apples: you keep my files for a year, I keep yours. Making obligations across time – like I keep a file this year, you keep mine next year – already looks a bit dangerous to me (intuitively). And an eternal obligation feels like a recipe for trouble.
Today I’ve started putting into formulas what exactly feels wrong to me about such an imbalance.
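Here is a first rough formalisation (the symbols and framing are purely illustrative, just to make the feeling precise). Say I farm $c$ GB of capacity for $t$ years, then upload $d$ GB and leave:

$$S = c \cdot t \ \text{(GB·years I contributed)}, \qquad O(T) = d \cdot T \ \text{(GB·years the network owes me over a horizon } T)$$

$$\lim_{T \to \infty} \frac{O(T)}{S} = \infty$$

A finite contribution buys an unbounded obligation – that is the imbalance.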
You turn off your computer and everyone else gets paid just a bit more to make up for it. It isn’t like storing 3 or 4 more chunks imposes a massive burden on them – their machines are already on, they are already connected. They just get paid more for what would have been idle time anyway.
Then the next rubbish dropper has to pay higher rates.
The cost of storage ought to be what it will cost the network to store your data forever. I don’t think that needs to be too cheap. I am sure there is a formula that will work out just fine and dandy… The price needs to be high enough to prevent abuse, but what you are suggesting isn’t really abuse. If you earned enough to store it, it’s stored.
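For what it’s worth, there is a standard back-of-envelope argument for why “store forever” can still carry a finite price, assuming storage costs keep falling geometrically (the numbers below are invented for illustration). If keeping 1 GB costs $c_0$ this year and drops to a fraction $g < 1$ of that each following year, the total cost of keeping it forever is a convergent geometric series:

$$C_\infty = \sum_{t=0}^{\infty} c_0 \, g^t = \frac{c_0}{1 - g}$$

For example, $c_0 = \$0.03$ per GB·year with $g = 0.7$ (costs falling 30% a year) gives $C_\infty = 0.03 / 0.3 = \$0.10$ per GB, paid once, for forever.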
Choosing to farm is like entering a lottery pool. If you play you will win, if you don’t you won’t…
I read somewhere that there will be a delete feature, and/or limited-time storage.
For private storage, it would make sense to have a delete feature.
Would it be feasible to delete your data and collect a portion of the coin back? Like in Valve’s Dota 2, where you can delete a cosmetic item and in return get nothing, a treasure, an item, or a rare. Lots of duplicates oversaturate the market, which causes users to delete them for a return worth nothing, the same, or more. This would be an awesome idea for SAFE. But again, this might lead to abuse.
The network doesn’t know which files are yours. It doesn’t know if only one person stored a file or 100,000 people stored it. So if you want it deleted, the network cannot delete it, lest it deletes somebody else’s copy as well.
The same file will have the same chunks and the same hashes and therefore they would be routed to the same vaults.
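A toy sketch of why that is. This uses plain fixed-size chunking and SHA-256 (the real network uses self-encryption, so the details differ), but the content-addressing consequence is the same:

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # simplified fixed-size chunking, just for the sketch

def chunk_addresses(data: bytes) -> list[str]:
    """Content-addressed chunk names: the address is simply the hash of the chunk."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

alice_file = b"the same popular video" * 100_000
bob_file = b"the same popular video" * 100_000

# Identical content -> identical hashes -> identical network addresses,
# so both uploads are routed to the same vaults and stored only once.
assert chunk_addresses(alice_file) == chunk_addresses(bob_file)
```

Because the address is derived from the content itself, a chunk carries no record of whether one person or 100,000 people uploaded it.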
I think some of the structured data RFCs discuss deletion of structured data – but not of typical files.
I do believe there has been some discussion of archive vaults that will hold the data that is rarely used… But it’s all discussion, I think.
Working with constants is not SAFE economics, so any answer calculated is not representative. And when you set the conditions for an outcome, it’s no surprise that said outcome is what gets calculated.
I think you already know the answer, as you’ve said it already.
There will always be external conditions that could make SAFE fail. Pick an obvious one like economic collapse, which can in and of itself destroy the ability of the masses to use all higher tech and limit it to the (new?) elite. In that case SAFE fails because it relies on the masses, who now cannot supply the resources. It is very difficult to design a system for the masses, using tech, that will survive such events in its optimal state. If internet connectivity is still usable by the masses after such events, then SAFE will likely survive and work satisfactorily. PUT cost would be cheap if enough people can be farmers, and expensive if not. The rich then are storing data and the farmers gain some much-needed income. Who knows, though; it’s guesswork, because you haven’t defined how bad the collapse you suggest would be.
Unfortunately, if you make your own equations for how your model works then it’s going to be contrary to the SAFE models, and up to now it was unclear that you were proposing a completely separate model and asking what-if.
It works like this: think of your data, once encrypted and uploaded, as not really being data anymore (when is data not data…). All that is left are chunks of encrypted something.
Those chunks can be de-duplicated so one chunk in your whole data file might be a chunk in someone else’s different data file. The network does not and cannot tell which pieces were de-duped. Permanently removing one of your files from the network is therefore impossible to implement securely.
In this case, without being able to tell the network to “drop” a chunk or chunks, there’s no way to “help” the network by “freeing up space” and therefore no way to deserve any safecoins.
However there is the ability to delete a file. But not in the sense that you’re thinking of.
Again, when you upload your data to the network, it’s no longer data, it’s randomness. It’s noise. It’s chunks. Your data is no longer data.
For private data, the only thing that can pull all of those random chunks from the network and piece them together directly is the info in your datamap. If you delete the entry from your datamap, poof! Your data is gone… in the sense that it’s irretrievable.
Notice that the data is still being stored on the network, but it cannot be accessed as the whole file. This is what it means to “delete” your data. Nothing more, nothing less. (IIRC)
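A minimal sketch of that datamap idea (the structure and the per-chunk random keys are stand-ins I made up; the real network derives keys via self-encryption):

```python
import hashlib
import secrets

network = {}   # the network: a content-addressed chunk store with no owner info
datamap = []   # held privately by the client

def store(chunk: bytes) -> None:
    key = secrets.token_bytes(32)                 # stand-in for a self-encryption key
    pad = (key * (len(chunk) // len(key) + 1))[:len(chunk)]
    ciphertext = bytes(b ^ k for b, k in zip(chunk, pad))
    address = hashlib.sha256(ciphertext).hexdigest()
    network[address] = ciphertext                 # the network only ever sees noise
    datamap.append((address, key))                # only the datamap links address -> key

store(b"my private holiday photos")

# "Delete" = drop the datamap entry. The chunk still sits in `network`,
# but nothing can locate it or decrypt it any more.
datamap.clear()
```

After `datamap.clear()` the ciphertext is still stored, which is exactly the “deleted but not removed” state described above.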
Note1: Public data is a bit more complicated as someone could potentially store the chunk/file info in their datamap for access later. In that sense, you may have deleted the data so that you cannot access it, but they still can. EDIT: Private Shares have this same attribute.
Note2: The only way to re-obtain that data once deleted is to re-submit it to the network. It may be possible to do this offline, computing the individual data chunks and reconstructing an entry to go in your datamap; in that case you would already have the data locally.
It’s not the “pay” part that is the problem. It’s the “per year per GB” part. As I said, it requires a better understanding of how data is stored and retrieved on the network to see why what you are proposing is unworkable.
When you store a file to the network, it is self-encrypted and sent out in “meaningless” (without the data map) chunks. Monitoring account managers check that you have paid for resources to store the file and authorize storage. Then those chunks go out and are stored with NO association or link to you or your account. Maintaining any such link would probably mean several layers of complexity and burden on the system, and expose a bunch of security problems.
Anyone can retrieve any data stored on request. The point is that without the data map, no one knows the address of the data or the means to put it together with the other file chunks, or decrypt the file.
The cost to store data will not be super high, but it will be enough to discourage spam.
Believe me, I have a sense that there are other questions that could be asked about how this all balances and works out. But a time limit on storage isn’t practical as a means of averting the problem you’re looking at. It would, in itself, add a potentially bigger burden and lose the security-simplicity of the encryption/storage model.
I’m exhilarated, however, that we’re out of the “impossible” territory!
It’s now difficult and impractical, but not impossible.
Good! Progress!
Indeed, what I suggested was merely a sketch of a possible design. If it were to be considered seriously, a lot more detail would need to be added and the pros and cons would need to be carefully weighed. My personal feeling is that this path may lead to a system a lot more adaptive and resilient than the one currently planned.
Yes we’d get more complexity in code, but possibly less complexity in the economy. Which might be a good trade-off: code might be easier to fix than the economy.
Also, don’t forget to compare with the market pricing of services for long-term, mainly write-once read-only storage, such as Amazon Glacier.
Also, MaidSafe as an open-source solution could always be run by the users directly, or will most likely fork into whatever direction suits its users best, and will probably fragment in the long run into groups of like-minded people who host each other’s files (see also Tahoe-LAFS and the various grids (groups) that eventually emerged, TestGrid – Tahoe-LAFS).
Great! I suggest you get busy and become a core developer. Show ’em how it’s done.
[Edit:] Sorry. That sounds even snarkier than I meant it.
But no, we are still at “not possible.” Aspects of what you propose have been considered and discussed at length. They have not been adopted because it is not possible to implement them without disrupting the super high security and anonymity model of the network. The complexity factor is part of that.
Could the network reimburse people who delete their files from the network, in an amount equal to the safecoin the user originally paid divided by the number of users that “own” the data, for want of a better word? That way, as the value of safecoin increases, people would be incentivized to delete unpopular/useless data from the network.
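As a worked example of that proposal (numbers invented): with $p$ the safecoin originally paid and $n$ the number of accounts referencing the data,

$$\text{refund} = \frac{p}{n}, \qquad \text{e.g. } \frac{1 \text{ safecoin}}{4 \text{ owners}} = 0.25 \text{ safecoin per deleter.}$$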
If you check the C++ codebase back to last Feb or so, this is what happened. It was a huge security hole though, and very (very) difficult, as it meant the client managers had to log what you store (imagine the video with 1 billion users etc.). This puts a huge strain on the network (account info/state) and introduces many edge effects we could not calm down.
Not saying it can never happen, but even super-efficient Bloom filters or two-way Bloom filters (see Gavin Andresen’s attempt here) are not effective enough to reduce this state, even if it can be obfuscated (which it can, to a degree).
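To illustrate the state problem (a textbook Bloom filter; the sizes are invented for this sketch): even a compact per-account record of “what did this account store” adds up fast across a billion accounts, and its answers are only probabilistic, so a refund issued on a false positive becomes an attack vector.

```python
import hashlib

M_BITS = 8 * 1024 * 1024   # a 1 MiB filter per account (invented size)
K_HASHES = 7               # hash functions per item

def _positions(item: bytes) -> list[int]:
    # Derive K_HASHES bit positions from the item.
    return [
        int.from_bytes(hashlib.sha256(bytes([i]) + item).digest(), "big") % M_BITS
        for i in range(K_HASHES)
    ]

class BloomFilter:
    def __init__(self) -> None:
        self.bits = bytearray(M_BITS // 8)

    def add(self, item: bytes) -> None:
        for p in _positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item: bytes) -> bool:
        # False positives are possible: "True" only ever means "maybe".
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in _positions(item))

bf = BloomFilter()
bf.add(b"chunk-address-123")
assert bf.maybe_contains(b"chunk-address-123")   # always True once added
# bf.maybe_contains(b"never-stored") can still return True occasionally,
# and 1 MiB x 1 billion accounts is ~1 PB of hot account state.
```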
So delete becomes a big problem with de-duplicated shared data. Certain unique data could hold a signature (32 bytes) though, and we are looking in that direction, but again there are many edge effects and hacker/vandalism opportunities there as well. So great care and attention is required from many angles. It’s not forgotten; we are just not smart enough yet.
The network measures supply and demand using spare capacity. To be clear, it saves up to 50% more copies than it needs and uses the max/min values of these sacrificial copies as a measure of supply/demand. So yes is the direct answer, but only for proven, used spare capacity, if that makes sense. It does not pay for any promise of resources, only proven resources.
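A toy illustration of that mechanism (the copy counts and the pricing curve are invented; only the direction of the feedback matches the description above):

```python
# Toy model: sacrificial copies as a supply/demand gauge.
REQUIRED_COPIES = 4      # copies the network must keep of each chunk
SACRIFICIAL_MAX = 2      # up to 50% extra copies kept while space allows

def spare_signal(sacrificial_alive: int) -> float:
    """0.0 = proven spare space exhausted, 1.0 = plenty of proven spare space."""
    return sacrificial_alive / SACRIFICIAL_MAX

def put_price(base_price: float, sacrificial_alive: int) -> float:
    """PUTs get more expensive as proven spare capacity disappears."""
    return base_price / max(spare_signal(sacrificial_alive), 0.1)

print(put_price(1.0, 2))   # plenty of space   -> 1.0
print(put_price(1.0, 1))   # space shrinking   -> 2.0
print(put_price(1.0, 0))   # space exhausted   -> 10.0
```

The key property is that the signal comes from copies the network is actually holding, i.e. proven resources, not promises.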