@janitor - I believe my effort to make my thoughts clear has triggered political feelings in my reader, which has prevented my reader from understanding my position, so let me be less colorful and more boring in my speech patterns.
My position is that old data is often worthless and should not be preserved until the sun burns out. It’s actually worse than worthless; it has negative value, because even if we had infinite storage space available to us, it is still noise that indexers have to index needlessly, that hard-drive heads have to skip across on their way to something valuable, and that electricity has to be expended to preserve.
I have read comments from some people suggesting that SAFE storage would be charged for during the write operation, and that after that the data would be preserved forever. My gut reaction to that is negative, for the reasons stated above.
Possibly this will be the case, but in some cases data not accessed for decades turns out to be valuable. In terms of garbage collection I think the C++ way is good: do not create garbage in the first place.
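To make “the C++ way” concrete, here is a minimal sketch (my own illustration, not MaidSafe code): ownership is tied to scope, so cleanup is deterministic and nothing is ever left lying around for a collector to find.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// A chunk buffer owned by the scope that creates it: when process()
// returns, the memory is released immediately and deterministically,
// so there is never unreachable "garbage" waiting for a collector.
void process(std::size_t n) {
    auto buffer = std::make_unique<std::vector<char>>(n);
    // ... fill and use *buffer ...
}   // buffer destroyed here
```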
Seriously, there will be an archiving system to hold chunks in larger containers to reduce admin overhead. If the network ever fills up we can look at removing older data, but at the same time we should realise that space is getting much cheaper, and that recognising old private data as junk is impossible if it is secured and fairly unique (like many of the great papers of the past): it will look just like manually created junk parts. So the issue is possibly non-existent, will be handled by archiving, or will require some further action, like having data verified or shared amongst more than one person (though hackers would immediately overcome this by creating junk and sharing it). Like everything, there is an answer; it won’t be obvious, but the right answer will be super simple.
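For what it’s worth, here is a rough sketch of what “holding chunks in larger containers” could look like; the ArchiveContainer name and layout are my own assumptions, not the actual design:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical archive container: concatenates many rarely-touched
// chunks into one blob and keeps an index from chunk hash to
// (offset, length), so thousands of small chunks become one object.
struct ArchiveContainer {
    std::vector<std::uint8_t> blob;  // packed chunk bytes
    std::map<std::string, std::pair<std::size_t, std::size_t>> index;

    void add(const std::string& hash, const std::vector<std::uint8_t>& chunk) {
        index[hash] = {blob.size(), chunk.size()};
        blob.insert(blob.end(), chunk.begin(), chunk.end());
    }

    std::vector<std::uint8_t> get(const std::string& hash) const {
        auto [offset, length] = index.at(hash);
        return {blob.begin() + offset, blob.begin() + offset + length};
    }
};
```

The point is simply that thousands of old chunks become one object to manage, which is where the admin-overhead saving comes from.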
Personally I think this will be a non-issue, but we will certainly see as we move along through the years to come.
I disagree with this. In fact I think that the strength of the perpetual data model (at least at a theoretical level; we will see how it works in practice) is that it encourages people to put NEW data on the network, which strengthens the SAFE network as a whole. So if you are valuing the SAFE network as a whole, on the one side you have data storage, which is constantly getting cheaper, and on the other side you have data, much of which decreases in value (down to a marginal archival point, as there is almost always some floor for the value of data).
So at least in terms of bootstrapping the network, this is an absolutely necessary step, and I suspect that allowing and encouraging people to constantly PUT new data will be more important to overall network health and viability than the cost of maintaining the old data.
Plus, we only need to preserve all data until SAFE II or better comes out, not until our sun goes supernova. Maybe a year, 10 years, 100 years, who knows, but it will all evolve as it should.
Imagine all those Romans saying, let’s torch all that old crud data in that Alexandria library, who will ever want to read that rubbish? I know we can have truly crud data digitally, so it’s not a 100% fair comparison, but nevertheless there are many scientific papers which now reveal advances that just were not obvious at publication. So care is needed in identifying garbage.
Data capacity follows Moore’s law, so this old information will only take up a very small part of the total space.
And, as Irvine says, even though it has not been used in a long time, this information can be very valuable to someone, and no one should have the power to make it disappear.
The way to “identify the garbage”, to me, was to have the person storing it continue to pay storage fees. When it is no longer valuable enough to anybody on earth to pay to store, then it is removed.
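For contrast with the pay-once model, here is a minimal sketch of that rent-based idea (purely illustrative; RentedStore and its fields are made up): each item carries a paid-until time, anyone can extend it, and a periodic sweep removes whatever nobody values enough to fund.

```cpp
#include <map>
#include <string>

// Hypothetical rent-based store: data survives only while someone pays.
struct RentedStore {
    std::map<std::string, long> paid_until;  // data id -> expiry (unix seconds)

    // Pay to keep `id` alive until at least `until` (unix seconds).
    void pay(const std::string& id, long until) {
        long& expiry = paid_until[id];
        if (until > expiry) expiry = until;
    }

    // "Identify the garbage": sweep out everything nobody paid to keep.
    void sweep(long now) {
        for (auto it = paid_until.begin(); it != paid_until.end();) {
            if (it->second < now) it = paid_until.erase(it);
            else ++it;
        }
    }
};
```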
However, I’m willing to share the dream of perpetual storage. It’s definitely an exciting dream and I want it to succeed.
It would then seem to make sense for people to quit comparing SAFE storage costs to Google Drive / Dropbox storage costs, as SAFE should really be priced at a premium. The network is entering into a commitment with no expiration date in return for some up-front fee. That fee should reflect the magnitude of that commitment, at least to the point that Google Drive prices are not a fair comparison.
The plan is that the fee represents what farmers are prepared to take for it (earn), and as time goes by they must farm above the network average (or are heavily encouraged to), and that means over any ‘old’ data as well. As disk prices go down and capacity goes up, this also provides a balancing mechanism. It’s an interesting concept that seems weird at first; I cannot think of a good analogy, though. If we remember back only a few years, all the world’s data then could fit on a single computer’s hard drive today, and in 5 years a single hard drive could hold the capacity of all the world’s computers today, at the current rate of growth (Eric Schmidt of Google presented this factoid at a conference).
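As a toy illustration of that balancing mechanism (my own simplification, with a made-up farming_rate function, not the actual vault algorithm): the reward per chunk served rises as spare capacity shrinks, pulling in more disks exactly when the network needs them, old data included.

```cpp
// Hypothetical farming-rate curve: rewards rise as spare capacity
// shrinks, so more farmers (and disks) join when the network is nearly
// full, and rewards fall again as cheap new capacity arrives.
double farming_rate(double used_bytes, double total_bytes) {
    double utilisation = used_bytes / total_bytes;   // 0.0 .. 1.0
    // Grows steeply near full capacity; tends to 0 when nearly empty.
    return utilisation / (1.0 - utilisation + 1e-9);
}
```

With a curve shaped like this, a half-full network pays a baseline rate while a 95%-full network pays roughly 19 times as much per chunk served, which would be the “heavy encouragement” to keep farming.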
Why would that land be yours? Did you create the 3D space it occupies? Did anyone else before you?
When discussing property rights, it is important to define what property is and how it can belong solely to an individual. If it is just because X got there first and/or Y bought it after, I find the argument lacking - it is just a monopoly maintained through force, preventing others from accessing the resource.
Land ownership is a prickly topic. You cannot exist without a 3D space to dwell in and feed yourself from. Therefore you can argue that land users owe others compensation for their displacement, not the other way around.
To add, I agree that property which is the result of trade or the mixing of labour is a strong argument. However, a location cannot be property on the basis of this argument alone - only the soil and your stuff on it can be.
Probably very OT, but we have to be careful with property definitions.
I rather like the idea of all data being recorded forever. I find it sad that this and other forums could evaporate in years to come when the host decides to pull the plug. I think it would be great to have an archive which documents the progress of mankind, which can never be erased.
With storage constantly growing, I think there is a strong argument for leaving everything where it was put. If we need to archive it to keep performance swift, fair enough though.
I too am greatly in favour of trying to keep all data, and hopefully data storage will keep becoming cheaper and more efficient so we can keep it up. Deduplication should also help a lot.
And one major factor in the cost of data space on SAFEnet will be the Safecoin price, which is probably going to be much higher than needed to incentivize enough resources for the network.
@Marin Safecoin price has little impact on this, because the rewards are adjusted to keep farming incentives just ahead of demand, which greatly reduces sensitivity to the fiat exchange rate.
Yeah, that won’t be possible, and it’s also impractical.
But we’ll see I guess…
I definitely plan on finishing a proposal I’ve started about coin issuance, because the current plans are in my view lacking, but alas I’m a lazy **** so it takes longer than expected (I also blame the holidays).
And since I’m no expert, I have to research some stuff and talk to people who actually are experts and have experience; luckily Bitcoin ensured that they are easy to come by…
Talking about value stabilization mechanisms… don’t do this! It’s one of the most hare-brained, unworkable schemes I’ve ever come across.
This is a longish article but interesting and cautionary:
MaidSafe is mentioned in the comments though (positively)… so the word is spreading.
What are your arguments for that position? The current plan as laid out by David Irvine is
It is definitely possible, and I’m also convinced that it’s actually desirable. It’ll cause SafeCoin issuance to follow the network’s growth in the early days, preventing excessive wealth concentration by the early adopters, and it’ll keep the network storage capacity growing steadily.
During Christmas I created my own invite gift, called Travis Trust Token Pre-SafeCoin (TTTSAFE), via Counterparty: http://1drv.ms/1IxGuOE (asset info: http://counterpartyexplorer.com/assetInfo/TTTSAFE). I created 100,000 of them so I could back each TTTSAFE with a Safecoin once the network goes live.
I printed them on some nice certificate-looking pieces of paper (given to family and friends), each with a unique secret key/QR code, so they can later send the TTTSAFE back to me with their Safecoin wallet address and redeem it for hard Safecoin. That way, when half of them lose the certificates, it will be no big deal.
My goal is to give away at least 10,000 Safecoins to people I know and am close with. It is a win-win for me. If it gains a lot of value, then I won’t be the only wealthy person I know (they also will be less likely to get pissed or ask for money). If Safecoin doesn’t do well, at least I have 70 to 100 of my closest friends and family on the network to help build it into what my investment is intended for: to be a part of building a future I can be proud to pass down to our children.
If anything, MaidSafe is actually a solution to that, since it’s so efficient. Right now there are lots of people with free space that is unused while others have a shortage, and more importantly, there is a ton of duplication of data. MaidSafe’s deduplication could free up so much space worldwide.
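A minimal sketch of why deduplication frees that space (self-encryption omitted, and std::hash standing in for whatever cryptographic hash the real network uses): a chunk is keyed by its own content, so the millionth copy of a popular file costs no extra storage.

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical content-addressed store: a chunk is keyed by its own
// hash, so storing the same bytes twice consumes space only once.
// (A real network would use a cryptographic hash such as SHA-256;
// std::hash is a stand-in to keep this sketch self-contained.)
struct DedupStore {
    std::map<std::size_t, std::string> chunks;  // hash -> chunk bytes

    // Returns the chunk's key; duplicates are detected and not re-stored.
    std::size_t put(const std::string& chunk) {
        std::size_t key = std::hash<std::string>{}(chunk);
        chunks.emplace(key, chunk);  // no-op if key already present
        return key;
    }

    const std::string& get(std::size_t key) const { return chunks.at(key); }
};
```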