The network cannot work based on intentions or declarations from the vaults. This is a basic design principle for a reliable autonomous network.
The use of sacrificial space has several advantages: it gives a good indicator of the storage supply, creates a buffer of space that is available immediately in critical situations, and makes it difficult for someone with huge resources to influence the farming rate for their benefit.
Thanks @digipl, that was useful reading. @neo appears to think that sacrificial data is no longer intended to be used (it doesn’t seem like a clean, efficient solution to the problem anyway).
So it appears that the economics of the network are far from being set in stone. Looking forward to seeing how this plays out.
Either way, I’ll throw my 2c in on how I think the show should be run.
For the moment they have moved away from sacrificial chunks as a method, but I am sure they will have to implement some similar mechanism or come back to it.
I seem to remember reading that a new vault will be told to fill all its space with specific data and, through a crypto challenge where the vault has to retrieve some/all of it and create a hash in a certain way, prove it has that amount of space. This at least shows the vault can initially store that amount and isn’t overstating things, which would be important for determining available space. Then the vault is penalised as normal if it cannot store that amount. Remember, a vault is not paid according to reported size but only when it retrieves chunks.
Of course you can adjust, but it probably requires a vault reset and losing some of the node’s ‘age’. In effect this gives what you suggest (maybe longer). This works very much against quick changes.
Then again you could simply run up a second vault when you want to increase space significantly instead of increasing the first vault. Potentially you could earn more too.
But if your vault is nowhere near full, then increasing space is not going to benefit you till the original space is used up.
EDIT:
I agree that sacrificial chunks were an easy way. I guess they are trying to get away from the network traffic/work of storing extra chunks by moving to a vault challenge that determines the maximum space in the vault. That way a simple command has the vault fill itself up with data calculated from the command, then run a crypto algo over the data and return a result. This way the total usable space of the vault can be proved. And if it is later found to be false, then that vault is deemed bad and not used.
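As a thought experiment, a minimal sketch of that challenge/response shape, assuming a fill phase followed by a later spot-check. Every name here is invented for illustration; none of this is the actual vault protocol:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Toy derivation of one block from the network's seed; a real scheme would
// make this deliberately expensive so it cannot be redone on demand.
fn derive_block(seed: u64, index: u64) -> u64 {
    let mut h = DefaultHasher::new();
    h.write_u64(seed);
    h.write_u64(index);
    h.finish()
}

// Fill phase: the vault materialises every block it claims space for.
fn fill(seed: u64, n_blocks: u64) -> Vec<u64> {
    (0..n_blocks).map(|i| derive_block(seed, i)).collect()
}

// Spot-check phase: the verifier names a few random indices and the vault
// must hash the blocks stored at them. A vault that discarded the data would
// have to regenerate blocks on demand, which should be detectably slow.
fn respond(store: &[u64], challenge_indices: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    for &i in challenge_indices {
        h.write_u64(store[i as usize]);
    }
    h.finish()
}

fn main() {
    let store = fill(42, 1 << 20); // claim 8 MiB of space (2^20 u64 blocks)
    let proof = respond(&store, &[3, 77_777, 900_001]);
    println!("spot-check response: {proof:x}");
}
```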
NOW the question is “What will be used in the end?” I’d say sacrificial chunks have a very good chance of ‘winning’ out.
EDIT 2: I cannot find anything about what I said above, so feel free to take it as fantasy. The RFC still says sacrificial chunks, and I did find an old comment by David from last year about using data chains for this, but I cannot see how that would actually solve the free-space determination.
How about having the network not try to determine the amount of free space, and instead attempt to fill all the available storage with duplicated data? The network farming incentives would then target a specific amount of duplication. When new data is put to the network, vaults assigned this new data would discard some other data that is further from that vault in XOR distance, thereby making a small reduction in the amount of duplication.
This is the principle of sacrificial chunks. The difference being that you don’t have to completely fill vaults; just a couple of extra chunks above the max copies of each chunk is enough.
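A rough sketch of that eviction rule, using 64-bit names for brevity where the real network uses 256-bit XorNames (the function and its signature are assumptions for illustration):

```rust
// A full vault accepts a new chunk by discarding whichever stored chunk lies
// furthest from the vault's own address in XOR distance.
fn store_with_eviction(
    vault_name: u64,
    store: &mut Vec<(u64, Vec<u8>)>,
    chunk: (u64, Vec<u8>),
    capacity: usize,
) {
    if store.len() >= capacity {
        // Evicting the furthest chunk slightly reduces that chunk's
        // network-wide duplication, which the farming algorithm can react to.
        if let Some(pos) = (0..store.len()).max_by_key(|&i| vault_name ^ store[i].0) {
            store.swap_remove(pos);
        }
    }
    store.push(chunk);
}

fn main() {
    let vault = 0b0000;
    let mut store = vec![(0b0001, vec![1]), (0b1111, vec![2])];
    store_with_eviction(vault, &mut store, (0b0010, vec![3]), 2);
    // 0b1111, the furthest name from vault 0b0000, was evicted to make room.
    assert!(store.iter().all(|(name, _)| *name != 0b1111));
}
```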
The economics of Safe could greatly impact the degree of its success.
As the technical hurdles get ever smaller, more focus should shift to making sure the economic model is good, and also sufficiently flexible to be tweaked post-launch if needed.
So, to earn the most, I’d profit from having multiple accounts/VMs with relatively small vaults, instead of one larger account with tons of storage that would take ages to hit that 30% trigger?
Yes, SAFE isn’t about “the more storage you provide, the more you make”. It’s more like: the more Vaults you run, the more you make. And the longer they are online, the more you make (node ageing). But things could change: if more and more people start Vaults because of good prices, the Farming Reward drops. What would you do in that situation? Stay online without making a profit? Or go offline and start all over again with a node age of 1 after a full restart? Lots of dynamics involved here.
I’d stay online without making a profit, knowing a node age of 1 would earn me less than a node age of 100+. Unless there are other forces at play.
It sure sounds like virtualised instances are going to be key to earnings, so that sharding one operating system into 10, 100 or 1000 instances with smallish vaults each will be the way to maximise earnings, which will be the primary goal of those trying to monetise their involvement here.
Huge bandwidth, uptime and storage, but with the storage split into many instances of good uptime and moderate size.
I suspect we will only really know when vaults with safecoin are live. I am sure the algorithm will need refining over time to balance out incentives.
That 30%, @anon40790172, is only an approximation from the RFC algo.
That 30% is network wide, not your vault.
You are only paid on the chunks you deliver, using what some refer to as a ‘lottery’, but it’s a deterministic mathematical algo.
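For illustration only, a hedged sketch of what such a deterministic ‘lottery’ could look like; the hash inputs and the farming_divisor parameter are stand-ins I’ve assumed, not the RFC-0012 specifics:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// On each chunk delivered (a GET), whether the vault earns a reward is
// decided by a hash, not by randomness: same inputs, same outcome. The hash
// inputs and `farming_divisor` are illustrative, not what RFC-0012 defines.
fn farming_attempt_wins(chunk_name: u64, vault_name: u64, farming_divisor: u64) -> bool {
    let mut h = DefaultHasher::new();
    h.write_u64(chunk_name);
    h.write_u64(vault_name);
    h.finish() % farming_divisor == 0
}

fn main() {
    // Over many GETs the win frequency tracks 1/farming_divisor, which is
    // how a network-set rate can steer rewards without any actual lottery.
    let wins = (0..1_000_000u64)
        .filter(|&get| farming_attempt_wins(get, 7, 1000))
        .count();
    println!("{wins} rewards in 1,000,000 GETs (expected ~1,000)");
}
```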
True, but there are limits of course. Bandwidth is one; CPU usage is another when you have a lot of vaults, especially when many nodes have to do a lot of work at the same moment.
Also remember that a vault is also a node, so each one will be caching content and routing chunks as they travel through the network. The bandwidth used by the vault itself is a small part of the total bandwidth that a vault===node will be using.
So there will be a sweet spot for the right number of nodes.
Also, a large vault will eventually be serving up approximately as many chunks as multiple small vaults of equivalent total size.
This is a really interesting and difficult question which has been on my mind a lot.
While I don’t think we can trust a document from 2015, it’s the best we have just now; RFC-0012 Safecoin Implementation says the allocation of new safecoin depends on the ratio of Sacrificial Chunks to Primary Chunks.
“we want the [farm] rate to increase as we lose sacrificial chunks”
Sacrificial Chunks were used as a measure of spare space (i.e. supply), but since sacrificial chunks are no longer in use, RFC-0012 is not currently usable as written. The intention still applies, just not the exact design.
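To make the intended behaviour concrete, here is an illustrative function (emphatically not the RFC-0012 formula) with the one property the quote asks for, namely that the rate rises as sacrificial chunks are lost:

```rust
// Not the RFC-0012 formula, just an assumed monotone relationship: as
// sacrificial chunks are lost (spare space shrinks), the farm rate rises,
// attracting more storage; as they are replenished, the rate falls back.
fn farm_rate(primary_chunks: f64, sacrificial_chunks: f64, base_rate: f64) -> f64 {
    let spare_ratio = (sacrificial_chunks / primary_chunks).clamp(0.0, 1.0);
    // Full spare capacity -> base rate; no spare capacity -> double rate.
    base_rate * (2.0 - spare_ratio)
}

fn main() {
    // As sacrificial chunks fall from parity with primary to zero, the rate
    // climbs, which should attract more storage and restore the buffer.
    for sacrificial in [1000.0, 500.0, 0.0] {
        println!("sacrificial={sacrificial}: rate={}", farm_rate(1000.0, sacrificial, 1.0));
    }
}
```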
A way to measure spare space (supply) is required for the network to ‘balance’ supply and demand. Currently there is no way to measure supply.
We construct functions that provably require more time and/or space to invert.
The idea is to use disk space rather than computation as the main resource for mining.
a cheating prover needs Θ(N) space or time after the challenge is known to make a verifier accept.
V challenges P to prove it stored F. The security requirement states that a cheating prover P* who only stores a file F* of size significantly smaller than N either fails to make V accept, or must invest a significant amount of computation, ideally close to P’s cost during initialization.
This is a way for a vault to prove it has a certain amount of spare space, which can be easily verified by the network. It’s like proving a certain amount of hashing has been done by matching a specified prefix (i.e. PoW).
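Here is a toy sketch of that verification shape under heavy simplification: the prover commits to seed-derived labels with a Merkle root at initialisation, and the verifier later spot-checks random leaves. It deliberately omits the paper’s graph-pebbling construction and only shows the commit/challenge/open flow:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

fn h(a: u64, b: u64) -> u64 {
    let mut s = DefaultHasher::new();
    s.write_u64(a);
    s.write_u64(b);
    s.finish()
}

// Initialisation: build a Merkle tree over the stored labels.
// Returns every level, leaves first, root last.
fn merkle(labels: &[u64]) -> Vec<Vec<u64>> {
    let mut levels = vec![labels.to_vec()];
    while levels.last().unwrap().len() > 1 {
        let prev = levels.last().unwrap();
        let next: Vec<u64> = prev.chunks(2).map(|p| h(p[0], *p.last().unwrap())).collect();
        levels.push(next);
    }
    levels
}

// Prover opens leaf `i`: the leaf plus the sibling hash at each level.
fn open(levels: &[Vec<u64>], mut i: usize) -> (u64, Vec<u64>) {
    let leaf = levels[0][i];
    let mut path = Vec::new();
    for level in &levels[..levels.len() - 1] {
        path.push(*level.get(i ^ 1).unwrap_or(&level[i]));
        i /= 2;
    }
    (leaf, path)
}

// Verifier recomputes the root from the opened leaf; cheap to check, even
// though the prover had to store (or expensively recompute) all N labels.
fn verify(root: u64, mut i: usize, leaf: u64, path: &[u64]) -> bool {
    let mut acc = leaf;
    for &sib in path {
        acc = if i % 2 == 0 { h(acc, sib) } else { h(sib, acc) };
        i /= 2;
    }
    acc == root
}

fn main() {
    let labels: Vec<u64> = (0..8u64).map(|i| h(42, i)).collect(); // seed-derived "space"
    let tree = merkle(&labels);
    let root = tree.last().unwrap()[0];
    let (leaf, path) = open(&tree, 5); // verifier picks a random index, say 5
    assert!(verify(root, 5, leaf, &path));
}
```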
This is really fascinating since it lets the network measure the supply of resources, which is critical in balancing supply with demand. This allows the economic model of safecoin to function as intended…
Iām still working through the details but wanted to put it out there so others can investigate and comment on the potential as a measure of supply.
Presumably people will say they have more space than they do in order to throttle farming rewards, potentially causing existing safecoins to be worth relatively more. This plays into the hands of those with lots of coins already.
For those without many coins, presumably they will want to increase the reward output. They have little coinage to inflate, so diluting the supply wouldnāt hurt them.
So, do we have opposing factors here already?
Thinking about the incentives: the more supply is constrained, the more the price should increase, and the more coin-poor farmers will want increased rewards. Moreover, coin-poor farmers may have just offloaded safecoin specifically before switching strategy.
Of course, there will be many in the middle who are just honest and not trying to game the system. Presumably they will be reporting space correctly, which may give an anchor to the supply algorithm.
Maybe something more robust would be ideal, but if there is some sort of damping in place, wild swings would seem less likely.