By target size I mean how we define ‘normal’ in the phrase ‘return to normal’.
By minimum size I take it to mean the way vaults may be excluded / killed to maintain a degree of quality.
The two are quite similar in operation, but I feel they have important differences in how we think about the algorithm.
Setting a minimum implies continual increase (since below minimum is ‘undesirable’). A target implies variation is to be expected but gradually corrected. A minimum requires us to say what we don’t want; a target requires us to consider what we do want.
Having typed a bunch of thought experiments trying to break it, I think voting for a minimum works fine. I’ve kept the ones that are useful provocations and still a bit unclear to me:
There’s no consequence for false voting. If 51% of vaults vote for a 100 PB minimum, the only effect is that some unaware newcomers get excluded because they voted sensibly. Once they wise up to the trick they also start voting 100 PB just so they can be included. There’s no value to the voting mechanic any more (there’s a small sketch of this after the list).
People will want to vote extremely high because it’s a way to (temporarily) exclude newcomers unaware of the trick and (temporarily) retain more of the action for themselves.
If 51% of vaults vote a 1 MB minimum (and their section is already storing 100 GB), what happens? Presumably nothing, since anyone could pass the minimum. It effectively removes the purpose of the vote and turns it into busywork.
I don’t think vault size can be directly controlled / measured, but it can be indirectly controlled by the rate at which vaults join / depart, since this determines chunk distribution. Should votes be for size, or should they be for rules of membership? Is it the same thing? Which way of thinking about it is most useful for a) doing design, b) doing implementation? Probably the two ideas of size and membership are both controlled by one mechanism, but it’s hard to talk about while it’s still a 2 x 2 matrix of ideas x mechanisms. I guess I haven’t collapsed that matrix into a single mechanism in my mind yet, even though I accept it probably will collapse eventually.
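To make the first and third provocations concrete, here’s a minimal sketch. The aggregation and exclusion rules are purely my assumptions for illustration (the section minimum is the median of member votes, and any vault offering less than that minimum is excluded); the real rule could be anything.

```python
# Toy model of the vote dynamic described above.
# Assumed rules (mine, for illustration only): the section minimum is the
# median of member votes, and a vault is excluded if the capacity it
# offers is below that minimum.
from statistics import median

GB = 10**9
PB = 10**15

def section_minimum(votes):
    """Aggregate the members' votes into a single minimum size (assumed: median)."""
    return median(votes)

# 51 colluding vaults vote an absurd 100 PB minimum; 49 honest vaults
# vote something sensible like 50 GB.
votes = [100 * PB] * 51 + [50 * GB] * 49
minimum = section_minimum(votes)

# A sensible newcomer offering 100 GB is excluded by the inflated minimum,
# even though 100 GB is plenty for what the section actually stores.
newcomer_capacity = 100 * GB
print(minimum)                        # 1e+17, i.e. the colluders' 100 PB
print(newcomer_capacity >= minimum)   # False -> the newcomer is excluded
```

Swap the colluders’ 100 PB for 1 MB and the median drops to 1 MB, every vault passes, and the vote becomes the busywork of the third provocation.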
Related to votes, Monero has a vote mechanism for block size. There’s a great explanation of the mechanism and an intense research document about it.
This is an interesting precedent to look into. Do most miners utilize the vote or just go with the non-vote default? Why? One of the main mechanisms is that there’s a logical tradeoff built in: larger blocks trade a known reduction in block reward now for an unknown increase in transaction fees later. And of course there’s also the underlying principle that larger blocks are ultimately an exclusionary force.
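For reference, my understanding of how that tradeoff is built in is roughly the following (a sketch only, so the constants, the minimum free block size and other edge cases may differ from the real implementation): a block larger than the median size of recent blocks has its base reward reduced by a quadratic penalty, and a block more than twice the median is invalid.

```python
# Rough sketch of Monero-style block size penalty (my reading of it; treat
# the details as approximate rather than as the actual consensus rules).
def block_reward(base_reward, block_size, median_size):
    """Reward for a block of block_size bytes, given the median size of recent blocks."""
    if block_size <= median_size:
        return base_reward                     # no penalty at or below the median
    if block_size > 2 * median_size:
        raise ValueError("invalid block: more than twice the median size")
    # Quadratic penalty: a block at 2x the median forfeits the whole reward.
    excess = block_size / median_size - 1
    return base_reward * (1 - excess ** 2)

# A miner 'votes' for bigger blocks simply by producing them, paying a known
# reward reduction now in the hope of more fee-paying transactions later.
print(block_reward(100.0, 1_000_000, 1_000_000))  # 100.0 (at the median)
print(block_reward(100.0, 1_500_000, 1_000_000))  # 75.0  (1.5x the median)
print(block_reward(100.0, 2_000_000, 1_000_000))  # 0.0   (2x the median)
```

So the ‘vote’ is simply producing bigger or smaller blocks, and the known-cost-now versus unknown-fees-later tradeoff falls out of the reward function.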
I really like voting mechanics as a general idea. But should (and how would) the money=vote vs person=vote dynamic be managed?
Is this still vulnerable to time inversions? I guess making each block depend on all previous blocks is intended to prevent this.
The generator must be able to operate on large data, since processing 1 GB, then 1.001 GB, then 1.002 GB is not so bad, but processing 500 GB, then 500.001 GB, then 500.002 GB etc could be quite a labor when it’s only intended to prove space, not computation.
Here’s a quick explanation of the time inversion attack, because it’s an elegant thing and I think people would enjoy understanding it without all the maths from the Chia papers: I’m asked to prove I can store 10 chunks of data. The tester gets me to store a seed chunk then generate the next 10 chunks after it, where each new chunk depends on the prior chunk(s). But I cheat, and after I store the 5th chunk I throw away chunks 2, 3 and 4. I only store chunks 1 and 5. If I’m asked to prove I have 10 chunks of storage by providing random chunk number 7, I use the value of chunk 5 + some calculation to derive chunk 7. So instead of storing 10 chunks of data I only stored 2 chunks plus used some calculation time.

All proof of storage mechanics can be reduced by some factor depending on the derivation scheme, the computational power available to the attacker, and the timeout for the proof. Claiming I stored 100 GB might actually mean I stored 1 GB with some powerful computation to fill in the gaps. The tester can never know the truth since it all happens on the prover machine.
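Here’s a toy version of the cheat, using a plain hash chain in place of the real Chia construction (the scheme, names and numbers are mine, just for illustration):

```python
# Toy time inversion: chunks form a chain where each chunk is derived from
# the previous one. An honest prover stores all of them; a cheat stores a
# few checkpoints and recomputes the rest on demand.
import hashlib

def next_chunk(prev: bytes) -> bytes:
    """Derive the next chunk from the previous one (stand-in for the real generator)."""
    return hashlib.sha256(prev).digest()

def generate_chain(seed: bytes, n: int):
    chunks = [seed]
    for _ in range(n - 1):
        chunks.append(next_chunk(chunks[-1]))
    return chunks

honest = generate_chain(b"seed chunk", 10)   # honest prover keeps all 10 chunks

# Cheat: keep only chunks 1 and 5 (indices 0 and 4), throw the rest away.
cheat = {0: honest[0], 4: honest[4]}

def cheat_prove(index: int) -> bytes:
    """Answer a challenge for chunk `index` from the nearest stored checkpoint."""
    base = max(i for i in cheat if i <= index)
    value = cheat[base]
    for _ in range(index - base):            # burn CPU instead of disk
        value = next_chunk(value)
    return value

# Challenged for chunk 7 (index 6): derived from chunk 5 plus two hashes,
# indistinguishable from having stored it, unless the answer takes too long.
assert cheat_prove(6) == honest[6]
```

The tester’s only real lever is the timeout mentioned above: the further the challenged chunk is from a stored checkpoint, the more recomputation the cheat has to do before it can answer.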
And ‘computation’ might also be replaced with ‘bandwidth’ since the prover can ask a friend storing the same value to provide it. This is where sacrificial chunks face a (mild) challenge since the prover can possibly use the primary chunk to pretend they have the sacrificial chunk when really they don’t.