Proof Of Storage prototype

Yes, true. Reasonable limits could be imposed.

Perhaps new vaults could present a bandwidth capacity/rate statement to the section. The section would then throttle automated chunk interrogation to the vault based on the reported limits.

1 Like

I’m not sure we have a contradiction here. We can talk about these 3 different kinds of “content”:

  • data that the section stores,
  • data that the section is caching,
  • proof of free space.

Caches are (or can take the place of) the sacrificial chunks, and they can count towards either the free space or the stored data, depending on what we’re looking at. But I may be missing something, so let me know.

No. Tests shouldn’t ask for entire proofs (that would be wasteful), only a few bytes from here and there. This also means that, after a vault first initializes its proof store, it could store away a number of samples from the high blocks so it could use them to test other vaults even after its own blocks are overwritten. In effect, a vault can test the free space of other vaults up to the total space of its own.
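As a rough illustration of that testing style, here’s a minimal sketch (assuming each proof is a fixed-size block addressed by index; take_samples, passes_challenge and remote_read are hypothetical names, not anything from the prototype):

```python
import random

# A hypothetical sketch, not the prototype's code. Assumes each proof is a
# fixed-size block addressed by index, and that the tester saved
# (index, offset, expected_bytes) samples from its own high blocks while
# they were still available.

PROOF_SIZE = 1024 * 1024  # 1 MiB per proof, as in the example further down
SAMPLE_LEN = 8            # only a few bytes are ever requested

def take_samples(own_proofs, count):
    """Record small random samples from our own proofs for later challenges."""
    samples = []
    for _ in range(count):
        index = random.randrange(len(own_proofs))
        offset = random.randrange(PROOF_SIZE - SAMPLE_LEN)
        expected = own_proofs[index][offset:offset + SAMPLE_LEN]
        samples.append((index, offset, expected))
    return samples

def passes_challenge(remote_read, sample):
    """Ask the remote vault for the sampled bytes and compare them."""
    index, offset, expected = sample
    return remote_read(index, offset, SAMPLE_LEN) == expected
```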

If the cache is stored on disk that is.

Well, yes. It depends on how much we cache and for how long. I now realize sacrificial chunks were more like an on-disk cache and “caching” was referring to a really short in-memory store.

It certainly makes sense to store away immutable data that is requested repeatedly, so an on-disk cache makes more sense for performance.

2 Likes

What happens when trying to cheat this algorithm?

The cheater proves 1 GB storage (1 MB proof size with 1024 proofs) but only really consumes half that.

Seed: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Proof Size: 1048576
Total proofs: 1024
Sign difficulty: 1
Hash difficulty: 1
Scavenge depth: 10

The cheating technique I used was to measure how many times each proof was scavenged, then discard 50% of the proofs, starting with the least depended-upon proofs.
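The strategy boils down to something like this sketch (assumed data structures, not the prototype’s code; dependencies[i] is the list of earlier proof indices that proof i scavenges from):

```python
from collections import Counter

# Illustrative sketch of the discard strategy (assumed data structures, not
# the prototype's code). dependencies[i] is the list of earlier proof
# indices that proof i scavenges from.

def cheat_discard(dependencies, keep_fraction=0.5):
    # Count how many times each proof is used as a first-depth dependency.
    counts = Counter()
    for deps in dependencies:
        counts.update(deps)
    total = len(dependencies)
    # Sort proofs by how often they are needed, least depended-upon first.
    by_usefulness = sorted(range(total), key=lambda i: counts[i])
    discarded = set(by_usefulness[: int(total * (1 - keep_fraction))])
    kept = [i for i in range(total) if i not in discarded]
    return kept, discarded
```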

Not surprisingly, proof 0 was the most heavily depended on, being used as a first-depth dependency 135 times.

The median dependency count was 7 (half the proofs were reused fewer than 7 times, half were reused more than 7 times).

After discarding half the proofs, the cheater was asked to provide the data for a particular proof. The cheater might need to recreate some of the missing dependencies (possibly nested).
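The recreation step is essentially a recursive fill-in of missing dependencies, roughly like this sketch (helper names are mine; generate_proof stands in for the prototype’s hash/sign step):

```python
# Rough sketch of recreation under challenge (helper names are mine;
# generate_proof stands in for the prototype's hash/sign step).

def provide_proof(index, store, dependencies, generate_proof):
    """Return proof `index`, recreating any missing (possibly nested) dependencies."""
    missing = 0

    def recreate(i):
        nonlocal missing
        if i in store:
            return store[i]
        # All of this proof's dependencies must exist before it can be regenerated.
        dep_data = [recreate(d) for d in dependencies[i]]
        missing += 1
        store[i] = generate_proof(i, dep_data)
        return store[i]

    return recreate(index), missing
```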

The most difficult proof to provide required recreating 236 missing proofs, which took 4.1 seconds.

90% of proofs could be recreated in less than 1.15 seconds.

66% of proofs could be recreated in less than 0.12 seconds.

There are a few intuitive observations (which the chart makes even clearer).

Proofs from the early portion of the chart are not as useful to test as later proofs since they take less effort to recreate (they have fewer dependencies).

The degree of dependency on each proof is not equal, because the algorithm only looks backward for dependencies, never forward. Earlier proofs are more likely to be a dependency.

However, this doesn’t mean tests should only come from the last x% of all proofs, since that would mean the bottom (1-x)% could definitely be discarded!

My rough guess for how to tune this algorithm would be:

  • depth should be
    • high enough that it reduces the opportunity to discard and
    • low enough that it doesn’t create significant computation overhead (that’s the job of the difficulty settings)
  • difficulty should be
    • high enough to make recreation longer than network transfer time and
    • low enough to make initial creation simple and not restrict participation to only high performance vaults.

I think a better understanding of the relationship between depth and difficulty would be helpful.

7 Likes

My initial question is what happens if the size is above 10 GB, above 100 GB, above 1 TB. What then? How long would recreation take?

My thought is that cheating by claiming you have 1 GB doesn’t seem worth it, does it? The cheater probably has more than 1 GB of main memory free anyway.

How long would it take on a low-end device that would reasonably only have 1 GB of storage (assuming no USB memory stick or hard drive)?

Also, what happens if you implement my idea that any proof test should have the section request 3 spread-out proofs per test? That way recalculations are increased: for a non-cheating vault the answer comes back almost as quickly as asking for 1 proof, whereas for a cheat that has to recreate proofs it will likely take 2 or 3 times as long.
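A sketch of what that test selection could look like on the section side (parameter names are assumptions, not anything from the prototype):

```python
import random

# Sketch of the suggested test: pick 3 proof indices spread across the whole
# proof range instead of a single index (parameter names are assumptions).

def spread_out_challenge(total_proofs, per_test=3):
    stride = total_proofs // per_test
    # One random index from each third of the proof range.
    return [random.randrange(k * stride, (k + 1) * stride) for k in range(per_test)]
```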

2 Likes

1 GB

Took 22 seconds (0m22s) for initial (honest) creation.

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 236 missing dependencies, 4.1 seconds to recreate

90% of proofs had 66 or fewer missing dependencies, 1.15 seconds to recreate

66% of proofs had 4 or fewer missing dependencies, 0.12 seconds to recreate

10 GB

Took 183 seconds (3m3s) for initial (honest) creation.

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 788 missing dependencies, 13.7 seconds to recreate

90% of proofs had 84 or fewer missing dependencies, 1.48 seconds to recreate

66% of proofs had 5 or fewer missing dependencies, 0.086 seconds to recreate

100 GB

Took 2452 seconds (40m52s) for initial (honest) creation.

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 1802 missing dependencies, 35.5 seconds to recreate

90% of proofs had 78 or fewer missing dependencies, 1.5 seconds to recreate

66% of proofs had 5 or fewer missing dependencies, 0.093 seconds to recreate

Sure, I agree it’s not worth cheating 1 GB on face value, but the point is to get a sense of the algorithm’s characteristics. I’ve sort of assumed the magnitudes scale linearly (which in theory they should, and the figures above indicate they do). So cheating 1 GB can be pretty accurately extrapolated to any size. I can see how that assumption is not at all obvious from the previous posts though.

I’ll see if I can crack out my raspberry pi or pine64 and do a test. Ideally it shouldn’t be too far away from a desktop computer, since it’s meant to test space rather than CPU, right?!

Good idea. It would need to be tested to know how much it improves things, but I imagine it serves a similar purpose to the ‘depth’ parameter, except depth works at the generation level rather than the test level, so the test idea is a good extra thing to have. I’ve mainly focused on the generation part of the algorithm rather than the testing part, but you make a good point that the test design can also be used to improve robustness against cheaters. Changing the test parameters is also a lot more flexible than changing the underlying algorithm (which would require a full regeneration for every vault), so that’s another advantage.


I also came across Burst today, which uses Proof-of-Capacity (prior thread). Some info about how they create proofs and how proofs are used for mining. There’s a fast POC miner written in Rust called scavenger! Might have to try it out. They currently have about 250 PB on their network (source). Peak was almost 400 PB. Mining started in Sep 2014 at 5 TB and within two months grew to about 5 PB, where it stayed for about 2 years. Hard to imagine that capacity staying with Burst instead of SAFE.

I’d say POC in its current form is not a good algorithm for SAFE because it relies on the Shabal hash, which would be a waste; the algo may as well use the crypto primitives needed for regular network activity so that any hardware optimizations have multiple benefits. But I’m going to look at Burst and POC in more detail because it’s out in the wild and seems to be proven to work. It might be easy to change the hash algo and keep the rest of the process if it’s robust.

5 Likes

It is indeed a good idea to first look thoroughly at what others have already done / ‘proven’ in this space :slight_smile:

2 Likes

Interesting that the real effect of size is only on the hardest few percent of proofs to recreate, yet by the time you get down to the 90% mark the times are close, and at 66% they actually dropped in the simulation.

I guess a way to ensure each test includes a proof from the hardest 10% would be good.

Also, 0.093 seconds (93 milliseconds) cannot necessarily be distinguished from the latency of a vault a long (digital or physical) distance away.

3 Likes

Increasing the depth from 10 to 100 would mean ten times more previous proofs are required to generate each successive proof.

This change is intended to make it much harder to cheat but not much harder to generate the initial proofs.
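For context, here is a minimal sketch of how depth is assumed to feed into generation (the signing step and difficulty iterations are left out, and the dependency selection shown here is only indicative of the idea, not the prototype’s exact method):

```python
import hashlib
import random

# Minimal sketch of how depth is assumed to feed into generation: each new
# proof mixes in material scavenged from up to `depth` earlier proofs, so a
# cheater who discards a proof must also hold (or regenerate) everything it
# drew from. The signing step and difficulty iterations are omitted, and the
# real prototype would write proofs to disk rather than keep them in memory.

def generate_proofs(seed, total_proofs, depth, proof_size=1024 * 1024):
    rng = random.Random(seed)  # dependency choices assumed derivable from the seed
    first = hashlib.sha3_256(seed).digest()
    proofs = [first * (proof_size // len(first))]
    for i in range(1, total_proofs):
        h = hashlib.sha3_256(proofs[i - 1][:32])
        # Scavenge 32 bytes from each of up to `depth` randomly chosen earlier proofs.
        for _ in range(min(depth, i)):
            j = rng.randrange(i)
            h.update(proofs[j][:32])
        proofs.append(h.digest() * (proof_size // 32))
    return proofs
```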

1 GB

Took 80 seconds (1m20s) for initial (honest) creation (compared to 22s for depth 10)

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 493 missing dependencies, 39.0 seconds to recreate (compared to 236, 4.1s for depth 10)

90% of proofs had 386 or fewer missing dependencies, 30.9 seconds to recreate (compared to 66, 1.15s for depth 10)

66% of proofs had 156 or fewer missing dependencies, 12.2 seconds to recreate (compared to 4, 0.12s for depth 10)

10 GB

Took 521 seconds (8m41s) for initial (honest) creation (compared to 183s for depth 10)

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 4740 missing dependencies, 223 seconds to recreate (compared to 788, 13.7s for depth 10)

90% of proofs had 3645 or fewer missing dependencies, 171 seconds to recreate (compared to 84, 1.48s for depth 10)

66% of proofs had 1450 or fewer missing dependencies, 68 seconds to recreate (compared to 5, 0.086s for depth 10)

100 GB

Took 6895 seconds (1h54m55s) for initial (honest) creation (compared to 2452s for depth 10)

Discard 50% of lowest dependency proofs.

Hardest proof to recreate: 45791 missing dependencies, 3910 seconds to recreate (compared to 1802, 35.5s for depth 10)

90% of proofs had 34215 or fewer missing dependencies, 2470 seconds to recreate (compared to 78, 1.5s for depth 10)

66% of proofs had 13052 or fewer missing dependencies, 585 seconds to recreate (compared to 5, 0.093s for depth 10)

What does it mean?

Cheating has become much harder but initial proof creation only a little bit harder. That’s a nice characteristic.

Initial generation time is a bit slower, taking about 3x longer when depth was increased by 10x. This is a reasonably good result, since depth is only meant to control how many prior proofs must be retained to avoid cheaters, not how long it takes to initially create proofs. Ideally increasing depth shouldn’t add any time to the initial generation, but this is real life and increasing security comes at some cost.

Cheating becomes much harder, which is the intended purpose of increasing depth. Recalculating missing proofs becomes exponentially harder as the total number of proofs increases. A 10x increase in depth created up to 1000x more work for a cheater (in this particular scenario).

In this case, the benefit of increasing depth from 10 to 100 is much greater than the cost, so overall it seems like a positive thing to do.

How should a value for depth be chosen? Increasing it further will eventually give almost no additional protection against cheating. It’s a tradeoff: we want to keep depth low to avoid initial generation taking too long, but high enough to make it very hard to cheat by discarding proofs.

What’s the point?

Zooming right out, recall why proof of storage is useful to SAFE.

Knowing how much spare space exists allows the network to manage stress better.

The proof of storage mechanism is like a sensor, providing additional knowledge to the network, which can be used to optimize the control mechanisms (such as safecoin creation, storage pricing and new vault disallow rules) and reduce stress on the network.

This prototype algorithm would cost less (for both the user and the network) than the sacrificial chunks idea because it uses almost no bandwidth.

It’s good to consider whether the stress can be managed without the extra work required for proof of storage.

It’s hard to say whether the overall benefit of any proof of storage mechanism is greater than the overall cost.

The cost this particular algorithm imposes seems to be low enough that it could be considered usable for the problem (ie it’s not an impossible problem to solve).

I’m still undecided about proof of storage vs ‘let the farmers worry about it’.


Yeah, the unluckiest proof is ‘extremely’ unlucky compared to the merely ‘very’ unlucky worst 10%.

This is where increasing the difficulty (more iterations of the signing operation or the hashing operation) allows the time to recreate missing proofs to be increased enough to be meaningful (along with a suitable depth value, as explored in this post).
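For example, a hash difficulty setting could simply mean iterating the hash, along these lines (a sketch of the assumed meaning of difficulty, not the prototype’s exact code):

```python
import hashlib

# Sketch of what a hash difficulty setting is assumed to mean: repeat the
# hashing operation `difficulty` times so each proof (and each recreation of a
# discarded proof) costs proportionally more CPU. A signing difficulty would
# iterate the ed25519 signing step in the same way.

def difficult_hash(data, difficulty):
    digest = data
    for _ in range(difficulty):
        digest = hashlib.sha3_256(digest).digest()
    return digest
```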

7 Likes

Even for 100 GB (~2 hours) this is not too bad, considering it’ll probably take longer to age to child and start farming. Although for 1 TB it might be an issue.

Yes this is very good.

12 seconds for a vault size that is unlikely to be cheated still means that faking it is difficult to pull off.

Agreed, and possibly a depth of 25 or 50 might be enough in the initial stages while CPU speeds are at current levels (or slightly faster), but it might become an issue when typical fast desktops are 10x the speed of what they are today.

Still, cheating with a supercomputer kinda doesn’t make economic sense, especially if each vault is different, as in having one supercomputer provide a “service” solving proofs for thousands or millions of cheating machines.

I am still of the opinion that the occasional cheat that succeeds will not massively upset the calculations, assuming the massive cheats (vaults claiming hundreds of TB) cannot get past the check due to the complexity. The spare space and price calculations will have to assume there are some inaccuracies anyway, because an honest vault may have a partial drive failure where a portion of the vault is inaccessible. So not accounting for potential inaccuracies would be bad modelling.

Agreed

And thank you @mav for the additional simulations/calculations. They show this idea has potential, and my current opinion is that it will help the network to determine spare space and thus pricing.

6 Likes

Performance

The range of performances for this algorithm on different processors is pretty reasonable:

  • Raspberry Pi 2B v1.1: 1087.8 seconds, ie 1 GiB written at 7.9 Mbps
  • i7-4500U laptop: 31.9 seconds, ie 1 GiB written at 269 Mbps
  • AWS m5.large vm: 25.9 seconds, ie 1 GiB written at 332 Mbps
  • i7-7700 desktop: 18.2 seconds, ie 1 GiB written at 472 Mbps

The Mbps equivalent is shown so it can be compared with using sacrificial chunks as a proof of storage mechanism (which would require bandwidth that’s also being consumed by initial primary chunk download).

With regard to the very low performance of the raspberry pi, I’d say if it has this much trouble doing the proofs it would have a lot of trouble simply existing on the network, since the proofs use the same common operations as normal network operations (sha3-256 hashing and ed25519 signing). A certain degree of homogeneity of vaults is desirable, and the pi is quite far away from what could be considered ‘normal’ expectations of performance. It comes back to the point made in next step in safecoin algorithm design - we need to be conscious about the degree of inclusion desired on the network. Is 50x variation in performance acceptable? Maybe it is…? What about when ASICs are developed and those operations are now thousands of times more powerful than a raspberry pi? Survival of the fittest can easily turn into survival of the richest. Having different vaults with differing capabilities (eg routing-only vaults) should help in this respect. I don’t have an answer here, just dumping my conflicted thoughts on the page.


Scaling characteristics

All parameters scale linearly, which is not surprising since that’s how the algorithm was designed.

This is useful since it makes it easy to predict the consequence of adjusting the parameters. It’s also easy to communicate to potential vaults whether their hardware is viable for completing the required proofs, since the outcome is a simple function of the algorithm parameters.

For example, if my PC can prove 1 GB of storage in 1 minute, I know it could prove 200 GB in 200 minutes.

If a vault takes 10m to fill 1 TB, the network knows that difficulty could be doubled and the vault would now take 20m to fill 1 TB.
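Since everything is linear, those predictions are just one multiplication, e.g. (the function name is my own; the numbers are the two examples from this post):

```python
# The numbers below are the two examples from this post; the function name is mine.

def predicted_seconds(measured_seconds, measured_gb, target_gb, difficulty_factor=1.0):
    return measured_seconds * (target_gb / measured_gb) * difficulty_factor

print(predicted_seconds(60, 1, 200))          # 1 minute for 1 GB -> 12000 s (200 minutes) for 200 GB
print(predicted_seconds(600, 1024, 1024, 2))  # 10 minutes for 1 TB -> 1200 s (20 minutes) at double difficulty
```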

The scaling characteristics are charted below, using the following parameters (one variable is tested per chart as shown on each x axis):

Depth of 10
Hash difficulty of 1
Signing difficulty of 1
Generating 1 GiB of proofs
i7-7700 desktop cpu

9 Likes

I didn’t check it but, intuitively, if the dependency blocks are drawn from a Pareto (or maybe exponential) distribution, with more blocks from the recent past and fewer from among the older ones, that may help with this and even out the access probabilities.

As these distributions are not bounded on the positive side, the block numbers that looked back beyond the first one could be just discarded and we’d continue taking samples until we found enough good ones.