Proof Of Storage prototype

Performance

The performance of this algorithm across different processors is pretty reasonable:

Raspberry Pi 2B v1.1: 1087.8 seconds, ie 1 GiB written at 7.9 Mbps
i7-4500U laptop: 31.9 seconds, ie 1 GiB written at 269 Mbps
AWS m5.large VM: 25.9 seconds, ie 1 GiB written at 332 Mbps
i7-7700 desktop: 18.2 seconds, ie 1 GiB written at 472 Mbps

The Mbps equivalent is shown so it can be compared with using sacrificial chunks as a proof-of-storage mechanism (which would require bandwidth that is also being consumed by the initial primary chunk download).
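As a rough sketch of the conversion behind those figures (just the arithmetic, not the prototype's code):

```python
# 1 GiB of proofs written in t seconds, expressed as megabits per second.
GIB_BITS = 8 * 1024**3  # 1 GiB in bits

def write_rate_mbps(seconds: float) -> float:
    """Equivalent write bandwidth for 1 GiB of proofs generated in `seconds`."""
    return GIB_BITS / seconds / 1_000_000

for cpu, t in [("raspberry pi 2B", 1087.8), ("i7-4500U", 31.9),
               ("m5.large", 25.9), ("i7-7700", 18.2)]:
    print(f"{cpu}: {write_rate_mbps(t):.1f} Mbps")
```

The printed values match the table above to rounding.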

With regard to the very low performance of the Raspberry Pi: if it has this much trouble doing the proofs, it would have a lot of trouble simply existing on the network, since the proofs use the same common operations as normal network operations (sha3-256 hashing and ed25519 signing).

A certain degree of homogeneity between vaults is desirable, and the Pi is quite far from what could be considered 'normal' expectations of performance. It comes back to the point made in 'next step in safecoin algorithm design': we need to be conscious of the degree of inclusion desired on the network. Is a 50x variation in performance acceptable? Maybe it is…? What about when ASICs are developed and those operations become thousands of times faster than a Raspberry Pi? Survival of the fittest can easily turn into survival of the richest. Having vaults with differing capabilities (eg routing-only vaults) should help in this respect. I don't have an answer here, just dumping my conflicted thoughts on the page.
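For reference, here is a minimal sketch of those two primitives in isolation, using Python's hashlib and the `cryptography` package; this illustrates the operations being benchmarked, not the prototype's actual code:

```python
# The two primitives the proofs lean on: sha3-256 hashing and ed25519 signing.
# Requires the `cryptography` package (pip install cryptography).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

data = b"some chunk of proof data"

digest = hashlib.sha3_256(data).digest()    # sha3-256 hash

key = Ed25519PrivateKey.generate()
signature = key.sign(digest)                # ed25519 signature over the digest
key.public_key().verify(signature, digest)  # raises InvalidSignature on failure
```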


Scaling characteristics

All parameters scale linearly, which is not surprising since that’s how the algorithm was designed.

This is useful since it makes it easy to predict the consequences of adjusting the parameters. It's also easy to tell prospective vault operators whether their hardware is viable for completing the required proofs, since the outcome is a simple function of the algorithm parameters.

For example, if my PC can prove 1 GB of storage in 1 minute, I know it can prove 200 GB in 200 minutes.

If a vault takes 10 minutes to fill 1 TB, the network knows the difficulty could be doubled and the vault would then take 20 minutes to fill 1 TB.
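Since everything is linear, the prediction is a single multiplication. A minimal sketch of that arithmetic, where the function and its baseline figures are just the two examples above, not part of the prototype:

```python
def predicted_time(baseline_seconds: float, baseline_size_gb: float,
                   size_gb: float, difficulty_factor: float = 1.0) -> float:
    """Linear model: proof time scales with proof size and with difficulty."""
    return baseline_seconds * (size_gb / baseline_size_gb) * difficulty_factor

# 1 GB proved in 1 minute -> 200 GB takes 200 minutes
print(predicted_time(60, 1, 200) / 60)          # 200.0

# 1 TB filled in 10 minutes; doubling difficulty -> 20 minutes
print(predicted_time(600, 1000, 1000, 2) / 60)  # 20.0
```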

The scaling characteristics are charted below, using the following parameters (one variable is tested per chart, as shown on each x-axis):

Depth of 10
Hash difficulty of 1
Signing difficulty of 1
Generating 1 GiB of proofs
i7-7700 desktop CPU
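For anyone wanting to reproduce charts like these, a parameter sweep might look like the sketch below. `generate_proofs` is a hypothetical stand-in for the prototype's proof generator (not its real API), and the sweep values are arbitrary:

```python
import time

def generate_proofs(depth, hash_difficulty, sign_difficulty, size_gib):
    """Hypothetical stand-in for the prototype's proof generator."""
    ...

# Base parameters from the charts above; one variable is swept per chart.
BASE = dict(depth=10, hash_difficulty=1, sign_difficulty=1, size_gib=1)

for param, values in [("depth", [5, 10, 20, 40]),
                      ("hash_difficulty", [1, 2, 4, 8]),
                      ("sign_difficulty", [1, 2, 4, 8])]:
    for v in values:
        args = {**BASE, param: v}
        start = time.perf_counter()
        generate_proofs(**args)
        print(f"{param}={v}: {time.perf_counter() - start:.2f} s")
```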
