This post is about a possible safecoin algorithm that could be a next step in design following RFC-0004 Farm Attempt and RFC-0012 Safecoin Implementation (both from 2015).
The main new idea it introduces is ‘target parameters’.
Summary
Putting the end at the start, here are the targets I propose:
100 GB target vault size, doubling every 4 years (but gradually, not stepwise like bitcoin halvings)
14 days between relocations (on average)
50% of coins issued (still 2^32 max, but always moving towards 2^31)
Targets In The Real World
There are some fundamental ceilings on the ability of vaults to run on the SAFE network.
For example, a very large average vault size of 1 PB is not beneficial, because right now it would take a specialised computer to run a vault that large and extremely sophisticated networking to initialise and relocate it.
However, vaults that are extremely small are not beneficial either, since per-vault overheads increase and overall performance drops.
Likewise issuing safecoins at a rate that’s extremely rapid or extremely slow is not helpful to the growth of the network.
And making it very expensive or very cheap to store data will cause adverse side-effects for the sustainability of the network.
This post explores some realities of growth that may feed into the design for the incentive structure of safecoin.
Background and Assumptions
This topic requires the consideration of a value system, as in, what do we value and care about? I don’t believe it’s possible to create an unbiased network - any design will reflect (intentionally or unintentionally) some sort of value system.
- Who should be able to run a vault?
  - 90% of internet users?
  - 50% of the whole world population?
  - 80% of all governments and companies?
  - The top 5 chip manufacturers?
- What commitment should be considered the minimum for viability?
  - 50% of the spare space on a consumer grade laptop and internet connection running 8 hours overnight?
  - 90% of the resources of a commercial datacenter?
  - An internet connection that’s in the top 30% fastest in the world?
  - A single high-end desktop PC?
  - A mobile phone?
- What is a satisfactory client experience on the network?
  - Download a 4K 2h movie within 1h?
  - Upload all my photos for $50?
- What is the likely future growth rate?
  - Of supply of bandwidth, storage etc? eg will Moore’s Law continue, how rapidly will developing nations improve their infrastructure, how will the technology inequality gap grow or shrink over time…
  - Of demand to store data?
  - Of demand to retrieve data?
  - Of demand for computation rather than storage?
  - Of improvements to the security and performance of cryptographic algorithms?
  - Of participation in the network as a vault and / or a client?
- How are changes to network parameters managed?
  - Is it managed by setting a predictable growth rate (eg bitcoin supply) and fixed targets (eg bitcoin block time)?
  - Is it managed by voting (eg monero dynamic block size)?
- What security is acceptable?
  - What is punishable and how harsh should the punishment be?
  - How do we compare the desirability of geographical distribution vs performance due to latency?
  - What error rate is acceptable, for chunks, for vaults and for the network as a whole?
  - What are the various aspects of centralization and what is an acceptable degree of centralization?
  - What degree of confidence should we have in various aspects of the network?
There is not necessarily any one right answer to this. It depends what we value and which aspects best express those values. There are always trade-offs. However some solutions express our values better than others. This thread tries to eliminate some of the obviously ‘wrong’ solutions that are outside the scope of current physical constraints. From within those constraints we can try to shape a useful network incentive structure and debate what we value and how to create it.
In this post the main assumption used is a 50% inclusion rate of normal internet users, but that’s just to work the numbers and is totally open to debate and change.
Growth Items
These aspects of vaults are expected to improve over time and the network should be able to adapt to these changes.
- Storage
- Bandwidth
- Computation
- Algorithmic performance (ie software improvements)
Storage
Affects: Maximum vault size
I’m confident that neglecting to set a target on vault size would lead to very large vault sizes, centralization of farming operations, and, most importantly, unintentional exclusion of participants. Allowing vault size to float freely would be dangerous to the network: it would eventually arrive at an unstable point where it no longer automatically corrects itself, leading to unstoppable centralization.
Currently the maximum ‘normal’ desktop computer storage is 240 TB (10 TB drives × 24 SATA ports).
Currently the average ‘normal’ computer storage is 1 TB (a median priced laptop from a retail consumer electronics shop).
Not all of that 1 TB would be allocated to SAFE vault storage, so conservatively let’s say 10%, or 100 GB, is acceptable for the user to commit to a vault. (The granularity also means users can run 5 vaults to consume 50% of their drive if that’s what they want to do, whereas a 500 GB target size would exclude any users who only want to commit 100 GB of their drive.)
So I propose a starting point for the targeted vault size of 100 GB.
We can expect storage to increase at a rate doubling every 15 months. This has been true for decades and seems reasonable to assume will continue for decades more. The network may automatically adjust the target vault size every so often to stay in line with the changing availability of technology. More discussion is needed about how this target would be updated.
Control over vault size can be achieved by loosening the restrictions for new vaults entering the network when average size is above 100 GB and tightening restrictions when the average size is below 100 GB. The exact mechanism is open for discussion, but I think there are several ways to achieve this which fit into the existing design.
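To make the idea concrete, here’s a minimal sketch (in Python, with made-up names and figures) of how join restrictions might be scaled against the 100 GB target. The proportional rule and the clamp are just assumptions to illustrate the direction of the adjustment, not a worked-out design.

```python
# Hypothetical sketch of vault-size targeting via join restrictions.
# TARGET_VAULT_SIZE_GB, join_difficulty and the clamp values are
# assumptions for illustration, not part of any existing design.

TARGET_VAULT_SIZE_GB = 100

def join_difficulty(avg_vault_size_gb: float, current_difficulty: float) -> float:
    """Loosen joining when vaults run larger than the target (the network
    needs more vaults to spread the load), tighten when they run smaller
    (there is already spare capacity)."""
    deviation = avg_vault_size_gb / TARGET_VAULT_SIZE_GB   # 1.0 = on target
    adjustment = min(max(1 / deviation, 0.5), 2.0)         # clamp to avoid wild swings
    return current_difficulty * adjustment

# Example: vaults averaging 130 GB -> difficulty drops to ~0.77, allowing more joins.
print(join_difficulty(avg_vault_size_gb=130, current_difficulty=1.0))
```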
I would love to hear other perspectives about the idea of targeted vault sizes.
From this premise of vault size targeting there are quite a few natural effects that arise.
Bandwidth
Affects: Churn rate
The median internet speed is currently around 7.5 Mbps (source) with the 90% point of the market being around 3 Mbps.
We can expect bandwidth to double roughly every 48 months.
Consider the target vault size of 100 GB proposed above. Joining or relocating a single vault would take an average user about 32 hours (downloading 100 GB over a 7.5 Mbps connection). Is this acceptable? It depends on the desired proportion of time a vault spends handling churn vs not handling churn.
If it takes 32h to complete a churn event and we aim for 90% time-not-relocating, it would mean relocating roughly every 320h (approx 14 days). Depending on the desired portion of time allocated to relocating we can work out a desired churn rate. (Factor in less frequent relocations due to age etc and it becomes more complex, but it’s just a matter of managing the maths; the idea remains the same.) The point is the ‘value system’ of inclusiveness underlying the relocation mechanism should reflect what is desired and practical both today and into the future: a) desired inclusiveness, b) bandwidth availability and c) desired portion of time spent relocating.
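For anyone who wants to check the numbers, here’s the arithmetic above as a small Python snippet. The figures are the assumptions already stated in this post (7.5 Mbps median connection, 100 GiB to transfer, 90% of time spent not relocating).

```python
# The relocation arithmetic from this section, using the post's assumptions:
# a 7.5 Mbps median connection, 100 GiB to transfer, 90% of time not relocating.

vault_size_bits = 100 * 2**30 * 8              # 100 GiB expressed in bits
bandwidth_bps = 7.5e6                          # 7.5 Mbps median connection
relocation_hours = vault_size_bits / bandwidth_bps / 3600
print(round(relocation_hours))                 # ~32 hours per relocation

time_not_relocating = 0.90                     # desired fraction of quiet time
interval_hours = relocation_hours / (1 - time_not_relocating)
print(round(interval_hours))                   # ~318 hours, call it 320
print(round(interval_hours / 24, 1))           # ~13.3 days, i.e. roughly 14 days
```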
If relocation is too frequent it may unintentionally mean excluding participants due to bandwidth constraints and this could affect the growth of the network.
But considering the rate of increase of storage (fast, doubling roughly every 15 months) vs bandwidth (slow, doubling roughly every 48 months), the target vault size cannot grow too fast or churn will take longer and longer into the future. So the target vault size discussed above should probably also factor in bandwidth and time to relocate.
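This is also why the vault-size target in the summary doubles every 4 years rather than tracking storage growth. A rough sketch: if vault size doubles with storage (every 15 months) while bandwidth doubles only every 48 months, relocation time balloons; if it doubles with bandwidth, relocation time stays roughly constant. The doubling periods are the assumptions stated earlier in this post.

```python
# Relocation time only stays flat if vault size and bandwidth double at the
# same rate. Doubling periods (15 months for storage, 48 months for
# bandwidth) are the assumptions stated earlier in this post.

def doubled(value: float, years: float, doubling_years: float) -> float:
    return value * 2 ** (years / doubling_years)

def relocation_hours(size_gb: float, bandwidth_mbps: float) -> float:
    return size_gb * 8000 / bandwidth_mbps / 3600            # GB -> Mbit, then hours

for years in (0, 4, 8):
    bandwidth_mbps = doubled(7.5, years, 4)                  # doubles every 48 months
    size_if_tracking_storage = doubled(100, years, 15 / 12)  # GB, doubles every 15 months
    size_if_tracking_bandwidth = doubled(100, years, 4)      # GB, doubles every 48 months
    print(years,
          round(relocation_hours(size_if_tracking_storage, bandwidth_mbps)),    # 30 -> 136 -> 626
          round(relocation_hours(size_if_tracking_bandwidth, bandwidth_mbps)))  # stays ~30 hours
```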
How is this rate controlled? Some churn is uncontrolled, eg unexpected vault departures. Some churn is controlled, eg allowing or disallowing new vaults, punishing / evicting vaults, design of the relocation algorithm. I don’t exactly know how the control mechanism would be designed to aim specifically for 14 days but I’m sure there are ways.
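As one rough possibility (the names and the proportional rule are my own assumptions, not an existing routing mechanism), the per-event relocation probability could be nudged up or down depending on how the observed interval compares with the 14-day target:

```python
# Rough sketch of steering toward the 14-day relocation target. The names
# and the proportional rule are assumptions for illustration only; they do
# not correspond to any existing routing mechanism.

TARGET_INTERVAL_DAYS = 14

def relocation_probability(observed_interval_days: float, base_probability: float) -> float:
    """If vaults are relocating less often than the target, nudge the
    per-event relocation probability up; if more often, nudge it down."""
    ratio = observed_interval_days / TARGET_INTERVAL_DAYS
    return min(max(base_probability * ratio, 0.0), 1.0)

# Example: relocations observed every 20 days -> probability scaled up ~1.4x.
print(relocation_probability(observed_interval_days=20, base_probability=0.05))  # ~0.071
```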
For further consideration:
- What is the experience for the slowest 10% of viable participants going to be like?
- How does relocation affect the ability to earn safecoin? Would relocation be considered ‘downtime’ or not? Is the 32h forfeited time or merely turbulent time?
- How might 32h of continuous maximum bandwidth consumption affect the participation incentives and dropout rates due to inconvenience?
- How do we get an accurate idea of bandwidth availability and distribution?
- How do cascading relocations affect this calculation?
- How does the reduced frequency of relocation due to ageing affect this calculation?
- How does the rate affect the security of the network, eg sybil attacks?
- How do different vault capabilities, eg archive nodes, factor into the target relocation rate?
Computation
Affects: Simultaneous vaults per machine
If the maximum ‘normal’ desktop computer storage is 240 TB and the average vault target size is 100 GB this means a power user may run up to 2400 vaults on a single machine. However, is this feasible considering their need to verify signatures and perform other computations to maintain their value to the network?
For now I’m not going to consider this. We know that ASICs can provide orders-of-magnitude increases in performance and efficiency, and that this can happen in relatively short timeframes. But it’s worth noting here since it’s an avenue to centralization and opens the door to unintentional exclusion of participants.
Price To Store
Affects: Coin recycling
We have a desire to keep vaults at a specific size and to churn at a specific rate to ensure hard drive space limitations and bandwidth limitations remain within some inclusive range.
There are some issues with this though.
Imagine if uploads suddenly increased. This would strain the network by requiring either a faster churn rate (so more vaults are allowed and vault sizes can stay relatively stable) or larger vault sizes (so the churn rate can stay relatively stable).
In that scenario, where vaults are churning more rapidly than the target or are getting larger than the target, the price to store can be adjusted to reduce the upload rate. It would be preferable to increase supply rather than reduce demand, but the limitations on storage and bandwidth mean supply cannot increase instantly (unlike price which can increase very rapidly if needed!). So there must be some control over demand, and that is done by adjusting prices. Eventually supply catches up and prices can be lowered again.
Likewise if the upload rate slowed dramatically either there would be excess bandwidth available, or there would be excess storage space available (or both), so the price to store can be reduced to encourage more participation.
This should lead to efficient price discovery for storage based on demand, and should allow growth to happen at the most efficient rate for the desired degree of inclusion.
Storage price is set by the degree of variation from the target vault size and relocation rate.
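Here’s a hedged sketch of that price rule: the store cost rises when vaults are larger or churning faster than their targets, and falls when there is slack. How the two pressures are combined (taking the larger of the two below) is just an assumption for illustration.

```python
# Hedged sketch of the price rule above: store cost rises when vaults are
# larger or churning faster than target, falls when there is slack. Taking
# the larger of the two pressures is an assumption, not a settled choice.

TARGET_SIZE_GB = 100
TARGET_INTERVAL_DAYS = 14

def store_price(base_price: float, avg_size_gb: float, avg_interval_days: float) -> float:
    size_pressure = avg_size_gb / TARGET_SIZE_GB                # >1 means strained
    churn_pressure = TARGET_INTERVAL_DAYS / avg_interval_days   # >1 means strained
    return base_price * max(size_pressure, churn_pressure)

# Example: vaults at 120 GB and relocations every 10 days -> price rises 40%.
print(store_price(base_price=1.0, avg_size_gb=120, avg_interval_days=10))  # 1.4
```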
Reward Schedule
Affects: Farming
The network uses safecoin to ensure its own sustainable operation by manipulating the behaviour of participants. Growing too fast is harmful due to the overhead of churn. Growing too slowly limits the usefulness of the network.
Since safecoin can be both created and destroyed, the maximum flexibility for control via safecoin is when half of all coins are issued. At this point there is the most potential to both create and destroy coins to drive future behaviours. Operating the network with very few coins to spend or very few coins to reward reduces the ability to motivate behaviour in the restricted direction.
Just as bitcoin tends toward 21M coins, I propose that safecoin should tend towards 2^31 coins (retaining the maximum of 2^32). However, unlike bitcoin, which increases at a predictable rate, safecoin will fluctuate around the target amount depending on the magnitude and imbalance of activity on the network over time. If someone needs to do a lot of uploading, there are plenty of coins around to facilitate that. If there’s a spike in participation, there are plenty of coins to issue as rewards and make it worthwhile. But if the network has issued only 5% or as much as 95% of coins, there’s less ability to cater to changes in supply or demand.
This mechanism is very similar to the existing idea of Farm Rate but differs in one important way: the existing method tries to measure and control spare resources, whereas this method tries to measure and control existing coins. I don’t see measuring spare resources as possible or desirable, especially considering this proposal is structured around a fixed vault size. (The chia network is the only project I know of that’s seriously trying to address the problem of measuring spare resources, and so far they haven’t released a solution, although I’m following closely for when they do.)
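A minimal sketch of the issued-coin targeting described above, assuming (purely for illustration) that the reward probability on a farming attempt is biased linearly by how far the issued supply sits from 2^31. Neither the function name nor the linear rule comes from the RFCs.

```python
# Sketch of biasing coin creation so the issued supply drifts toward 2^31
# of the 2^32 maximum. The linear bias and function name are assumptions
# for illustration; the RFCs do not define this rule.

MAX_COINS = 2**32
TARGET_ISSUED = 2**31

def farming_reward_probability(coins_issued: int, base_probability: float = 0.5) -> float:
    """Above-target supply lowers the chance a farming attempt mints a coin;
    below-target supply raises it (recycling via store costs pulls the other way)."""
    fullness = coins_issued / MAX_COINS        # 0.5 means exactly on target
    return min(max(2 * base_probability * (1.0 - fullness), 0.0), 1.0)

print(farming_reward_probability(TARGET_ISSUED))            # 0.5 when on target
print(farming_reward_probability(int(0.9 * MAX_COINS)))     # 0.1 when 90% issued
print(farming_reward_probability(int(0.15 * MAX_COINS)))    # 0.85 at the 15% initial state
```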
Some questions for discussion are:
- when there’s a deviation from 50% how much correction should be applied?
- should the return back to 50% aim to be within a certain time (eg within 1000 blocks)?
- should it be possible to always be at a certain supply, eg 60%, or should there always be corrective measures?
- how much normal variation in the supply-to-demand ratio is to be expected? What does a once-a-month peak event look like? Once-a-year? Once-a-decade? How do we design for these?
- should the changes to rewards (supply) and recycling (demand) be directly connected to each other or be allowed to float?
- is 50% truly the ‘most effective’ portion? Or are the demand spikes greater than the supply spikes so it should be more like 70%?
- can the current supply of coins actually be known and how much error is there likely to be?
- how does the initial state of the network (15% initially allocated) work?
- which behaviours should be rewarded and which should be punished?
Unused but still interesting ideas
Voting by vaults for changes to network parameters. It’s such an interesting mechanism and has a lot of game theory to consider but I can’t see how it doesn’t lead to voters using their power to cause centralization and exclude participants.
Totally freeform floating parameters. I like this for the elegance but think it will be used to constantly and gradually push the bottom participants out.
Enforced geographical distribution. Is it possible? I think so. Is it worth the overhead? I doubt it.
Network Health Metrics (spreadsheet and discussion). These are still important, but it would be nice if they didn’t have to be explicit in the algorithm and instead happened as a natural effect of it.
Is degree-of-inclusion the right basis for this? I believe so, considering decentralization is largely a question of inclusion. If decentralization is removed then the safe network is just a complex centralized solution. We could say that performance or efficiency is a better basis to design the safecoin algorithm around, but it’s inevitable that it would reward centralized solutions since they provide the best performance and efficiency. So there’s a lot of room to debate the basis, but my belief is the design should be primarily around the desired degree of inclusion.
Summary (repeated)
The targets I propose are:
100 GB target vault size, doubling every 4 years (but gradually, not stepwise like bitcoin halvings)
14 days between relocations (on average)
50% of coins issued (still 2^32 max, but always moving towards 2^31)