Rewardable Cache
This is an idea that sort of combines Perpetual Auction Currency (PAC) and algorithmic approaches to the reward mechanism. It allows users to express some choice (like in PAC) but within a totally determined and fixed reward environment (as with algorithmic mechanisms).
Cache (and sacrificial chunks) is replaced by the concept of derived chunks. A derived chunk is named something like hash(chunk_name) for derivation depth 1, hash(hash(chunk_name)) for derivation depth 2, and so on, derived as far as you like.
If the derived name is close to your name then you can (but may choose not to) cache that chunk under the derived name for possible future reward. (You can cache it even if it's not close to you, but you can't be rewarded unless it's close to you.)
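To make the derivation concrete, here's a rough sketch in Rust of how a node might derive names and check closeness. Everything here is an assumption for illustration: 32-byte names, SHA-256 as the derivation hash, and a simple XOR-distance radius standing in for whatever closeness rule the network actually uses.

```rust
// Sketch only: assumes 32-byte names and SHA-256 as the derivation hash.
use sha2::{Digest, Sha256};

type Name = [u8; 32];

/// Depth 0 is the original name, depth 1 is hash(name),
/// depth 2 is hash(hash(name)), and so on.
fn derive_name(chunk_name: &Name, depth: u32) -> Name {
    let mut name = *chunk_name;
    for _ in 0..depth {
        let digest = Sha256::digest(&name);
        name.copy_from_slice(&digest);
    }
    name
}

/// XOR distance between two names, compared lexicographically.
fn xor_distance(a: &Name, b: &Name) -> Name {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// A node may cache the derived chunk for reward only if the derived name
/// is close to the node's own name (`radius` is a stand-in for the real
/// close-group check).
fn eligible_for_reward(node_name: &Name, derived_name: &Name, radius: &Name) -> bool {
    xor_distance(node_name, derived_name) <= *radius
}
```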
The node may use that derived chunk to respond to requests (thus shortening the route). This is the same as the currently designed behaviour of cache.
But the advantage to the node is that it may be rewarded for this action. For example, if signature(original name + derived name + depth) crosses some difficulty threshold, the node may be eligible for a reward.
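The difficulty check itself is an open design choice. As one hedged example, the node's signature over the claim could be hashed and required to have a certain number of leading zero bits; the `signature` bytes and the difficulty parameter below are placeholders, not a proposed scheme.

```rust
use sha2::{Digest, Sha256};

/// Count leading zero bits of a byte string.
fn leading_zero_bits(bytes: &[u8]) -> u32 {
    let mut count = 0;
    for &b in bytes {
        if b == 0 {
            count += 8;
        } else {
            count += b.leading_zeros();
            break;
        }
    }
    count
}

/// `signature` is assumed to be the node's signature over
/// original_name || derived_name || depth (scheme left unspecified).
/// The claim is rewardable if the hash of the signature has at least
/// `difficulty_bits` leading zero bits.
fn reward_eligible(signature: &[u8], difficulty_bits: u32) -> bool {
    leading_zero_bits(&Sha256::digest(signature)) >= difficulty_bits
}
```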
The PAC element of this design is that the reward amount depends on the depth: the full reward for depth 0, half the reward for depth 1, a quarter for depth 2, and so on, halving with each extra level of derivation.
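A minimal sketch of that halving schedule (the `base_reward` unit is a placeholder, not a proposed amount):

```rust
/// Reward halves with each extra level of derivation: full reward at
/// depth 0, half at depth 1, a quarter at depth 2, and so on.
/// `base_reward` is in placeholder units, not a proposed amount.
fn reward_for_depth(base_reward: u64, depth: u32) -> u64 {
    base_reward >> depth.min(63)
}
```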
This means nodes can choose the depth that works best for them. If they have lots of spare resource they can cache deep; if not much, they can cache shallow. And every reward that's issued also contains the depth, so the network knows how much spare resource there is by taking stats on the depth of all rewards. Nodes are essentially voting with the depth. (OK, this isn't really that much like PAC, but it does have some voting aspect to it; the vote just isn't a competitive one.)
If lots of nodes are being rewarded at high derivation depths then spare space is abundant and perhaps fewer nodes are allowed to join. If rewards are mostly at very low depths then maybe more nodes should be allowed to join. Perhaps a target depth of 2 for the 75th percentile can be used for the allow / disallow rules, i.e. aim for 75% of rewards to be from depth 0 or 1 and 25% of rewards to be from depth 2 or more.
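A rough sketch of such an allow/disallow rule, assuming the network can see the depth recorded in recent rewards and using the 75% / depth-2 target from above:

```rust
/// Join-control sketch based on the depths recorded in recent rewards.
/// Target taken from the post: 75% of rewards at depth 0 or 1,
/// 25% at depth 2 or more.
fn allow_new_nodes(recent_reward_depths: &[u32]) -> bool {
    if recent_reward_depths.is_empty() {
        return true; // no data yet; default to allowing joins
    }
    let deep = recent_reward_depths.iter().filter(|&&d| d >= 2).count();
    let deep_fraction = deep as f64 / recent_reward_depths.len() as f64;
    // Many deep rewards => plenty of spare space => no new nodes needed.
    deep_fraction < 0.25
}
```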
Advantages:
- Data is even more widely distributed, not in an arbitrary way but in a cryptographically verifiable way.
- A measurement of spare space is available, and also a measure on the value of that spare space.
- Nodes can decide how much they value their spare space by how far they derive. They can also decide whether to derive nothing, a little, or a lot. They get to decide how much is useful and how much becomes wasteful. This can change over time and be detected by the network because the reward distribution to that node would also change.
- Invalid derivations or reward claims can be detected and punished.
Considerations:
Is on-the-fly derivation damaging? I think not, but it's worth considering. The intention is to derive-then-cache, not derive-then-discard.
What info should be used in the derivation? Should it be the chunk name, the chunk data, or both? Should it be unique per node (e.g. add the node name to the derivation) or should it be universal?
What is the deepest derivation allowed? If it's very deep then it could waste time when verifying whether the derivation is valid. I'd say probably 10 is deep enough…?
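Capping the depth keeps verification cheap, since the verifier just re-derives the chain and compares. A sketch, reusing the same SHA-256 / 32-byte-name assumption as above:

```rust
use sha2::{Digest, Sha256};

const MAX_DEPTH: u32 = 10; // the suggested cap; bounds the verifier's work

/// Re-derive the chain from the original chunk name and check the claim.
fn verify_derivation(chunk_name: &[u8; 32], claimed_name: &[u8; 32], depth: u32) -> bool {
    if depth > MAX_DEPTH {
        return false;
    }
    let mut name = *chunk_name;
    for _ in 0..depth {
        let digest = Sha256::digest(&name);
        name.copy_from_slice(&digest);
    }
    &name == claimed_name
}
```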
Is it robust against replay attacks? How can replay protection be achieved (which is a good question for the reward mechanism in general)?
Because nodes get to decide whether or not to claim a reward, at high derivation depths they are deciding between a) getting a small individual reward, which is good-for-me, while making it easier for others to join, which is bad-for-me, versus b) forgoing an individual reward, which is bad-for-me, while making it harder for others to join and so keeping my "monopoly" position, which is good-for-me. This is possibly a messy mixed double-function combining reward vs resource and reward vs exclusion-rights. I'm cautious about how closely to link the disallow rule with the decisions of existing operators (i.e. their decision to be rewarded or not). I think in reality it's not too big a problem, but I'm being very cautious in my approach. This is probably impossible to avoid since all events are ultimately triggered by human decisions.
What quantity of safecoin should be rewarded?
Does this sound like a useful feature? I'm keen to hear your thoughts.