Exploration of a live network economy

Could you please compare your model with an existing deal for lifetime storage, such as:
today: 1 TB $34
5 years later: 1 TB $13 (expected)
10 years later: 1 TB $5 (expected)
(the price of 1 TB has dropped about 2.6x per 5 years in the past)
You can set the price of SafeCoin for day one to match this deal, and check what the price of SafeCoin should be to match the expected price offers in 5 and 10 years.

Edit: the price of the offer does not matter, it is only a sample. Other, non-discounted prices are about 10 times more expensive.

1 Like

Hi @Mendrit, sorry about the late reply.

I'm not sure I understand exactly what you'd like to see. Do you think you could perhaps rephrase it a bit?

One possible optimization (simplification) of the model is to try to exclude data sizes.
If we have an average of x amount of storage added per time unit, and an average of y amount of data traffic per user and time unit, then we can simplify the problem by assuming that, as data storage capacity grows (i.e. storage becomes cheaper in $), traffic also grows. I haven't looked up the proportions (it's probably not many Google searches away), but I think that's what we've seen so far. Increased storage capacity gives increased traffic, and so we don't need to deal in absolute numbers (if we're simplifying a lot) - just use a base unit and assume that any growth is reflected equally in capacity as well as traffic.

One potential problem here is that we are pegged to absolute data size by the PUT definition (the cost of storing data of size less than or equal to 1 MB); another is bandwidth, since its capacity/price is growing at a rate distinct from storage.

I actually haven't delved into these aspects yet, so a lot of unfinished thinking here.

Additional iterations

First, some nomenclature:

What is being explored here is more than just the farming algorithm, or safecoin; it is the entire SAFENetwork economy, as it would be with a single farmable resource (storage).

Then a big fat disclaimer:

These simulations basically say that, given all the chosen parameters (and no others), assumptions and simplifications, this is the outcome. So far, we're not really talking about real outcomes. This is definitely just feeling out the domain, trying out various things and seeing what gives.

The most difficult thing in all of this is to create something reminiscent of a realistic market model.
The SAFENetwork economy cannot be tested in isolation; it depends entirely on the reality of the real-life market around it.

So what we've done here so far is basically to just imagine a very specific, and absurdly simplified, market behavior, and see how the farming algorithm would fare with that particular behavior.
That is not entirely pointless, since it does generate some data that will inform us to some extent on the viability of the tested algorithms. One would however have to be utterly aware of the tiny spectrum revealed by it, so as not to do oneself a disservice by reading too much into that data. The informational value is low, and it is tightly entangled with noise and misleading information.

Store cost

Before delving deeper into improved market modeling, a few additional iterations were done. Among them was a simulation of 100 years, just for the sake of observing some extremes.
An important observation was that the store cost of version 2, an implementation of Seneca's proposal from back in 2015, exhibited the same issue as that of version 1 (before it had compensating measures implemented).

We assume that, in a state of balance between store cost and farming reward, we have a 98:2 ratio of reads to writes, as per the social media distribution of consumers vs contributors. (The actual number is probably different, but a distinct preponderance of reads is likely, which is the important part for the principle.) This means that increasing store cost so as to balance the unfarmed coin supply is a blunt instrument, as the store cost would need to be much higher than the farming reward to compensate for the issuance done by the reads. Assuming that farming reward is in balance with market valuation, and store cost is initially discounted to encourage uploads and enforce net farming, allowing store cost to grow much higher than farming reward risks thwarting uploads altogether, which would be completely counterproductive to the goal of increasing the recycling rate so as to balance the supply of unfarmed coins. The net farming would instead increase, adding even more to the problem, and the system would spiral out of balance - perhaps into an unrecoverable state.
It was also observed that the adjustment seemed likely to overshoot and result in a strong oscillation of store cost around the desired value. It's possible this would be a place to implement adjustment using the integral and derivative of the error, as per the PID controller method.
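To make the PID idea concrete, here is a minimal sketch of how such a controller could damp the store cost oscillation. The gains, the target unfarmed fraction and the update cadence are all illustrative assumptions, not values from the simulations above.

```python
# Hypothetical sketch: damping store cost oscillation with a PID-style update.
# Gains and the target unfarmed fraction are illustrative assumptions.

class StoreCostPID:
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, target_unfarmed=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_unfarmed
        self.integral = 0.0
        self.prev_error = 0.0

    def adjust(self, store_cost, unfarmed_fraction, dt=1.0):
        # Error: how far the unfarmed supply is from the desired balance point.
        # If too much has been farmed (unfarmed below target), raise store cost
        # so more coins are recycled; the I and D terms damp the overshoot.
        error = self.target - unfarmed_fraction
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        correction = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(1.0, store_cost * (1.0 + correction))

# Usage: recompute store cost once per simulated time unit.
pid = StoreCostPID()
cost = 60_000  # nanosafes, arbitrary starting point
for unfarmed in (0.48, 0.46, 0.45, 0.47):
    cost = pid.adjust(cost, unfarmed)
```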

Improving market model

As mentioned previously, it would be desirable to look at previous work done in a similar domain. I did look up some papers on simulation of stock markets, with interesting methods that can be put to use here. Now, trying to apply those to a current cryptocurrency market would not be a good fit. However, if we are looking at the global adoption hoped for, with tens or hundreds of millions of agents or more, then the market behavior will change. It would probably look more like a stock market than a cryptocurrency market at that point.

To increase the informational value of simulations done with the SAFENetwork economy models, it seems we must put a lot more effort into the market model.


I will be spending time on some other things related to SAFENetwork for a while now, so I will pause this exploration here. Any ideas for methods to research, papers to read and whatnot - please suggest them here and we can try to work with them when I'm done with the other stuff.

6 Likes

The real price check is only one data point, checking whether the SafeCoin price will increase or not in a specific model. And with different inputs it should show how to set the formula to reflect SafeNetwork's needs. (They are already specified in the original whitepapers.)

We can predict that the price of hardware will drop, and the price of bandwidth too, so the real price of 1 chunk should follow that price movement (downwards) if demand stays about the same. If not, SafeNetwork would become an expensive option to use.

So if StoreCost drops from 60,000 nanosafes to 50,000 nanosafes in ten years, it will be only 17 % cheaper (in SafeCoin) than 10 years ago? And if the $ price of SafeCoin grows by more than 15 %, it will actually be more expensive?

1 Like

Something that might be interesting to model (or might simply be irrelevant) is the 'bitcoin replacement' mode, where safe network is seen mostly as a token economy and not as a data service.

Maybe this idea can be expressed as "client activity is 1 % of PUT and GET, and farmers do 99 % of all PUT and all GET". So farmers have some usd cost to do a GET (mostly some tiny bandwidth cost, some tiny cpu cost for signing, but multiplied billions of times), they have some safecoin cost to do a PUT (storecost), and they have some usd cost to respond to these GETs (storage, bandwidth, consensus work etc). When there's some imbalance, farmers can arbitrage by doing more/less GET, or more/less PUT, or more/less selling/buying for usd or safecoin. It's a direct replacement of bitcoinmining=repeat(hash) with safecoinfarming=repeat(bandwidth, storage, cpu). And then somewhere in the swirling mess of farming activity there's the occasional client upload or download, just like how in the swirling mess of bitcoin hashing there's some client transactions.

It's a really hard model to make, probably very sensitive to initial state and cost assumptions, but I find it interesting from a marketing / communications perspective since it really hammers home how proof of resource replaces proof of work (and presumably the lower costs per unit of client value).

Too crazy perhaps?! It has a lot of parallels with the topic 'gaming the rewards', but at some point the line between spam and not spam becomes very fuzzy. I think a lot of farmers will push hard on that line and things could get weird.

5 Likes

The other day, just as I was about to pause this exploration for a while, I got some inspiration from something that @19eddyjohn75 wrote here.

Two things, actually - one of them a big change to farming reward.
It's funny, the other one is also touched on by @mav there now:

When thinking about the bigger idea that the post spurred (about dramatically changing the way farming reward works), one thing I realized is that the read-write ratio basically says how many GETs there are per chunk uploaded. At the next instant it might be something else, but in a large network it should not fluctuate heavily. So at that moment, 1 PUT should cost [read-ratio] x farming reward, so as to cover all GETs prognosticated for it.

For example, if there are 98 % reads, the store cost should be about 50x the farming reward (well, the ratio is 98:2, i.e. 49:1, and that means 49x R infused into every C).
This I currently think is the most natural method for knowing what store cost should be.

It can be weighted with inverse proportionality to unfarmed coins as well, so as to enforce a gradual flattening of the decline in the unfarmed supply curve, approaching a zero derivative. (I think now, btw, that the balance of unfarmed should be closer to 10 % than 50 %.) This I have done in simulations just now.
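As a rough sketch of the above, assuming the read-write ratio and the latest farming reward are known, and borrowing the squared scarcity weighting that appears later in this thread (any of these choices could of course be different):

```python
# Minimal sketch: store cost C as (expected GETs per chunk) x farming reward R,
# weighted by coin scarcity. The weighting 2 * (1 - u)^2 is borrowed from the
# multiplier used later in this thread; it is one possible choice, not a given.

def store_cost(reward_r, reads, writes, unfarmed_u):
    ratio = reads / max(writes, 1)        # e.g. 98:2 -> 49 GETs per PUT
    scarcity_weight = 2 * (1 - unfarmed_u) ** 2
    return ratio * reward_r * scarcity_weight

# Example: 98 % reads, R = 1000 nanosafes, 70 % of coins still unfarmed.
print(store_cost(1000, reads=98, writes=2, unfarmed_u=0.7))   # 8820.0
```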

So, to the main thing that @19eddyjohn75 's post spurred in my thinking:

It seems to me that algorithmically determining the farming reward, based on parameters available within the network (where storage scarcity would seemingly be the most important factor in all proposals), cannot follow the real fiat value of safecoin as agilely as we are used to electronically interconnected markets doing. We have an inertia in the form of nodes joining and leaving, which additionally is a dampening and blurring indirection in the price discovery, as the network tries to express all of safecoin's value through its value in terms of storage.

Just as I was about to pause the economy simulations, I got an idea for doing this radically differently. I'm still working on the details, but so far I've done this little write-up. It's just started, and I had planned to write a lot more (and refine unfinished thinking) before posting, but I'm about to do some other stuff now, so best to just put it out there so others can start to think about it as well :)

I'll start with a nice chart, from the latest simulation of 53 years. It took more than 24 hours to finish.
It employs a model for vault operators bidding on farming reward price, with some weight added for storage and coin scarcity. Store cost is calculated based on the read-write ratio, as per the above description, additionally weighted by coin scarcity.


End size of network: 7 million vaults and 50+ million clients.


A new take on farming rewards

Economy aims

When designing an economy, we need to define desired properties of it.
In the work with the economy models, we have so far discerned these desired properties:

  • Supply of storage should allow for a sudden large drop of vault count, and thus a margin of about 50 % is desired.
  • Supply of unfarmed coins should allow for the network to adjust costs and payouts, and thus a margin of about 10-20 % is desired.
  • The balance of storage supply should be reached as soon as possible.
  • The balance of the unfarmed coins supply should be reached in a timely manner, but not too fast. Not sooner than 5 years, and no later than 20 years.
  • Store cost should reflect the value of the storage.
  • Farming reward should reflect the value of serving access to data.
  • The economy should be able to incentivise users to provide more storage when needed.
  • The economy should be able to incentivise users to upload data when thereā€™s plenty of storage available.
  • The economy should be able to incentivise rapid growth so as to secure the network.
  • The economy should be able to allow users to quickly act upon the incentives, thus swiftly reaching the desired outcome.
  • The economy should be as simple as possible, and not require any special knowledge by users, for normal usage.
  • The economy should not be easily gameable.

Vault pricing

The most important part is not to bring the large scale operators out of the game, the most important part is to keep the small scale operators in the game.

Because we still need the large scale operators, or at least it has not been shown that they are not needed, and so we cannot assume they are not.

Large scale operators might be able to provide the network with more bandwidth and speed, and they should be rewarded for stabilising the network with those resources.
However, we also want to emphasize the incentives given to decentralisation, and that is done by allowing

  • Vaults to set the price of reward
  • Equal share of payment to the lowest price offer as well as fastest responder.

An additional benefit of this is that we have internalised and reclaimed the market valuation of storage. It is now done directly by each and every individual vault.
The problem of how to scale safecoin reward in relation to its market valuation has thereby been overcome. There is no need for the network to have a predefined price algorithm that takes into account both the potentially very large range of fiat valuations of safecoin and the inertia in allowing new vaults in.

Price adaptability

A coin scarcity component will influence store cost, so as to give a discount while there are still a lot of unfarmed coins. Gradually, as the unfarmed portion decreases, the store cost will increase, first approaching the farming reward and eventually surpassing it. Previously, it was thought that, since the read-write ratio is expected to be very high, it would not likely be the store cost itself that prevented depletion of network coins. Rather, when the idea emerged that the relation of reads to writes is significant to store cost, it was thought that market forces would do this as scarcity grows and the fiat valuation of safecoin grows. This would allow vault operators to lower the safecoin price while still running at a profit. The effect of this is that farming rewards would become smaller and smaller in safecoin terms as scarcity and valuation increase, and supposedly the unfarmed supply would then be farmed in smaller and smaller chunks, thus never completely running out.
(However, it is possible to add the coin scarcity component also to R, so as to reward more when much is available, and less when less is available.)

Reads to writes, the key to actual store cost

The proportion of reads to writes is essentially the number of times any given piece of data will be accessed. If the read-write ratio is n:1, it means that every uploaded chunk is accessed on average n times.
For that reason, if every access is paid with R from the network, every piece of stored data should have the price n * R, where n is the number of reads per write at the time of upload.

The read-write ratio basically says how many times any given piece of data is expected to be accessed during its lifetime - as of the current situation. It is then natural that, for store cost C to be properly set relative to farming reward R, it must be set to the expected number of accesses for a piece of data, times the reward for the access.
By doing this, we enable balancing the supply of unfarmed coins around some value. This is possible because, when we weight the store cost according to the read-write ratio, we ensure that payment from, and recycling to, the network happens at the same rate. All that is needed is to keep an approximately correct count of the reads and writes done. In a section, it is perfectly possible to total all GET and PUT requests, so as to have the read-write ratio of that specific section. When some metrics are shared in the BLS key update messages, we can even get an average from our neighbors, and by that we are very close to a network-wide value of the read-write ratio.

Where this balance ends up is a result of the specific implementation. It can be tweaked so as to roughly sit around some desired value, such as 10 % or 50 %.
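A sketch of how a section might keep that approximate count and blend in neighbour values; the structure and message handling here are invented for illustration, not taken from the actual vault code.

```python
# Hypothetical sketch of per-section read-write ratio tracking, with neighbour
# ratios (e.g. shared in BLS key update gossip) averaged in for a rough
# network-wide estimate. Names and structures are illustrative only.

from statistics import mean

class SectionStats:
    def __init__(self):
        self.gets = 0
        self.puts = 0
        self.neighbour_ratios = []          # latest ratio reported per neighbour

    def record_get(self):
        self.gets += 1

    def record_put(self):
        self.puts += 1

    def local_ratio(self):
        # Reads per write observed in this section only.
        return self.gets / max(self.puts, 1)

    def on_neighbour_update(self, reported_ratio):
        self.neighbour_ratios.append(reported_ratio)

    def network_ratio_estimate(self):
        # Average our own ratio with what the neighbours report.
        return mean([self.local_ratio()] + self.neighbour_ratios)

    def store_cost(self, reward_r):
        # Store cost as expected accesses per chunk times the reward.
        return self.network_ratio_estimate() * reward_r
```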

Store cost

[Coming up]

Calculating R

G = group size = 8

R

  • 20 % to fastest responder
  • 20 % to lowest price
  • 60 % divided among the rest (6 out of the 8), according to some algo

Setting price of GET:

1. p = Lowest price among the vaults in the group.
2. a = Median of all neighbour sections' prices (which are received in neighbour key update messages).
3. f = Percent filled
4. u = Unfarmed coins
Like so:

R = 2 * u * f * Avg(p, a)

Tiebreaker among multiple vaults with same lowest price:

  • Reward the fastest responder among them.

Example:

The section has 145 vaults.
The fastest vault has a price of 200k nanosafes per GET.
The cheapest 3 vaults have a price of 135k nanosafes per GET.

At GET, the price is set to 135k nanosafes.

  • 0.2 * 135 = 27k goes to the fastest vault.
  • 0.2 * 135 = 27k goes to the fastest of the 3 cheapest vaults.
  • 0.6 * 135 = 81k is divided among the rest in the group according to … algo.
    If split evenly, that means the remaining 6 (out of 8) get 81k / 6 = 13.5k nanosafes each.
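Before moving on to the store cost part, here is a small sketch of the split above; the function names are made up, and the scarcity-weighted R formula is included only for completeness.

```python
# Illustrative sketch of the GET reward and its 20/20/60 split. The reward
# formula follows R = 2 * u * f * Avg(p, a) given above; in the worked example
# the 135k price is used directly as R.

def reward_r(lowest_p, neighbour_median_a, filled_f, unfarmed_u):
    return 2 * unfarmed_u * filled_f * (lowest_p + neighbour_median_a) / 2

def split_reward(r, group_size=8):
    fastest_share = 0.2 * r                     # fastest responder
    cheapest_share = 0.2 * r                    # lowest price
    rest_each = 0.6 * r / (group_size - 2)      # the remaining 6, split evenly here
    return fastest_share, cheapest_share, rest_each

print(split_reward(135_000))   # (27000.0, 27000.0, 13500.0)
```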

Data is uploaded to the section.
Last GET was rewarded at R=135k nanosafes.
Store cost C is then a proportion of R, determined by coin scarcity in the section.
If the unfarmed coin fraction u is 70 %, then the cost multiplier is:

m = 2 * (1 - u)^2

and store cost is:

C = m * R
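A worked instance of the two formulas above, using the numbers from this example (u = 70 %, R = 135k nanosafes from the last GET):

```python
# Store cost from the scarcity multiplier, with the example values above.
u = 0.7                    # fraction of coins still unfarmed
R = 135_000                # reward of the most recent GET, in nanosafes
m = 2 * (1 - u) ** 2       # 2 * 0.3^2 = 0.18
C = m * R                  # 0.18 * 135k = 24,300 nanosafes per PUT
print(m, C)                # 0.18 24300.0
```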

New vaults

A new vault joining a section will automatically set its R to the median R of the section.
Using the lowest bid is not good, because that would immediately remove the farming advantage of the price-pressuring vaults: they would not profit from lowering their price, as they would immediately and constantly get competition from new vaults joining, and the result is probably that they just get a lower reward than before, so they have nothing to win by pressuring the price downwards. So, best would be to let new vaults default to the median, so as to get them in at an OK opportunity for rewards without pulling the rug from under the price-pressuring vaults. This way, the incentive to lower the price is kept, as they are thereby more likely to receive the bigger part of the reward. Additionally, new vaults will also have an OK chance of being the cheapest vault for some of the data they hold, without influencing the price in any way by merely joining. They simply adapt to the current pricing in the section. Any price movers among the vaults would influence it by employing their price-setting algorithms. This way, we don't disincentivise members of the section from allowing new vaults in - which would be the case if that statistically lowered their rewards.

This means that no action is required by new vault operators. However, advanced users can employ various strategies, anything from manual adjustment, to setting rules (as in, for example - naïvely - R = cheapest - 1), to feeding external sources into some analysis and outputting into the vault input, etc.

Every time a vault responds to a GET, it includes its asked price.
The price used for the reward of a GET is however always the one from the most recently established GET, so as not to allow a single vault to stall the GET request.

Example:

(GET 0 is the first GET of a new section)

GET 0:

Vault A ; response time: 20ms, price: 43k
Vault B ; response time: 25ms, price: 65k
Vault C ; response time: 12ms, price: 34k
Vault D ; response time: 155ms, price: 17k
Vault E-H: …
Reward: Most recent GET from parent before split (or [init reward] if this is the first section in the network). Say, for example, 22k
Next reward: 17k
Fastest vault: C
Cheapest vault: D
R_c = 0.2 * 22 = 4.4k nanosafes
R_d = 0.2 * 22 = 4.4k nanosafes
R_ab_eh = 0.6 * 22 / 6 = 2.2k nanosafes

GET 1:

Vault A ; response time: 24ms, price: 45k
Vault B ; response time: 22ms, price: 63k
Vault C ; response time: 11ms, price: 37k
Vault D ; response time: 135ms, price: 15k
Vault E-H: …
Reward: 17k
Next reward: 15k
Fastest vault: C
Cheapest vault: D
R_c = 0.2 * 17 = 3.4k nanosafes
R_d = 0.2 * 17 = 3.4k nanosafes
R_ab_eh = 0.6 * 17 / 6 = 1.7k nanosafes
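A small sketch of the sequencing in these two examples: the reward paid for a GET is always the one established by the previous GET, and the lowest bid seen in the current GET becomes the next reward. The helper below is invented for illustration and only uses the four vaults listed (E-H are elided above).

```python
# Hypothetical sketch of the GET reward sequencing in the example above.

def process_get(responses, current_reward, group_size=8):
    # responses: list of (vault_id, response_time_ms, bid_nanosafes)
    fastest_vault = min(responses, key=lambda r: r[1])[0]
    cheapest_vault, _, cheapest_bid = min(responses, key=lambda r: r[2])
    payouts = {
        "fastest": (fastest_vault, 0.2 * current_reward),
        "cheapest": (cheapest_vault, 0.2 * current_reward),
        "others_each": 0.6 * current_reward / (group_size - 2),
    }
    # The lowest bid of this GET becomes the reward for the next GET.
    return payouts, cheapest_bid

get0 = [("A", 20, 43_000), ("B", 25, 65_000), ("C", 12, 34_000), ("D", 155, 17_000)]
payouts, next_reward = process_get(get0, current_reward=22_000)
print(payouts)       # C (fastest) and D (cheapest) get 4400.0, the rest 2200.0 each
print(next_reward)   # 17000 -> used as the reward for GET 1
```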

Vault operator manual

When setting the price of GETs, the operator doesn't really have a clear correlation between the number set and the resulting reward.
Let's say the operator has the lowest offer; then it will win every GET, and be rewarded with 20 % of the R calculated for it (assuming it is not also the fastest responder). As R is dependent on coin and storage scarcity, this could be wildly different numbers at different times. An operator offering storage for 1000 nanos per GET would receive 200 nanos if 100 % of storage was filled and 100 % of coins issued. If on the other hand 50 % of storage was filled and 50 % of coins issued, the operator would receive 50 nanos. In other words, it is likely that the number entered in the settings is quite different from the resulting reward, which makes this configuration less intuitive.

The number to be entered is - to the operator - practically just some random number.
However, as the vault joins the section, it will have guidance on what number is reasonable. The operator then only has to worry about adjusting in relation to that, such as: set price to x % of the median section price at time T. The x % could for example be the price movement on a chosen exchange since time T.
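As one possible reading of that strategy (the direction of the scaling is my assumption: if safecoin's fiat value rises, fewer nanosafes cover the same costs, so the bid is scaled down):

```python
# Hypothetical relative bidding rule: peg the bid to the section median seen
# at join time, scaled by the fiat price movement on some chosen exchange.

def my_bid(median_at_join, fiat_at_join, fiat_now):
    # Fiat value up -> fewer nanosafes needed to cover costs -> lower bid.
    return max(1, round(median_at_join * fiat_at_join / fiat_now))

print(my_bid(100_000, fiat_at_join=0.20, fiat_now=0.25))   # -> 80000 nanosafes
```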

Game theory

Winnerā€™s curse

The risk of Winner's curse is not certain, but it could be argued that vault operators will try to outbid each other by repeatedly lowering the price, beyond reasonable valuation, to the detriment of all.

Is there a Nash equilibrium?

The low cost home operators might have an incentive to lower the price to virtually nothing, so as to quickly squeeze out the large scale operators, who would then run at a loss. After having squeezed them out, they can increase their bid again, aiming to win both cheapest price and fastest response.

A possible prevention of this would be to set reward R to be Avg(cheapest price, fastest responder price). However, any player knowing that they are the fastest responder can then set their price unreasonably high, so as to dramatically raise the reward.

A second-price auction could also prevent the squeeze-out, since there is a higher chance that the second price is high enough for the large scale operators to still gain. This would make any further price dumping beyond just below the second price meaningless for a home operator. Additionally, there would be no way for the fastest responder to artificially bump the reward by bidding a lot higher than the others.

Another prevention strategy would be to set the reward to the median of the entire section. The lowest bidder still wins their higher share, as does the fastest responder. But the share comes from the median price of the section. This way, there is little room for individual operators to influence the reward by setting absurdly high prices. In the same way, the opposite - dumping the rewards by setting absurdly low prices - is also mitigated, assuming that a majority is distributed around a fairly reasonable price.
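For comparison, here is a tiny sketch of the three candidate reward rules discussed in this section, run on the example bid list used further down in the thread; none of these is settled, it is just to make the differences concrete.

```python
# Three candidate reward rules: pay the lowest bid, pay the second-lowest bid
# (second-price style), or pay the section median. Purely illustrative.

from statistics import median

def reward_lowest(bids):        return min(bids)
def reward_second_price(bids):  return sorted(bids)[1]
def reward_median(bids):        return median(bids)

bids = [320, 150, 110, 100, 100, 98, 87, 40]
print(reward_lowest(bids), reward_second_price(bids), reward_median(bids))
# 40 87 100.0
```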

A desired property of the vault pricing system, is that it is as simple as possible, not requiring action from the average user, and not allowing for being gamed.

Cartels

Is it possible that large groups of operators would form that coordinate their price bids so as to manipulate the market? Can it be prevented somehow?

14 Likes

A fascinating post, lots to take in since it's quite different to prior ideas. My main takeaways / summary / highlights are:

The most important part is not to bring the large scale operators out of the game, the most important part is to keep the small scale operators in the game.

we have internalised and reclaimed the market valuation of storage. It is now done directly by each and every individual vault.

There is no need for the network to have a predefined price algorithm that takes into account both the potentially very large range of fiat valuations of safecoin and the inertia in allowing new vaults in.

Every time a vault responds to a GET, it includes its asked price [ie expected amount of reward].

A general comment: the use of 'price' vs 'reward' is a little confusing to me, maybe I'm just not used to it. Maybe 'expected reward' is a good substitute for 'price'…? To my mind the use of the word 'price' mixes the ideas of storecost and reward too much.

One thing I don't understand is, for "20 % to fastest responder" - how is 'fastest' measured? Fastest to respond to the neighbour (so the neighbour decides), or fastest within the section (is there a 'leader' elected that gets to decide), or is it a statistical measure of overall latency? Maybe the pricing could be done not for every GET but as a periodic poll hosted by a random elder, so 'fastest' becomes 'fastest to respond to the poll'… I dunno, just feeling that fastest implies some common baseline of measurement when I don't see how that can exist in a decentralized way. Maybe it can…?

"it could be argued that vault operators will try to outbid each other by repeatedly lowering the price, beyond reasonable valuation, to the detriment of all."
It could also be argued that vault operators will try to raise the price beyond reasonable valuation. I'm not sure what the dynamics are here. My gut says to look closer into the relation of the 20 % reward to the cheapest vs the 'default' reward from divide(60 %). I wonder if it's possible to invert the expected behaviour, or what happens in the extremities. For some reason I think back to the Monero dynamic block size algorithm, where being too far away from average causes punishment. Maybe allow vaults to set an extremely high price relative to other vaults, but if they do, they're punished for it. In the case of Monero the punishment for making bigger blocks becomes acceptable because of the possibility of future rewards being higher due to those bigger blocks, which offsets the punishment.

Would you consider having some weight for node age or being an elder?

"Is it possible that large groups of operators would form that coordinate their price bids so as to manipulate the market? Can it be prevented somehow?"
I think yes this will happen, and your idea of using median pricing is a pretty good one for helping reduce the effect, especially if the vanilla default vault uses a sane price. Punishments for bidding too far from expected ranges might help.

Still absorbing it all but nice to read these innovative ideas.

8 Likes

Good point. I struggled a bit with this, but left it for later consideration.

Let's see what we can do about it. How about this nomenclature?

Store cost

  • the cost of uploading a data chunk, paid by a client to the network; a prognosis of the data chunk's lifetime number of GETs.

Bid

  • the price for a GET that a vault operator offers to accept, used to compete for the Reward.

Reward

  • the actual price of a GET, which the network then pays the vault operator.

The use of the word "price" here is from the perspective of the network (it has to pay the operator); the reward is from the perspective of the operator.

I thought it nice to split it up into three distinct names, so as to clearly separate these three things: the cost, the bid and the reward.

What do you think?


This is from what I have perceived to be the supposed algo so far: the majority goes to the vault first responding to the GET. Maybe this was changed in RFC57? Didn't look it up. Additionally, maybe there was never a concrete suggestion for how to implement it.
But I imagined that it would be part of verifying that a vault serves a GET, and done by the same means as that is done (Elders + parsec? I haven't looked it up). The response time is compared between the members of the group. This needs to be cleared up; I'm sure someone who's into the implementation of the Elder management of GETs can shed some light on this.

This could also be good from a performance perspective. With large volumes, we may well have (it seems highly likely) a good enough system when increasing granularity and not sampling every GET. So, it would need to be figured out how large a tolerance to deviation can be accepted, and at what level of granularity that is expected to be achieved.

Yes, yes absolutely. Both directions for sure.

Yes, definitely, this was a simple first suggestion. It results in the two winners in a group (cheapest & fastest) receiving double the reward compared to all others (the number of winners could of course be made more rich/complex in tiebreaking situations, if desired).

The others need not be active in any way; they are guaranteed their share of the reward simply by hosting data and following the rules. It seemed like a fair level of motivation for the cost of improving the network in these two ways:

  1. Providing [potentially much faster/more accurate] decentralized price discovery,
  2. Speeding up delivery

So, double or some other factor - that can be discussed. Maybe even dynamic by some measure? (In that case, it is important to figure out if the added complexity of being dynamic adds enough value to be worth it.)

I am not sure I fully understand what you mean here. It sounds interesting, so if you are able to expand a bit on it and feel it worthwhile, it would be nice to hear more about it.

I think a simple way - although maybe not perfect (?) and maybe not the end solution used - is the median of the section (or the approximate network median, via neighbor gossip). As long as the majority of users are distributed around some fairly reasonable value, extreme bidding will have no influence at all. Introducing punishment and so on would be a less preferred way I think, because of the added complexity. But maybe it ends up being a good solution as well.

This is part of RFC57 IIRC, and I left it out on purpose so as not to complicate things too much in the beginning, looking at these specific dynamics in isolation to begin with. But I think the foundations of the rationale for this suggestion are sound. They are doing extra work, age is a fundamental factor in the system security algorithms and so on, and there is a reason to reward it.

Yep, I think it's likely as well, without good measurements. And yup, both of those might be good. So one thing to look for is whether these, in isolation or combination, are good enough, or if something else is needed in addition or can replace them.

With RFC57, all GET earnings are divided among the Vaults of the section according to age.

2 Likes

Ah yes, I remembered it was something like that, didn't know if speed to deliver had been excluded altogether though.

So, essentially, the parameters chosen determine what we incentivise. Age is probably quite good to include. Above, I explore the value of speed and market valuation as well.

Edit: The simplest thing would probably be to divide the remaining 60 % among the 6 non-winners according to age.
That makes speed and price discovery immediately profitable services to provide to the network (still at least 2 times more than any non-winner, regardless of its age), while also including the incentive for aging reliably on the network.

It seems to me that would work towards this aim:

Again though, the exact distribution is in the details, maybe not double, maybe dynamic, etc. etc.

1 Like

Vault bidding implications and preventive measures

% Beneficiary
22 lowest bidder (L-share)
22 fastest (F-share)
22 oldest (O-share)
15 second-oldest (O2-share)
10 third-oldest (O3-share)
4 fourth-ā€¦
3 fifth-ā€¦
2 sixth-ā€¦

100 %

A single vault can super-win by getting any combination of the LFO-shares: LO, FO, LF, LFO.
A single vault can get 66 % by being all of the top three, i.e. LFO.
So, when you are FO (fastest and oldest), how low will/can you bid (to become LFO) before it is less profitable than just staying FO?

You probably won't have to think about that unless you are an advanced user.

If median bid (and thus reward, not considering weights) is 100 nanos, you have 44 nanos when you are FO or LO.

Group has bids

320
150 ā† you
110
100
100
98
87
40

Median is 100, and your bid is 150. Now you change your bid to 35 so as to also be the lowest.

320
110
100
100
98
87
40
35 ā† you

New median is 99.

You now get 66 % of 99, i.e. 65 nanos.
The previous lowest bidder is the most likely competitor, as it was just bereft of its higher income. However, it cannot change the median bid by under-bidding you.
Actually, only bidders at 98 and above can now change the median bid by going below your bid.
So, if you started a race to the bottom with the previous lowest (40), you would not affect the reward any further after that first new lowest bid.

In a situation like that, the best game is to go to the lowest possible bid immediately, i.e. 1 nanosafe.
No-one can go below it, and anyone else also going to 1 nanosafe would have to either tiebreak or split the share with you (depending on rules).

Given that this play would be beneficial, there would probably always be a 1-nano L-bidder in every group. Less knowledgeable players could try to partake in the L-share by also going to 1 nano, and soon enough there could be a mass dump.

For this reason, it would probably be wise not to allow bids more than 1 below the current lowest bid.
Any number below that would automatically be interpreted as lowest-1, without any further consequences (too expensive to punish it, when it doesn't impact anything and can simply be ignored).
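To make the numbers above reproducible, here is a small sketch of the median shift and the "lowest - 1" clamp just suggested; the 66 % figure is the LFO share from the table earlier in this post.

```python
# Sketch of the median-bid dynamics in the example above, plus the suggested
# clamp that interprets any bid more than 1 below the current lowest as lowest-1.

from statistics import median

bids = [320, 150, 110, 100, 100, 98, 87, 40]   # the group's bids; "you" bid 150
print(median(bids))                            # 100.0

bids[bids.index(150)] = 35                     # you drop to 35 to become lowest
print(median(bids))                            # 99.0
print(0.66 * median(bids))                     # 65.34 -> ~65 nanos if you are LFO

def clamp_bid(new_bid, current_bids):
    # Bids more than 1 below the current lowest are read as lowest - 1.
    return max(new_bid, min(current_bids) - 1)

print(clamp_bid(1, bids))                      # 34: a 1-nano dump is clamped
```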

Would there still be any risk of a race to the bottom, by a set of vaults large enough to impact the median bid?

So if you expect that bidding 100 does not make sense when others bid 1, you can already work with 1 for all, and make the race only about fastest and oldest.

Then have 25, 25, 19, 13, 7, 6, 5 %.

It does make sense, because you can raise the median, and thus your income.


There is one essential aspect of the bidding that was left out of the previous examples:

A vault will most likely not bid for individual groups; that is way too work-intensive to be feasible. Every chunk has a group, and you'll probably have millions of chunks.

So, there is simply a single configurable value in the vault settings. But I chose to call it 'bid' because it should be runtime configurable for those who wish to do that, and it effectively is a bid.
The bid is used for all your chunks, and so a vault will have the same bid in all its groups.

That makes the game more macro than micro.

For the average user, it's not something they'd generally pay attention to or even need to know exists. However, I can see how software would help home vault operators increase their earnings by being able to outbid large scale operators.
And this is how it would increase the portion of small scale operators, improving the decentralisation of the network.

Advanced users can use it to increase their earnings by providing the decentralised price discovery and speed.

If there are 1, 1, 1, 1, 100, 100, 100, 1000
the median is 50.5.

If one more user tries to set 1, then the median is 1 and the upper 3/7 do not profit from setting 100 or more.
While it is a big difference for everyone to have 22 % of 100, 50 or 1. But who will set it back to 100 and lose the opportunity to get 22 % again?
It will work only if the default value is 100 and only half or fewer of the vaults try to set 1.

For the variations of rules I have set forth, there is nothing to gain in making the move you describe.

The vaults that have bid higher than 1 are currently gaining a minimum of 4-15 % of 51 = 2 to 8, and a maximum of 44 % of 51 = 22.

  1. In case of the L-share being split between all holding the lowest bid, we would get 22 % of the new median (1) split among 5, so 0.04.
  2. In case of a tiebreaker where the fastest wins, it would get 0, or maybe 0.22 if it is faster than the other 4 (i.e. if this vault is the fastest, it gets 0.22).

So, whatever rule there is, there would be no way to gain by making this move. At worst they get 0, at most an additional 0.22, which, if they have LFO, could set them at a maximum of 0.66 - and this is far less than the 2-22 they had before.

If we don't think that is enough of a hindrance, then preventive measures can be taken, like the punishments suggested by mav, for example.