Proposal: Temporarily Halting Emissions to Stabilize and Strengthen the Network

I see no hole in it; it's a perfect, ultimate solution: incentivizing uploads, rewarding genuine node runners, and lowering costs for uploaders while still putting tokens into circulation. I would upload so much more with this @Bux @JimCollinson

2 Likes

I see a small glitch, in the beginning you may get more ANT back than you spent on the upload :laughing:.

That should be temporary; as uploads increase, the network fills, storage prices increase, and leech nodes disappear, everything should balance out.

1 Like

If there is a race in the first few days to upload more data in order to get a few more ANT, so be it :smiley: Isn't that what we actually want at the end of the day? Not billions and trillions of empty/virtual nodes, but real data going to the network with real nodes!

2 Likes

Here's an explanation of the previous post:

  1. From the daily upload thread, it seems that around 33,000 more ETH than ANT were used for the upload


  2. If I've got this right, there's a difference of 5-6 orders of magnitude between emissions and rewards

I do hope someone will correct me if I'm wrong, as this seems a bit problematic, imho.

4 Likes

Point 1 was established and has been the subject of a few changes to the emissions system. If you remember, the last one worked on was the geolocation problem causing unequal rates, with certain nodes favoured over others based on their geolocation.

And I said it was less so. The response at the time, and maybe more valid then, was that they needed the large number of nodes for testing. I doubt this is as valid anymore. The issues are more along the lines of those fools killing the goose that lays the “golden emission tokens”, since a failed network means they earn no more.

@rusty.spork @Mightyfool my idea is two-pronged and not necessarily as great as I might think it is:

  1. All nodes returning a valid quote get some emissions. For instance, if 5 nodes respond correctly to a quote request from the emissions system, then each of the 5 gets 1/5 of an emission unit.

  2. The emission unit is calculated over a 24-hour period. The total amount of emissions for the day would be equal to the amount of token spent on uploads in that 24-hour period.
    [EDIT: upon reflection, a better algo would be to base the daily emission total on the amount of data stored per node and network wide. That way the emissions will increase as the nodes show themselves to be storing plenty of data. This reduces the ROI for those running leech nodes and shutting off 20% of the network twice a day]

  • this means that it will be really small to start with and only grows as more data is uploaded
  • the number of units should remain the same as it is currently - 200 quotes a minute.
  • the maximum total amount per day should be limited to the white paper curve. This means that when the uploads per day exceed the white paper amount, the emissions keep to the white paper. Adjustments can be made in a year or two.

The effect of this is to remove the incentive to run valueless nodes, and as the network grows, so do the emissions, helping to distribute the remaining tokens and also to encourage node runners.
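The emission scheme described in points 1 and 2 could be sketched roughly as follows. This is a minimal sketch of my proposal, not the actual emissions code; the function names and the simple `min()` cap against the white paper curve are my own assumptions:

```python
# Sketch of the proposed emissions rule (hypothetical names, not the real API):
#  - the day's emission pool equals tokens spent on uploads in that 24h window,
#  - capped at the white paper curve's allowance for that day,
#  - each emission unit is split equally among nodes returning a valid quote.

def daily_emission_pool(tokens_spent_24h: float, whitepaper_cap: float) -> float:
    """Pool grows with real upload spend but never exceeds the white paper curve."""
    return min(tokens_spent_24h, whitepaper_cap)

def split_emission_unit(unit_amount: float, valid_responders: list[str]) -> dict[str, float]:
    """All nodes that answered the quote request correctly share one unit equally."""
    if not valid_responders:
        return {}
    share = unit_amount / len(valid_responders)
    return {node: share for node in valid_responders}
```

So early on, with few uploads, the pool stays tiny; once daily upload spend exceeds the white paper amount, emissions simply track the white paper curve.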

In addition, the shun algorithm needs continued work. Some sort of record checking of other nodes needs to be implemented or beefed up, along the lines of the following (and maybe it already is and I am too dumb to see it happening):

  • each node iterates through its close records (the records it is responsible for)
  • the node then sends a validation request to the other nodes that are closest to each record (it sends a random value as a salt and requests a hash of the record including the salt, then checks the result)
  • a table is kept, and another node is given a black mark if it fails to respond with the correct value.
  • black marks can only be removed from the table if the other node responds correctly 3 times, or if that node is no longer one of the closest nodes to the record (ie no longer considered)
  • once 3 black marks are given to another node it is shunned, full stop, do not pass go.
  • the table will have at most the maximum-records number of entries (rows)
  • the iteration process will occur over a long period, say 10 records checked a minute max if the whole 16K records are all close records, which should never happen. It is expected a node will only ever have 8K records “active”, and maybe less than half of those are close records, so checking 4K records every 12 hours works out to about 6 a minute. Maybe too much normally, but it's a start for testing purposes.
  • if a node being sent a validation check doesn't have the record, then it should post-haste churn the record to itself.
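The salted-hash check and black-mark bookkeeping above could be sketched like this. This is a hedged sketch of the idea, not the actual node implementation; the class name, the SHA-256 choice, and the exact bookkeeping are my assumptions, with the thresholds (3 marks to shun, 3 correct replies to clear) taken from the bullet points:

```python
import hashlib

def salted_record_hash(record: bytes, salt: bytes) -> str:
    """Hash of record || salt: a node can only answer correctly if it
    actually holds the record, since the salt makes precomputation useless."""
    return hashlib.sha256(record + salt).hexdigest()

class BlackMarkTable:
    """Sketch of the proposed black-mark table (hypothetical structure)."""
    SHUN_AT = 3       # 3 black marks => shunned, full stop
    CLEAR_AFTER = 3   # 3 consecutive correct replies clear a node's marks

    def __init__(self):
        self.marks = {}    # node_id -> current black-mark count
        self.streak = {}   # node_id -> consecutive correct answers
        self.shunned = set()

    def record_check(self, node_id: str, record: bytes, answer: str, salt: bytes):
        expected = salted_record_hash(record, salt)
        if answer == expected:
            self.streak[node_id] = self.streak.get(node_id, 0) + 1
            if self.streak[node_id] >= self.CLEAR_AFTER:
                self.marks.pop(node_id, None)  # marks only clear after 3 correct replies
        else:
            self.streak[node_id] = 0
            self.marks[node_id] = self.marks.get(node_id, 0) + 1
            if self.marks[node_id] >= self.SHUN_AT:
                self.shunned.add(node_id)      # do not pass go
```

A node removed from the close group would also have its row dropped from the table, per the bullet about nodes that are no longer considered.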

A modification to that could be: if a black mark occurs, then that node gets the request again (with a new salt) within a short period of time, like half an hour. This will pick out the bad nodes much quicker than taking 1 to 2 days. If needed, raise the number of black marks required to 5.

2 Likes

Marketing is easy (or unnecessary) if you have a good product. Conversely, crypto is unmarketable to anyone.

Crypto can be marketed as gambling/investment vehicles though. Many folks love that.

2 Likes

Not even sure what Emissions are, but it’s great to have a network running so we can continue developing our apps. (I just posted a huge SAFE-FS update by the way @moderators could you please approve the post?)

5 Likes

Even that is dying. Crypto is universally reviled by anyone who isn’t already heavily invested. Suckers are harder to come by than they used to be.

Just as well we ain't selling crypto, as we are selling permanent data storage.

2 Likes

Figured I’d post the response from the team here for future reference: Nov 4 Update from Bux - #5 by Profess

This topic can be closed.

2 Likes