Excellent, many of us have felt something is off but have largely been ignored, so acknowledgement is a big step forward.
I hope this is an opportunity to correct emissions and align them with network use and demand.
It’s really the same thing: finding ways to get the network launched, with a focus on developer programs to help developers. So far some temporary contractors were used, plus part of a dev’s wage, and now this larger program as well. What’s missing here is the BGF-funded labelling etc., and that’s a mistake on our part, but yes, it’s really what the BGF was meant to do, and hopefully we can make it even larger as we push out. Of course a lot will depend on ANT pricing as well, and that’s on us all to get right. The upload bugs are just annoying as hell, but we will crush those.
We did find an issue with NAT-attached devices that is causing trouble and is a very likely cause of some nodes being contactable by some other nodes, but not by nodes in general (and that includes all clients). If this is the case then it’s likely the main culprit behind things looking well in dev testnets and then failing in production, as there is a router configuration folk could have that is causing issues (port-restricted and symmetric NAT routers). Anyway, we are on that.
So again the BGF has been a backbone and it will do what it was intended for and get this network up with folk building on it. So thanks again man and know we are on this.
I saw the internal Slack messages and would say that is loud and clear.
A big issue, and I’m so sorry for not seeing/acknowledging it - as you say, and just to confirm, we will take the opportunity and correct course.
Any ETA on plans being shared re: emission changes?
I hope a pause is considered, to recoup oversupply (and shake some leeches off the network!).
It needs to be addressed asap - Monday will be the aim. Autonomi has been live for just over a month, so we also need to have a conversation on the impact of this - the emissions are planned over 12 years, so we don’t want to do anything too extreme which may inadvertently create instability, uncertainty or upset - I think there’s been more than enough of that already.
This is very nice to read.
I’m not sure we will pause, but we will update the contract to correct over a short-term period (3-4 months).
We would like to be back on track (and true to the WP) come the summer, so that Autonomi can provide a better experience for everyone: node runners, application builders and token holders - all of whom play a unique and important role in what will hopefully be by then a fully functioning and rapidly growing ecosystem.
I should have done the calculations when I got the write-up, but I didn’t - I am very grateful you did, and that I looked in here today and saw @josh’s comment. I’m so sorry, guys.
Does that mean that lower than planned emissions will be generated until then to make up for the error?
I have to say - it does, as ‘yes’ doesn’t reach the character requirement, but the answer is yes.
Ok, thanks for the info.
Hate to be that guy, but isn’t this just simple maths? How has something this important been allowed to happen?
You can be that guy and you should be. I am digging into how this was missed (despite what I now understand was a lot of conversation about it). The gap appears to be that the linear approach taken to coding the contract was believed to be correct by the guys doing it, combined with the fact that the issue in the community appeared to be more about the lack of earnings for smaller node providers, as opposed to the total earnings that were being emitted across the board. Ultimately therefore, it was down to a break in process, diligence and communication - and that is absolutely (and rightfully) on me.
It’s a pretty big oversight, but if it reduces the size of the network while the known issues are worked out, and more nimble, smaller operators remain for rapid upgrades, it is a blessing in disguise.
Let’s hope it works that way.
Great attitude. Failures don’t have to be all bad if they’re transformed into opportunities to improve. Let’s hope something like the following gets put in place to add some rigor to the operation:
I’m quite keen to see lower emissions to make it completely uneconomical for a data center environment, and hopefully allow a base of home node runners to get involved and be the backbone of the network.
If you don’t think you’re seeing errors, then there is no issue to observe or flag. So I think it’s less about tools than it is about process around implementation, and also diligence around issue investigation. In this case the ‘issue’ wasn’t just about the amount people were earning on nodes, or not, as the case may be (that perceived issue was tackled by David and myself on the Wednesday Discord stages), but actually about the amount of token being granted/allocated in the first place, whether on a one-minute or two-minute basis. If that amount was judged to be correct, then the failure lies in the deployment process and not the monitoring - although anyone who knew the WP could have seen the same (me), so having the tools but not using them is a valid take here. Lessons learned, and apologies (over)due to you all for sure.
I’d argue that awaiting errors is too reactive. Tooling to verify correctness is a part of the implementation process as far as I’m concerned. shrug
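To make the "tooling to verify correctness" idea concrete, here is a minimal sketch of the kind of check that could run before and after any contract change: pull cumulative emissions from the chain and compare them against the whitepaper schedule, flagging drift before it compounds. Everything in it is illustrative - the numbers, the decay curve standing in for the real WP schedule, and the 5% threshold are all assumptions, not the actual Autonomi contract or figures.

```rust
// Hypothetical emissions sanity check - all names and numbers are made up.

/// Expected cumulative emissions after `days`, using a placeholder
/// exponential-decay curve standing in for the real WP schedule.
fn expected_cumulative(days: u64, total_emission: f64, total_days: u64) -> f64 {
    let k = 4.0; // decay constant, illustrative only
    let t = days as f64 / total_days as f64;
    total_emission * (1.0 - (-k * t).exp()) / (1.0 - (-k).exp())
}

fn main() {
    // Illustrative figures; the real values would come from the WP.
    let total_emission = 1_000_000_000.0;
    let total_days: u64 = 12 * 365; // 12-year emission window

    let day = 35; // roughly "live for just over a month"
    let actual_cumulative = 12_000_000.0; // would be read from on-chain data

    let expected = expected_cumulative(day, total_emission, total_days);
    let drift_pct = (actual_cumulative - expected) / expected * 100.0;

    println!("day {day}: expected {expected:.0}, actual {actual_cumulative:.0}, drift {drift_pct:+.1}%");
    if drift_pct.abs() > 5.0 {
        println!("WARNING: emissions drifting more than 5% from the WP schedule");
    }
}
```

Something this small, run on a schedule, would have surfaced the linear-vs-schedule mismatch within days rather than a month.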
I bet they were entertaining