Really good news about emissions being halted, and ‘Autonomi 2.0’ sounds intriguing.
I’m excited to hear the details about future plans in February!
It sounds like a lot has been learned from 2025’s Autonomi experiences. Let’s hope the technical solutions for 2.0 enable the network to fly in a way that wasn’t possible with v1, and that the economic design for Autonomi 2.0 is sufficiently resourced to avoid mistakes like those seen in 2025.
I hope 2026 brings a more positive vibe & some optimism to the community too… it’s been pretty negative around here, and it feels like the only way is up in this regard
Great work all at Maidsafe. I hope 2026 is a far more rewarding year for all of you, and that you get to see the network really start to fly!
I love this update, and the community loves this update. There is a lot of love in the air. Thanks @Bux and the whole team, you really listened to us behind the scenes.
It fair cheered me up after ending up temporarily in a wheelchair (long story, but I'm fine) and getting thoroughly disappointed by the soft-top Saab I went to see. I really had my heart set on that one, but sometimes you just have to accept an idea is shite and find another way. So I did, and got something else just as nice.
And then I come home and find I wasn't the only one that pivoted direction today.
More later, I'm off to put some fast miles on this beast.
I may be a bit slow, but reading between the lines (see the comment about data permanence) it appears that there's either a possibility or an intention to wipe v1.0 data and start v2.0 clean. If so, that would nullify the questions I posed, which haven't been addressed.
@Bux @chriso, please can you clarify whether this is or is not a possibility, or is in fact the intention? It seems an important point.
I would not exclude bugs in the released code causing oddities at times that appear to come from custom code. No shade on anyone, just the realities of non-simple code.
It will certainly bring new dynamics. A node is no longer a fully independent entity as far as the code is concerned; it is now being categorised and is up for exclusion based on some indicator, and any bugs in that could exclude whole countries, or nodes sharing other common factors.
Will my 4 machines at home be considered too much, even though they run a modest number of nodes? Maybe we will go back to 1 node per IP address, with nodes of variable size (a major redesign for sure).
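To make the worry concrete, here is a minimal, entirely hypothetical sketch of indicator-based categorisation: nodes grouped by the /24 subnet of their IP, with an invented per-subnet cap. Nothing here is from the Autonomi codebase; `MAX_NODES_PER_SUBNET` and `subnet_key` are made up for illustration.

```rust
// Hypothetical sketch only -- not Autonomi code. It shows one way a network
// *could* categorise nodes by a shared indicator (here, the /24 subnet of
// the reported IP) and cap how many nodes from one group are accepted.
use std::collections::HashMap;
use std::net::Ipv4Addr;

/// Invented cap per /24 subnet, not a real limit from any spec.
const MAX_NODES_PER_SUBNET: usize = 25;

/// The grouping key: first three octets of the IPv4 address.
fn subnet_key(ip: Ipv4Addr) -> [u8; 3] {
    let o = ip.octets();
    [o[0], o[1], o[2]]
}

fn main() {
    // Pretend these are the IPs of candidate nodes.
    let nodes: Vec<Ipv4Addr> = vec![
        "203.0.113.7".parse().unwrap(),
        "203.0.113.8".parse().unwrap(),
        "198.51.100.1".parse().unwrap(),
    ];

    let mut groups: HashMap<[u8; 3], usize> = HashMap::new();
    for ip in &nodes {
        let count = groups.entry(subnet_key(*ip)).or_insert(0);
        *count += 1;
        let accepted = *count <= MAX_NODES_PER_SUBNET;
        // A bug in subnet_key, or a key that is too coarse (a whole ISP,
        // a whole country), rejects legitimate operators wholesale.
        println!("{ip}: {}", if accepted { "accepted" } else { "over cap" });
    }
}
```

The point being how small the surface is: one wrong line in the grouping key and whole ISPs land in the same bucket.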
Interesting. And we will have to wait and see, as Chriso seems to indicate that it is yet to be done (or finalised), with multiple suggestions/plans in play.
There is so much that can happen once you categorise nodes according to some indicator, and likely a lot more complexity added due to unintended consequences. Like ISPs that have only a few publicly visible exit IP addresses. Maybe computer fingerprinting will be done, and cause issues when one brand of firewall or defence software makes all computers present the same fingerprint to thwart trackers.
Oh well, we will wait and see. So many questions raised by this, and no answers (either not known, or kept to themselves).
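On the fingerprinting worry specifically, here's a tiny sketch (invented data and logic, again not anything from Autonomi) of how grouping by a shared indicator collapses unrelated operators into one bucket once defence software normalises every machine to the same fingerprint:

```rust
// Sketch of the collision worry: if nodes are bucketed by some fingerprint
// (OS, firewall banner, timing profile...), privacy tooling that makes every
// machine look identical merges unrelated operators into a single bucket.
use std::collections::HashMap;

fn main() {
    // (owner, fingerprint) pairs -- invented data. Three different owners'
    // defence software has normalised them all to one fingerprint.
    let nodes = [
        ("alice", "generic-fw-v2"),
        ("bob", "generic-fw-v2"),
        ("carol", "generic-fw-v2"),
        ("dave", "custom-setup-99"),
    ];

    let mut buckets: HashMap<&str, Vec<&str>> = HashMap::new();
    for (owner, fp) in nodes {
        buckets.entry(fp).or_default().push(owner);
    }

    // Any per-bucket cap or penalty now hits alice, bob and carol together,
    // even though they share nothing but their defence software.
    for (fp, owners) in &buckets {
        println!("fingerprint {fp}: {} owners {:?}", owners.len(), owners);
    }
}
```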
Speculation: If there is a data wipeout, it’s probably good the storage space has been so inflated, emissions so wasteful and the token price so low; people won’t get that pissed off about wasting a tiny bit of money. I would even forgive the team for not telling us in this particular case, as knowing we’re just space monkeys would influence our behaviour. But overall, I’m wary of falling back into optimism too much.
Good to hear back after a while though, nice update and all.
Agree. I'm probably in the top 3 of most uploaded (around 250 GB and counting); it cost me ± $100 in total. But there has been data loss several times already, and I'm expecting a huge data loss shortly after January 20th anyway. I would love to see uploading, even at 0% network capacity, cost significantly more ANT than it does now. Even though we want the network to fill, and I get the mechanism, I don't think permanent data should ever cost near nothing, as it is still a burden on node operators (permanently).
I wonder to what extent this situation may be the result of a departure from the original assumptions of the network. It is a well-known pattern: excellent assumptions are laid down for a project, and then, during implementation, they are deviated from or modified.
It often happens that when such an innovative and complex project is developed, many important elements that together form the system are taken into account at the beginning, as is the case with Autonomi, but during development certain seemingly minor arrangements change and the work systematically begins to drift from the original plan. The result is that at some point we realise we have deviated significantly from the original assumptions, which have begun to take their toll, even though all changes were made in good faith.
This is why creating highly advanced systemic innovations and inventions is so difficult, complicated and time-consuming.
IMO, node operators need to respond to incentives. If storage is too cheap, there are too many nodes. If storage is expensive, there are insufficient nodes.
Maybe the S-curve for pricing could do with tightening at the extremes, if it is too cheap/expensive at the edges. However, I think the concept/design is right, and we should start to see that balance emerge as data fills the network and/or nodes drop (due to operating costs exceeding fees).
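For anyone who hasn't pictured the S-curve: below is a toy logistic pricing function, my own illustration rather than the real Autonomi formula. The `steepness` and `floor` parameters are invented knobs. `floor` is the sort of minimum price that would stop permanent data costing near nothing on an empty network, and `steepness` is the lever you'd tune when "tightening at the extremes".

```rust
// Toy S-curve for storage pricing -- an illustration, not Autonomi's formula.
// `fill` is the fraction of network capacity used (0.0..=1.0). The logistic
// curve is centred at 50% fill and scaled between `floor` and `ceiling`.
fn price(fill: f64, steepness: f64, floor: f64, ceiling: f64) -> f64 {
    let s = 1.0 / (1.0 + (-steepness * (fill - 0.5)).exp());
    floor + (ceiling - floor) * s
}

fn main() {
    // Compare a gentle curve with a steeper one: the steeper curve hugs the
    // floor/ceiling longer near the edges and swings harder in the middle.
    for fill in [0.0, 0.25, 0.5, 0.75, 1.0] {
        println!(
            "fill {:>4.0}%: gentle {:8.4}  steep {:8.4}",
            fill * 100.0,
            price(fill, 6.0, 0.01, 100.0),
            price(fill, 20.0, 0.01, 100.0),
        );
    }
}
```

Whether uploads feel "too cheap" at low fill then comes down to two choices: how high the floor sits, and how fast the curve climbs away from it.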