Weekly Developer Update: January 15, 2026

We’re kicking off the 2026 weekly updates, as expected, with a summary of the main work-streams. Before we dive into that, we also want to share some broader strategic plans and decisions that will impact the Network and the way we are approaching its use in this rapidly evolving digital environment. This is only a summary introduction; it will culminate in a detailed update and deployment plan for Autonomi 2.0, which will be published on February 19, 2026 (after the current stream of related work completes).

The 20th of January, 2026 (this coming Tuesday), will be the last day of emissions (based on the UK timezone), with emissions halted thereafter. An update on plans for the distribution of ANT going forward will be included in the publication set for release on February 19, 2026.

There are several reasons for taking this action, but the overwhelming one is this: now that early supporters hold ANT, and now that we have gained the learnings and insights we badly needed by having a product live in market, we want to bring focus back to the broader proposition that ‘Autonomi’ was designed to deliver against. As it stands today, Autonomi 1.0 represents only a tiny fraction of the P2P environment and capabilities that we sought to deliver, and are now able to deliver and scale.

In the interim (the period between Autonomi 1.0 running and Autonomi 2.0 going live), the only way an individual will be able to earn ANT is through the hosting of data. Until Autonomi 2.0 rolls out (expected early Q2), there will remain no guarantee of data permanence. The instabilities we saw, particularly late last year and rolling into this one, have been caused by the coordinated activities (going online/offline) of large numbers of ‘nodes’, although replication and timeouts have also created issues previously. These aspects, while continuing to be addressed, are not helped by the fact that the majority of the Network now appears to be running old versions of the code and/or custom builds (circa 67%). These operators may be removing features, or may simply be updating the reported node version without actually upgrading the code; either way, the behaviour is problematic. Removing features (likely via AI) or failing to upgrade means that improvements such as updates to KAD (Kademlia) query timeouts, for example, are not able to function, and therefore cannot improve or support performance and permanence.

We’re hoping that the removal of misaligned incentives will go a long way towards resolving such issues. However, it is important to flag that this private data centre component of a decentralised ecosystem will not be able to contribute (by design) in Autonomi 2.0. Please also note that we will continue uploading data onto the current network as part of our testing.

The learnings from nearly a year of the proposition being live, and from the live testing phase before that, have been invaluable and have led to some enormous advancements that we have yet to share publicly.

In terms of the current network (Autonomi 1.0):

Merkle Improvements (Better Ways to Handle Data Upload Payments)

We have extended the wait time to 2 minutes, and have added a new command-line tool that lets us compare what the user’s device sees versus what a specific network computer sees during a payment check. However, as mentioned above, we have also observed that almost 70% of the current network is running custom builds and/or old versions, which is likely exacerbating the issues that we’re experiencing.
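For illustration only, here is a minimal sketch of the kind of comparison such a diagnostic tool performs: taking the set of chunk addresses the client believes it paid for and the set of records a specific node reports holding, and listing what each side is missing. The function name and record format here are hypothetical, not the actual Autonomi CLI.

```python
# Hypothetical sketch (not the actual Autonomi tool): diff the client's
# view of a payment against a single node's view to spot mismatches.

def diff_views(client_view: set[str], node_view: set[str]) -> dict[str, set[str]]:
    """Return the addresses each side has that the other is missing."""
    return {
        # paid for by the client but never seen by the node
        "missing_on_node": client_view - node_view,
        # held by the node but not part of the client's payment
        "unexpected_on_node": node_view - client_view,
    }

client = {"chunk_a", "chunk_b", "chunk_c"}
node = {"chunk_a", "chunk_c", "chunk_d"}
report = diff_views(client, node)
print(report["missing_on_node"])      # chunks the node never received
print(report["unexpected_on_node"])   # chunks the client didn't pay for
```

A non-empty "missing_on_node" set would point at a delivery or timeout problem on that node, rather than a payment problem on the client side.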

To help combat these issues, we have also made an update so that if a network computer times out while checking things, it will share whatever partial info it has instead of nothing at all. This keeps the process moving forward instead of stopping completely. Alongside this, we have also simplified how a user’s device asks for nearby network computers, and made it try again with a bigger group if needed.
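The “try again with a bigger group” idea can be sketched as follows. This is an illustrative outline only, with made-up function names and numbers, not Autonomi’s actual implementation: ask for the closest peers, and if too few usable ones come back, widen the request and try again.

```python
# Hypothetical sketch of widening a closest-peers request on retry.
# `lookup(target, k)` stands in for a network query that returns up to
# k peer IDs; some peers may be unreachable, so fewer may come back.

def closest_with_retry(lookup, target: str, k: int = 5, factor: int = 2,
                       max_k: int = 20, needed: int = 5) -> list[str]:
    """Query the k closest peers, doubling k until enough respond."""
    while True:
        peers = lookup(target, k)
        if len(peers) >= needed or k >= max_k:
            return peers
        k = min(k * factor, max_k)  # widen the group and retry
```

The cap (`max_k` here) matters: without it, a partition where few peers ever respond would cause unbounded retries instead of a graceful partial result.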

These fixes should make uploading data more reliable and less frustrating for everyone - reducing failed attempts and helping ensure payments for storing data go through without issue.

Indelible Pre-Launch (A Tool for Organizing and Managing Data Uploads for Businesses or Teams)

A lot of focus has gone into improving the user interface, and we’re now moving to full testing coverage on the frontend. The team has also been working through issues with single sign-on, including storing login info temporarily, limiting login attempts to prevent abuse, and error handling.

For Docker, we have started hosting ready-to-use images on a public registry (Docker Hub), added ways to import and export a database (like backing up user info and data maps), and added further tests for the import/export features.

Mobile Bindings (Tools for Building Mobile Apps)

Finished a basic setup for iOS and looked into creating bindings for C. Shared guides on how to use these from C. Tested the C bindings with a sample project; they’re basic but work through auto-generated code. Added more guides and tutorials, meaning this phase is now fully complete.

Code Quality and Other Updates

Reworked the code for uploading data on the user’s side to make different processes more consistent, added better ways to log info (record messages) during development, shared common settings, and improved messages during retries (like “Retrying 9 chunks…”). While these changes are somewhat backroom and not that thrilling, with a project of such history and so many contributions it is essential that we streamline, sort, and better maintain our code, especially so that AI can successfully navigate, read, and interpret it.

51 Likes

Looks good. Hopeful for the future :slight_smile: Great to finally see the incentives issue being fixed.

17 Likes

I’m feeling better, bye emissions.

@Southside, how’d they do?

13 Likes

Omg I’m early, third!

10 Likes

F@#K YESSS!!!

Thank you, Thank you, Thank you.

25 Likes

Love the enthusiasm. Really agree though, I had the same initial response.

20 Likes

The dream is back! :smiling_face_with_sunglasses:

18 Likes

This is great news! It took a while, but it looks like they’re learning from the hard knocks encountered this year. I’m looking forward to February 19th!

17 Likes

Emissions are soon to stop completely :partying_face:

Quite a U-turn, very popular in UK these days.

If this means no data center nodes at all, this is a radical change, and welcome in the long run to decentralise and democratise the network, but:

  • technically, how can data center nodes be excluded?
  • how will sufficient nodes be provisioned (ex-data center) when so many are currently run from data centers?
  • how will the transition from 1.0 to 2.0 be made so as not to lose data as node numbers drop dramatically?

Anyway, this is an exciting start to 2026 and sounds positive if I’ve understood what’s being said. :clap:

26 Likes

We are past being concerned about that one for now :rofl:

14 Likes

Get those machines switched back on :joy:

14 Likes

I mentioned on discord, but perhaps would be good here too:

1:

It will be interesting to see the dynamics of needing storage when the network is full. Does it mean there will be the ability for nodes to be in waiting, offering storage and just waiting to be filled, and then paid? Or would we have to fall back on some incentive system that only kicks in when there is less than 10% free space on the network?

2:

I guess we might be able to transfer all current data by offering free payment to transfer your data. You might be able to prove you own any private data via some tx or hash. That’s if there is any need to start somewhat fresh.

3:

I do like the idea that instead of emissions, just use some of that to pay for public data to be uploaded. Perhaps via grants to the internet archive, or to people that wish to make certain copyright free material accessible forever. I know there might be legal questions, but it would still be a good option to look into, and it might get some cool data to showcase!

6 Likes

The new tech will prevent close groups being formed from the same network or geographical location.

So people would still be able to run nodes in a data center, but a few individuals won’t be able to occupy significant percentages of the network with large cloud deployments.

Edit: sorry, I won’t follow up with any more details than this. I won’t know the answers at the moment, and would need input from David.

26 Likes

How will this tech prevent things like vpns and proxies from being used to obfuscate geo-location?

8 Likes

See my edit.

Sorry, I won’t be able to follow up more on what I’ve said.

11 Likes

@chriso thanks for answering what you can. Another promising aspect of v2.0 perhaps :+1: No further replies needed.

So this is a limit on nodes per ‘location’ (e.g. per IP address and potentially other markers). I suspect easy wins against most ‘bad’ actors in the short term, but that reducing incentives such as by stopping emissions will be necessary too.

Because if emissions are significant, it incentivises the construction of multi-location deployments by those willing to do so. I don’t believe that’s hard if someone can just deploy containers to multiple data center locations and services, and I expect the big bad boys already do that anyway.

It will be good to have at least a test period with a much higher proportion of nodes at home than we’ve had since last March.

It’s hard to understand how we got to a situation of 70% bad nodes before deciding to stop emissions. It was apparent early on they were counter productive. I see that having a large network was also beneficial for ironing out bugs though, so maybe this was useful in spite of appearances, and perhaps by design behind closed doors.

20 Likes

Great update! Thank you to the team, developers, testers, and the entire community involved for your hard work in 2025. :clap: :clap: :clap: :ok_hand:

I sincerely hope that the changes will resolve the issues that have been multiplying recently.

@chriso, we currently have 1,837,000 nodes. I wonder whether, after implementing changes to prevent the formation of close groups from the same network or geographical location, the network will be deprived of a large number of nodes. Will the network be fast enough after these restrictions are implemented?

13 Likes

Great start to the new year, all the best to everyone for 2026.
Avante!

12 Likes