We’re kicking off the 2026 weekly updates with the usual summary of the main work-streams, but before we dive into that, we also want to share some broader strategic plans and decisions that will affect The Network and the way we are approaching its use in this rapidly evolving digital environment. What follows is only a summary introduction; it will culminate in a detailed update and deployment plan for Autonomi 2.0, to be published on February 19, 2026 (after the current stream of related work completes).
The 20th of January, 2026 (this coming Tuesday) will be the last day of emissions (based on the UK timezone); they will be halted thereafter. An update on plans for the distribution of ANT going forward will be included in the publication set for release on February 19, 2026.
There are several reasons for taking this action, but the overwhelming one is this: now that early supporters hold ANT, and now that we have gained the learnings and insights we badly needed from having a product live in market, we want to bring focus back to the broader proposition that Autonomi was designed to deliver. As it stands today, Autonomi 1.0 represents only a tiny fraction of the P2P environment and capabilities that we sought to deliver and scale, and are now able to.
In the interim (the period between Autonomi 1.0 running and Autonomi 2.0 going live), the only way an individual will be able to earn ANT is by hosting data. Until Autonomi 2.0 rolls out (expected early Q2), there remains no guarantee of data permanence. The instabilities we saw late last year, rolling into this one, were caused by the coordinated activity (going online and offline) of large numbers of nodes, although replication and timeouts have also created issues previously. These aspects, while still being addressed, are not helped by the fact that the majority of The Network now appears to be running old versions of the code and/or custom builds (circa 67%). These operators may be removing features, or may simply be updating the reported node version without actually upgrading the code; either way, the behaviour is problematic. Removing features (likely with AI assistance) or failing to upgrade means that improvements such as updates to Kademlia query timeouts cannot take effect, and therefore cannot improve performance and permanence.
We’re hoping that the removal of misaligned incentives will go a long way towards resolving such issues. It is important to flag, however, that this private data-centre component of a decentralized ecosystem will not be able to participate (by design) in Autonomi 2.0. Please also note that we will continue uploading data onto the current network as part of our testing.
The learnings from nearly a year of the proposition being live, and from the live testing phase before that, have been invaluable and have led to some enormous advancements that we have yet to share publicly.
In terms of the current network (Autonomi 1.0):
Merkle Improvements (Better Ways to Handle Data Upload Payments)
We have extended the payment-verification wait time to 2 minutes, and have added a new command-line tool that lets us compare what the user’s device sees versus what a specific network computer sees during a payment check. However, as mentioned above, we have also observed that almost 70% of the current network is running custom builds and/or old versions, which is likely exacerbating the issues we’re experiencing.
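To illustrate the idea behind that comparison tool, here is a minimal sketch in Rust. All names here are hypothetical (this is not Autonomi’s actual API); it simply diffs the set of record holders the client observed against the set a given network computer reports, which is the kind of mismatch the tool is meant to surface.

```rust
use std::collections::BTreeSet;

// Hypothetical helper, for illustration only: report which holders appear
// in one view of a payment check but not the other.
fn diff_views(client: &[&str], node: &[&str]) -> (Vec<String>, Vec<String>) {
    let c: BTreeSet<&str> = client.iter().copied().collect();
    let n: BTreeSet<&str> = node.iter().copied().collect();
    let only_client = c.difference(&n).map(|s| s.to_string()).collect();
    let only_node = n.difference(&c).map(|s| s.to_string()).collect();
    (only_client, only_node)
}

fn main() {
    let (only_client, only_node) =
        diff_views(&["peer-a", "peer-b", "peer-c"], &["peer-b", "peer-c", "peer-d"]);
    println!("seen only by client: {:?}", only_client); // ["peer-a"]
    println!("seen only by node:   {:?}", only_node);   // ["peer-d"]
}
```

An empty diff in both directions would indicate the two sides agree on who holds the paid-for records.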
To help combat these issues, we have also made an update so that if a network computer times out while checking, it shares whatever partial information it has rather than nothing at all. This keeps the process moving forward instead of stopping completely. Alongside this, we have simplified how a user’s device asks for nearby network computers, and made it retry with a bigger group if needed.
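The two behaviours above can be sketched as follows. This is a simplified illustration under assumed names and types (none of it is Autonomi’s real code): one function keeps whatever replies beat a time budget instead of discarding everything on timeout, and the other widens the candidate group for each retry.

```rust
use std::time::Duration;

// Hypothetical reply type for illustration; not Autonomi's actual API.
#[derive(Debug)]
struct ProbeReply { holder: String, has_record: bool }

/// Gather verification replies within a time budget, returning whatever
/// partial set arrived in time instead of failing outright on timeout.
fn gather_within_budget(
    replies: Vec<(Duration, ProbeReply)>, // (arrival offset, reply)
    budget: Duration,
) -> Vec<ProbeReply> {
    replies
        .into_iter()
        .filter(|(offset, _)| *offset <= budget) // keep only replies that beat the deadline
        .map(|(_, reply)| reply)
        .collect()
}

/// Retry a closest-peers lookup with a progressively larger candidate group.
fn widened_group_sizes(initial: usize, max: usize) -> Vec<usize> {
    let mut sizes = Vec::new();
    let mut n = initial;
    while n <= max {
        sizes.push(n);
        n *= 2; // double the group on each retry
    }
    sizes
}

fn main() {
    let replies = vec![
        (Duration::from_secs(10), ProbeReply { holder: "peer-a".into(), has_record: true }),
        (Duration::from_secs(50), ProbeReply { holder: "peer-b".into(), has_record: true }),
        (Duration::from_secs(130), ProbeReply { holder: "peer-c".into(), has_record: false }),
    ];
    let partial = gather_within_budget(replies, Duration::from_secs(120));
    println!("{} of 3 replies within the 2-minute budget", partial.len()); // 2 of 3
    println!("retry group sizes: {:?}", widened_group_sizes(5, 40)); // [5, 10, 20, 40]
}
```

The point of the first function is that two replies out of three is still actionable information, whereas an all-or-nothing timeout would have stalled the whole check.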
These fixes should make uploading data more reliable and less frustrating for everyone - reducing failed attempts and helping ensure payments for storing data go through without issue.
Indelible Pre-Launch (A Tool for Organizing and Managing Data Uploads for Businesses or Teams)
A lot of focus has gone into improving the user interface, and we’re now moving to full test coverage on the frontend. The team have also been working through issues with single sign-on, including storing login info temporarily, limiting login attempts to prevent abuse, and error handling.
On the Docker side, we have started hosting ready-to-use images on a public registry (DockerHub), added ways to import and export a database (for example, backing up user info and data maps), and added further tests for the import/export features.
Mobile Bindings (Tools for Building Mobile Apps)
We finished a basic setup for iOS and looked into creating bindings for C. We shared guides on how to use these from C, and tested the C bindings with a sample project; they’re basic but work through auto-generated code. With more guides and tutorials added, this phase is now fully complete.
Code Quality and Other Updates
We reworked the client-side upload code to make the different processes more consistent, added better ways to log information during development, shared common settings across components, and improved the messages shown during retries (like “Retrying 9 chunks…”). While these changes are somewhat backroom and not that thrilling, it is essential, with a project of this history and so many contributions, that we streamline, sort, and better maintain our code, especially as AI needs to be able to successfully navigate, read, and interpret it.
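As a small illustration of what “improved retry messages” means in practice, here is a hypothetical helper (the function name is ours, not from the actual codebase) that produces the kind of message quoted above, with correct pluralisation:

```rust
// Hypothetical helper for the retry message quoted above; illustrative only.
fn retry_message(remaining: usize) -> String {
    let plural = if remaining == 1 { "" } else { "s" };
    format!("Retrying {remaining} chunk{plural}…")
}

fn main() {
    println!("{}", retry_message(9)); // Retrying 9 chunks…
    println!("{}", retry_message(1)); // Retrying 1 chunk…
}
```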

