pretty massive upgrade, can’t wait to see the new network in action. Was wondering about all the apps that were built for Impossible Futures. Will they work just as well on Autonomi 2.0?
We are changing the backing client code (the API), so builders will need to update. We don’t have a full spec for what is changing yet, but we will communicate it as soon as it’s confirmed, to get it into builders’ hands fast.
That is some update and a lot to digest! There was clearly a lot going on that had to be kept behind closed doors.
Until I read down to the timescale table, I thought we were looking at five years to implement all that! There must have been a lot going on in the background. If those deliverables slip a bit, I think we’ll all understand, given the scale of the work.
I don’t know what is more impressive: what has been achieved already, what is being planned or the ability of the team and community to stick with it.
I’ve never been more optimistic!
It’s a girl! Committed by dirvine and claude 2 weeks ago:

thefae.com - Coming Soon - Created by Saorsa Labs, sponsored by the Autonomi Foundation
Hi team!
Below sounds fantastic! Has it been coded already, or will that happen in the coming weeks?
The Network. In 2.0, native QUIC handles NAT traversal without STUN or ICE. No configuration. No technical barrier. A network where only sophisticated operators can run nodes isn’t genuinely decentralised; this update fixes that at the architecture level.
I will update AntTP as soon as the new libs are out.
IMIM (and soon IMIM 2.0), AntFTP, and anttpmon will all continue to work on top of AntTP then.
I’m hoping that the data types will remain largely the same, but with new dependencies etc. It would be good to get more details though.
Thx 4 the update Maidsafe devs
I use GraphEntry as a DBC for Eddies and Scratchpad as their superposition copy in other accounts, all thanks to you/Dweb and the Maidsafe devs
When you’re one AI generation away from coding whatever you want, you worry less about a token that doesn’t get delivered on Autonomi 2.0.
Meshnets, p2p anterrnet, decanterrlized anterrnet
Keep hacking super ants
This is a blockchain product. Hopefully Autonomi 3.0 will be the one that sheds the dead weight. Until then, I don’t think we can say the original vision has been fulfilled. This product will continue being a work in progress until we get out of that ecosystem.
Could you let the user upload the data to the new network (if it’s public) and just validate that by checking the hash of the file on both networks, and then refund them for it?
That strikes me as a good use of the dosh that would have been wasted on further emissions
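The refund check suggested above could be sketched roughly like this. Note that `file_hash` and `eligible_for_refund` are hypothetical helper names, and SHA-256 stands in for whatever content-addressing the real client uses:

```python
import hashlib

def file_hash(data: bytes) -> str:
    # SHA-256 as a stand-in for the network's real content hash.
    return hashlib.sha256(data).hexdigest()

def eligible_for_refund(old_network_bytes: bytes, new_network_bytes: bytes) -> bool:
    # Refund only if the exact same public data exists on both networks.
    return file_hash(old_network_bytes) == file_hash(new_network_bytes)

print(eligible_for_refund(b"holiday-photos", b"holiday-photos"))  # True
print(eligible_for_refund(b"holiday-photos", b"other-data"))      # False
```

Since the data is content-addressed, matching hashes on both networks would prove the user really did migrate the same bytes before paying out the refund.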
Well, I did as far back as late 2024. Not trying to claim anything there, but it was a fundamental truth bound up in the basics of data communications that make up our current physical hardware technology. Software has a way to deal with it, but that was not taken advantage of in QUIC. The max_window_size was way too high: the default is for datacentre-to-datacentre communications where connections are kept open for a long time. QUIC adapts to ensure flow control works, but only after trying the max_window_size first. Since Autonomi makes a new connection for each record being sent (different nodes/clients require new connections), the retries before QUIC adapts each connection caused the massive traffic as soon as each node had to deliver more than a couple of chunks on non-datacentre machines (which sit behind a buffering home router).
Thus the massive traffic increase was well predicted: I said a few times that as the number of nodes dropped, the increase in retry traffic would overflow buffers. I said this in a [rough] paper I wrote and posted in 2024 on the forum here.
So yes, it was well predicted this would happen, and I called for the max_window_size to be optimised for Autonomi and home connections using run-of-the-mill ISP routers. It took 1.5 years, and the problem I predicted occurred as soon as the network was no longer primarily run on datacentre nodes.
If this is not fixed in Autonomi v2.0’s QUIC, then expect it to keep rearing its ugly head. You need to include a test for retry count in your metrics so the problem can be measured. And for *deity’s* sake, optimise the max_window_size in QUIC to a more reasonable value of sub-1/2 MB. It’s why, when the max record size was 1/2 MB, we did not really see this issue much in the testnets using home connections.
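To make the flow-control argument concrete, here is a back-of-the-envelope sketch. The buffer and window figures below are illustrative assumptions, not measured values:

```python
def in_flight_excess(window_bytes: int, router_buffer_bytes: int) -> int:
    # Bytes a sender may push beyond what the home router can buffer
    # before QUIC's flow control adapts; anything positive means
    # dropped packets and a wave of retries on a fresh connection.
    return max(0, window_bytes - router_buffer_bytes)

ROUTER_BUFFER = 256 * 1024            # assumed home-router buffer: 256 KB
DATACENTRE_DEFAULT = 8 * 1024 * 1024  # assumed oversized default window: 8 MB
HOME_TUNED = 448 * 1024               # a sub-1/2 MB window, as argued above

print(in_flight_excess(DATACENTRE_DEFAULT, ROUTER_BUFFER))  # megabytes of overshoot
print(in_flight_excess(HOME_TUNED, ROUTER_BUFFER))          # a couple of hundred KB
```

Because every record transfer starts a fresh connection, that overshoot (and the retry storm it triggers) is paid again and again rather than amortised over one long-lived connection, which is the scenario the datacentre default was tuned for.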
It was a bumpy start, and granted we aren’t live yet, but I have faith in this. Once x0x and fae are live, I think the world won’t know what hit it. Even a simple chat app with PQC could draw massive attention.
What makes something go viral is a pure mystery to me, but I would love to see the network, nodes, and probably more likely the apps (bundled with network abilities) blow up like OpenClaw did.
I think once things roll out and people have the tools in front of them (a network that is permanent, easy to run and interact with), everyone will be a developer in their own right. That changes the face of what the internet is.
What seems like wasted effort definitely feels discouraging and draining but how we react to that adversity also determines our future success. Sunk cost fallacy and all, sure, but I see it as sacrifices we made for valuable lessons. I personally plan on applying what I learned from what I sacrificed one final time, given things work as intended.
It might be good to take a month to recoup and ruminate so we come back fully rejuvenated.
As soon as things are stable, I’m back to building my dreams.
Totally agree. 4.28 am (UK), not slept a wink, but I feel fine in the knowledge that David most likely hasn’t even been to his bed
This show is well and truly back on the road, imo.
That sounds good, but I would add one more property to the list, and that is cheap, or at least not expensive. Meaning cheap also on small amounts of data, not only on huge batches with Merkle. Otherwise the network is not for everyone, but for the wealthy, and for limited use cases.
For example, it could be affordable to back up all the accumulated photos from your phone in one go, but too expensive to keep backing them up continuously. Solve that, and I am much happier.
Now, don’t read that as me screaming for a native token, for the native token’s sake. I am not. I don’t care what token, but that is the problem that needs to be solved.
Maybe that is what one of the “AI” agents is supposed to help with
What makes you think so?
But one thing I am sure stays the same, and that is:
We all have datacentre connections at our homes, don’t we? Shame home connections are usually a decade or two behind the latest comms tech. ISPs milk their customers and are a decade behind at all times. The tech for 10Gbps to 40Gbps to the home is possible, but the costs are way too high (to the moon, you might say) and the infrastructure is at least a decade away (likely two) from being commonplace. The world is still on an average of less than 40Mbps uplink. The lucky sods with Gbps+ connections skew the strict average speed a lot in the upwards direction. But even so, the strict average is 40Mbps up.
It only takes one Gbps connection to skew the average up to about 80Mbps across 15 connections at 20Mbps. The averages given by speed-test sites do not account for this and give a distorted picture.
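The skew is easy to verify with the numbers given above: fifteen home connections at 20 Mbps plus one 1 Gbps line.

```python
from statistics import mean, median

# 15 home lines at 20 Mbps plus one Gbps outlier, as in the example above.
uplinks_mbps = [20] * 15 + [1000]

print(mean(uplinks_mbps))    # 81.25 -> the ~80 Mbps "average" a speed-test site quotes
print(median(uplinks_mbps))  # 20.0  -> what the typical connection actually has
```

A median-style figure would better reflect what most home node operators actually have available, which is what matters for network design.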
Yeah, but your solution is too evident, exact and boring. Not worth even a try, so try to come up with something else, will you?
The testing for it is not as simple as you’d think, at least with the existing test setups. It basically requires a full network of home nodes and a way to do controlled testing. Not so easy.
But since v2.0 is being developed, I thought this would be a perfect time for them to test max_window_size values down to 128KB, even if only on datacentre nodes, to verify it does not destroy the network. Of course, a single chunk being sent will be slower due to requiring 32 ACKs in that time, but it also means that a potato router can handle 32 times the upload load on the same hardware. The optimal value for home setups will be somewhere between 128KB and 512KB. I say 512KB because the testnets run with the max record size set at 512KB worked quite well, even with only a tiny number of nodes and plenty of uploads/downloads.
For home users it may mean the difference between their nodes working when more than 2 or 3 chunks are being sent, and not working efficiently while causing a wave of retries until QUIC adapts each of those connections. Interleaving also means that 3 or 4 chunks being sent will go up at about the same rate as without a lower max_window_size, and without the need for retries etc.
In any case, home connections will force QUIC to adapt down to those low values anyhow, so why not optimise it up front and save the storm of retries like we have seen recently?
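A quick sketch of the ACK arithmetic above, assuming a 4 MB max record size (the assumption under which the 32-ACK figure for a 128 KB window works out):

```python
import math

CHUNK_BYTES = 4 * 1024 * 1024  # assumed max record size: 4 MB

def window_refills_per_chunk(window_bytes: int) -> int:
    # Number of flow-control window refills (ACK round-trips)
    # needed to deliver one full chunk.
    return math.ceil(CHUNK_BYTES / window_bytes)

for window_kb in (128, 512, 8 * 1024):
    refills = window_refills_per_chunk(window_kb * 1024)
    print(f"{window_kb} KB window -> {refills} refills per chunk")
```

The trade-off is visible directly: a 128 KB window needs 32 round-trips per chunk where an 8 MB window needs one, but the smaller window never overruns a home router’s buffer, so those round-trips replace a far larger storm of retries.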
