Used to bomb about in my late 20s in my buddy's blue-with-white-racing-stripe Mercury Capri, which was imported from Germany to Canada and the USA. It was a beast, actually quite quick to accelerate for the times. A whole lot of used car in 1984 for not a lot of $$. It was a 1973 model, automatic…
Having been in the open source space since 1999, wow, has it morphed.
Today, projects of worth all have big FT500 corporate dev/QA parked at their edge; it's what keeps many of these worthy projects going. OMV (OpenMediaVault) is a good example: the author's day job is with a big software company. I'm really no different. I work days for a small boutique commercial software shop called CloudProx; there are just three tech people, me included, and two commercial types.
We work in the Linux kernel, LKM ('insmod') and driver space, making flash go fast, last longer and use less power, with just two pieces of patented code: one an LKM, the other a media formatting tool. That's what gets the value to market. The rest of the actual solution a few of our customers use is all open source, and many of those projects are also supported by big companies. Without open source, this unique set of values would not see the light of day, especially in a big company.
So yeah, the chancers come, and now the network has ballooned, when in reality these types really earn a pittance if you do the math on emissions.
Their eagerness to earn a quick buck has floated an 'Autonomi balloon' that gained the project attention and some initial awareness, and it has probably brought the hackers too, to try and crash it (they failed).
So be it.
The ‘ballooning of network size’ has revealed weaknesses.
It's all good at ginormous scale.
I say bring it on: the 'chancers' will partly fall off (or get caught stealing colo or SaaS resources and fall off that way due to lack of earnings), and then we see what happens in the shrinking-network use case. Also useful.
The key is supporting the team (all of them, not just devs and QA) to help them improve in both cases, to ruggedize Autonomi even further across both the network-growth and network-shrink use cases (at different rates).
The Autonomi network needs to be ready for uploads at scale and in rapid 'up and down' node-count use cases, at different rates of acceleration and deceleration.
IMNSHO, the SAFE Network/Autonomi network is truly a great piece of bottom-up, grassroots engineering and new thinking, which is just now stepping out of applied research and into early commercialization, after what has been, IMO, a much longer and necessarily slower, deeper research period with lots of technology-evolution distractions. Kudos to Mr. Irvine and the MaidSafe team, past, present and future, for course correcting, and the same to everyone reading this forum over time, long or short.
It's inspiring, in so many ways beyond the tech thinking we find ourselves in every day, to see everyone more or less pull together (especially so when we pull in sync).
Emissions, yeah, are a marketing instrument for sure, with lots of good unintended consequences producing new learning, which powers new thinking to make everything better.
Now get uploads working in the above growth/shrink set of use cases, both slow and fast, and this project is golden.
Robust uploads and downloads together at scale, under any deceleration/acceleration of antnode count, will be the V2 (right velocity and takeoff vector) moment for the Autonomi network.
We all need to keep talking about it, telling everyone that the V2 moment is getting close, to keep the momentum gained.
N.B. These grow/shrink challenges at different rates have been solved at even bigger scale in the event-messaging space, in so-called 'Event-Driven Architectures', using plain old IP addressing with patterns; KAD will get tamed in this regard too, likely the same way. I encourage the team and community to take a look at www.solace.com to get the creative juices flowing here. They cracked the at-scale problem ten years ago for OSI Layer 6/5/4/3 event-driven message networks over IP, without gossip.
The other stuff is blocking and tackling.
If that’s the case where did all the emissions go?
Dead nodes; someone made bank.

where did all the emissions go?
They would only go to nodes that respond, so that's not an issue.
Any comment?
Do we really only have 3-6m nodes?
Looking at the emissions I've been getting with my number of nodes, I would also estimate the network size at around 5 million (on the emission-earnable node version). I would assume people who aren't earning will turn off millions of nodes pretty damn fast.
My brain hurts much less now. Going to sleep well tonight
It just couldn't compute; the effort, manpower and resources needed to get 50 million nodes up are immense.
Yeah. And from another angle it’s interesting how we all believed the numbers.
Well we knew something was wrong and blamed emissions because it would have been the only understandable reason for such madness.
Yes and it’s totally understandable to believe.
But thinking analytically, how did that belief go so long unchallenged? If I had to take just someone's word for 50M nodes… no way! But since it was computers reporting it, and it was advertised on the Autonomi site… We trust tech in a way that's a bit problematic.
Also, as scary as that big whale was, it was still sort of flattering, too. I wanted to believe.

It just couldn't compute; the effort, manpower and resources needed to get 50 million nodes up are immense.
The smart thing to do is convert that problem into a different one: how to recruit, manage and retain a small team of Linux, networking, automation, monitoring, and either cloud or datacentre engineers. That is itself a headache (believe me), but a much more approachable one, because it's too much for one person or even a couple. And imagining how to set it up and run it is fun for someone who works in enterprise IT.
No, I’m not the whale. If they even existed. But it would have been fun!
I think you could do it with 5, but they'd have to be smash-it-out-of-the-park experts in their fields. More like 10 for coverage, second opinions and resilience; 15 for 24x7 coverage.
I'll be somewhat more settled, rather than satisfied, when I see a post from the devs showing their estimate of network size and the number and percentage of each antnode version.
And an unambiguous statement as to which versions of antnode are entitled to receive emissions payouts.
Are we supposed to believe the last few weeks of mega numbers of nodes was a collective hallucination/delusion?

% of each antnode version
grep "Fetched peer version" ~/.local/share/autonomi/node/antnode13/logs/antnode.log
→ Current: 2025.1.2.13; Previous: 2025.1.2.12; Majority of nodes: 2025.1.2.11; Some of them: 2025.1.2.6

cave:~$ grep "Fetched peer version" ~/.local/share/autonomi/node/antnode13/logs/antnode.log | grep "2025.1.2.6" | wc -l
149
cave:~$ grep "Fetched peer version" ~/.local/share/autonomi/node/antnode13/logs/antnode.log | grep "2025.1.2.11" | wc -l
1723
cave:~$ grep "Fetched peer version…
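Rather than running one grep per version, the log can be tallied in a single pass. A minimal sketch, assuming the "Fetched peer version" lines carry the version string as shown in the transcript above (the log path is the one from that post; adjust to taste):

```shell
#!/bin/sh
# Count how many times each peer version appears in an antnode log,
# and print "version count percent" sorted by count, descending.
LOG="${1:-$HOME/.local/share/autonomi/node/antnode13/logs/antnode.log}"

grep -o 'Fetched peer version[: ]*[0-9][0-9.]*' "$LOG" \
  | awk '{v=$NF; n[v]++; total++}
         END {for (v in n)
                printf "%s %d %.1f%%\n", v, n[v], 100*n[v]/total}' \
  | sort -k2 -rn
```

Note this counts sightings in one node's log, not distinct peers, so it is only a rough proxy for the version mix on the network.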
It was not; there have definitely been many more nodes online before the latest (now latest two) version requirements to receive emissions.
My earnings went up 3x once the latest version was required (going from massively unprofitable to slightly profitable). This would indicate the network at its peak was around 15 million nodes, given that the different versions earned around the same.
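A quick back-of-the-envelope check of that estimate (the 5 million current-size figure is an earlier poster's guess in this thread; these are rough community estimates, not measured values):

```shell
# Per-node earnings roughly tripled when older versions lost emission
# eligibility, suggesting the eligible node count shrank to about 1/3.
current_nodes=5000000     # earlier poster's estimate of current size
earnings_multiplier=3     # "my earnings went up 3x"
peak_nodes=$((current_nodes * earnings_multiplier))
echo "$peak_nodes"        # prints 15000000
```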
My educated guess is that someone committed early on to pushing out a lot of node runners, taking advantage of the fact that an older version used fewer resources than the updated one. This let him run much more efficiently than others. Since you can't get away with that anymore, he has probably shut down most of his nodes by now. I welcome this change, not just because running nodes can now be profitable again (or at the very least not lose money), but also because at this stage the network just needs people committed to updating their nodes.
We really, really, really need the network to fill so we can finally move further and further away from datacenters and see the network as intended, with nodes using spare resources.
Uploads are a good bit smoother - 1 retry on the second track
Try my uploads !!!
bastard_sons_of_johnny_cash-the_road_to_bakersfield.mp3
Data Address: dee0a8fabeeca8824a3fa03917fdce6726c7749b037ea8fc6bbb8bea6126c9f0
I Really Don't Want To Know.mp3 - Jason and the Scorchers
Data Address: b8277991657ebed20c5313bbbcd9cd74d5ff83cfa7eb68e63f07f00125b0344c
Caution - may contain twang!!!

b8277991657ebed20c5313bbbcd9cd74d5ff83cfa7eb68e63f07f00125b0344c
Same for both:
🔗 Connected to the Network
Error:
0: Failed to fetch public file from address
1: Is a directory (os error 21)
Location:
ant-cli/src/actions/download.rs:164
I just ran the download through mpg123.
Mrs Southside came through to ask why I was dancing madly…
#WorksForMe
Try this command; it worked for me and took 2 min 15 s to download:
ant file download -x dee0a8fabeeca8824a3fa03917fdce6726c7749b037ea8fc6bbb8bea6126c9f0 audio.mp3
If you want to time the download, add 'time' before the command.
Amazing to see downloads starting to work in a network with millions of nodes.