Node count dropped below 8,000. I assume bad or stale nodes are being dropped from the network, but at what point does the node count become unsustainable?
Node count is already too low to maintain data redundancy.
There is a clear lack of protection to maintain a fully decentralized network: a single entity could spin up nodes, run them for a couple of days while simultaneously uploading a few TB of data, then shut them all down at once (having gained a majority stake in the network for perhaps a couple of thousand USD at most) and crash the entire network at will.
We have seen this network behaviour/malfunction before and I doubt there is a solution implemented. Especially now that we are seeing a very small network, it's incredibly easy to attack.
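To sanity-check that "couple of thousand USD" figure, here is a rough back-of-the-envelope sketch. Every number in it (node count, nodes per VPS, per-node quota, VPS price) is an illustrative assumption on my part, not a measurement or an official figure:

```python
# Back-of-the-envelope estimate of the attack cost described above.
# Every figure here is an assumption for illustration, not a measured value.

current_nodes = 8_000          # roughly the node count being discussed
vps_monthly_usd = 15           # assumed cost of one small VPS
nodes_per_vps = 50             # assumed nodes a single small VPS could run

# To hold a majority of the address space, an attacker would need to run
# at least as many nodes as the rest of the network combined.
attacker_nodes = current_nodes + 1
vps_needed = -(-attacker_nodes // nodes_per_vps)   # ceiling division
monthly_cost = vps_needed * vps_monthly_usd

print(f"nodes needed: {attacker_nodes}")
print(f"VPSes needed: {vps_needed}")
print(f"approx. monthly cost: ${monthly_cost}")
# With these assumptions: ~161 VPSes and ~$2,415/month, which is in the
# same ballpark as the figure above.
```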
Team, please enlighten the community as to what your solution is, and point us to your peer-reviewed research.
I am sure they will share their solution in about a week, when the plans for Autonomi 2.0 are published.
In my opinion there is no node count so low that the network becomes unsustainable. In fact, right now the node count is too high, which makes it economically unsustainable. Does it bring certain vulnerabilities? Sure does. But let's face it: in the early days of Bitcoin you could mount a 51% attack with $5,000 worth of GPUs. We need to scale gradually and organically, driven by real demand.
For me, this was the most important reason to start uploading as soon as I could: to fill the network with data. Once the network fills, the ANT cost of uploading will increase, making it more likely that people will start running nodes and decentralizing the network further.
And do keep in mind, we might have only 100 people actively running nodes right now; we're at an extremely small scale. The node count will go up, and the ANT price will go up along with it. First 1,000 people running nodes, then 10,000. And we've accomplished our goal when we're at 1,000,000,000.
The node count is actually too high for the amount of data stored. That’s why the rewards are so low. We are at about 10% of capacity.
If you want more nodes, there needs to be more data. They go hand in hand.
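As a rough illustration of what "about 10% of capacity" means, here is a quick sketch; the node count and per-node quota are figures mentioned elsewhere in this thread, used as assumptions rather than official statistics:

```python
# Rough fill-rate arithmetic. Figures are illustrative assumptions pulled
# from elsewhere in this thread, not official network statistics.

node_count = 6_500             # figure mentioned later in the thread
per_node_capacity_gb = 35      # assumed per-node storage quota

total_capacity_tb = node_count * per_node_capacity_gb / 1_000
stored_tb = 0.10 * total_capacity_tb   # "about 10% of capacity"

print(f"total capacity: ~{total_capacity_tb:.0f} TB")
print(f"implied stored data: ~{stored_tb:.0f} TB")
# With these assumptions: roughly 228 TB of capacity and only ~23 TB
# actually stored, which is why rewards per node are so thin right now.
```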
Chris mentioned that we are at 6,500 nodes on the network, and that for our own internal nodes we have seen resource spikes for the last 18 hours. He also mentioned there is no indication of data loss on the small number of addresses we are tracking, and that he will add more addresses soon. He also said that download performance has been very good and fast, and our uploads are going well.
I can confirm. 20 GB download in under 20 minutes. No data loss. Uploads still going great. I’m seeing a lot of activity at the moment too. Must be people trying to get in some cheap uploads.
There is an aspect that might be coming into play: people having to shut down nodes because of the network capacity being used. I've had to trim from 30 nodes to 20 because my 80 Mb/s down / 20 Mb/s up connection was being intermittently saturated. (I'm getting an upgrade soon, but that doesn't help just now.)
On Saturday and Sunday I had to reset everything and up the node count by 5. It was at 30 by Monday morning. On Tuesday I reduced to 20, but the traffic is higher now than it was then.
This is right now:-
If there are no howls of anguish from the rest of the household tonight I’ll let the 20 continue but if anything is disrupted I’ll have to reduce.
I’m going to look at QoS on my MikroTik, but from what I remember it’s complicated. And that isn’t an option at my friends’ houses where I’ve put RPi4s. I’ll have to casually ask how their internet is doing! If I wreck their internet life they will just unplug the devices, and it will take some work to get the trust back.
I can imagine there are other people in a similar situation but with bigger numbers.
I can also imagine that we could end up in a vicious cycle: people having to shut down nodes because of internet usage, which makes the remaining nodes busier, which causes more to be shut down.
This may already be what’s happening, but it could also be a sign that all the testing of an almost empty network is invalidated as the network fills up.
Maybe the 35 GB (or whatever the figure is) node capacity needs reducing considerably, or maybe there's a more fundamental problem with this implementation and how much bandwidth it requires.
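To make that concrete, here's a rough sketch of the per-node share of a home uplink. The connection figures are the ones from my own line above; the averaging is my own simplification and it ignores protocol overhead and bursts, so the real saturation point arrives sooner than these numbers suggest:

```python
# Rough per-node bandwidth share on a home connection. The line speeds
# are the ones from my connection above; node counts are the ones I have
# been running. Averages only - bursts will saturate the uplink earlier.

down_mbps, up_mbps = 80, 20

for node_count in (30, 20, 10):
    print(
        f"{node_count:>2} nodes -> "
        f"{down_mbps / node_count:.1f} Mb/s down, "
        f"{up_mbps / node_count:.1f} Mb/s up per node"
    )
# At 30 nodes each node gets well under 1 Mb/s of upstream on average,
# so any replication burst saturates the uplink and everything else in
# the house suffers.
```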
As the network has been so unrepresentative since last March, it’s hard to assess viability at all.
It would be great if the team were more open about what they're seeing, how they understand what's going on, and what they have in mind going forward, as it once was.
We're left guessing, and it doesn't build confidence.
I believe that in an ideal situation we would all host just a single node per device. The bandwidth and storage usage per node should be high enough to make it expensive to host multiple nodes, but we also want every device to be able to participate in this network, including phones and possibly small, low-power devices, especially on weaker networks.
Centralization of nodes on a single device is good for testing the network, but ultimately we want it to be as decentralized as possible.
Or a minimum ANT store cost?
I’d take that over the old incentive any day.
Perhaps the reward curve could be tweaked, rather than hard-coding a minimum cost. At the extremes, which we are mostly still dwelling in, it will always feel a bit strange. As we tend toward a more average fill rate, I suspect we will be less concerned.
It’s a good experiment either way.

