250k nodes drop in an instant or something else?

https://network-size.autonomi.space/

From 450k nodes, down to 200k, and back up to 325k, in 1 hour?

4 Likes

Maybe reboot of the machine collecting statistics?

5 Likes

If that’s the case, we should see the reported node count return to a similar level relatively quickly, no?

Who runs the machine? They could check its logs for pertinent info.

Already mentioned in the Discord. A restart. Also, it takes time for the nodes the stats are gathered from to stabilise.

1 Like

To be clear, a restart of the machine running https://network-size.autonomi.space/?

5 Likes

Yes, it’s not reflective of a drop in the network, but a drop in the individual node/machine collecting the data.

7 Likes

sorry - didn’t expect this to be picked up so fast :smiley: maybe I need to be more careful with my network-estimate-collectors :open_mouth: …

but interestingly I would have expected the same:

it’s just around 100 nodes or so whose data gets combined to form that estimate, and they were restarted about 3h ago

funnily, the new total seems to settle a fair amount of nodes lower than the previous estimate

I’m getting the network size estimate from the metrics server looking at the ant_networking_estimated_network_size parameter
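For anyone curious how that value is pulled out of a node's metrics, here is a minimal sketch. The metric name `ant_networking_estimated_network_size` comes from the post above; the sample scrape text is illustrative (a made-up value in Prometheus exposition format), not output from a real node.

```python
from statistics import median

def parse_metric(metrics_text, name):
    """Extract a single gauge value from Prometheus-format metrics text.

    Comment lines start with '#', so matching on the metric name prefix
    skips the HELP/TYPE lines; the value is the last field on the line.
    """
    for line in metrics_text.splitlines():
        if line.startswith(name):
            return float(line.rsplit(" ", 1)[-1])
    return None

# Illustrative scrape output (not a real node's response)
sample = """\
# HELP ant_networking_estimated_network_size estimated network size
# TYPE ant_networking_estimated_network_size gauge
ant_networking_estimated_network_size 431217
"""

# With ~100 nodes you would collect one value per node and combine them,
# e.g. with a median to dampen outliers from freshly restarted nodes.
estimates = [parse_metric(sample, "ant_networking_estimated_network_size")]
print(median(e for e in estimates if e is not None))  # prints 431217
```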

5 Likes

Thanks for running the service and sharing the info.

3 Likes

The graphing software should be smarter than that. If a reset takes place, the graph should not show a steep dive like that, but just empty space for the duration of the reset. Then it would be more obvious what happened.

4 Likes

if I ever get some additional time I’ll sort this :smiley: for now I struggle to get enough sleep and I’m afraid this is not at the top of my priority list … xD

(hey, but that’s a really smart suggestion for a quick fix; I’ll look for a parameter like “live-time” in the metrics server and not record the data point in my monitoring when it’s too small :slight_smile: … let me see if that’s doable in no time …)

ps: ant_node_uptime - my monitoring gets an upgrade :slight_smile:
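The quick fix described above could look like this: skip the sample when `ant_node_uptime` is below a warm-up threshold, so the monitoring records a gap instead of a false dip. The 30-minute window here is an assumed value, not something from the metrics server.

```python
MIN_UPTIME_SECS = 30 * 60  # assumed warm-up window before estimates stabilise

def usable_estimate(uptime_secs, estimate, min_uptime=MIN_UPTIME_SECS):
    """Return the node's size estimate only if the node has been up long
    enough; otherwise return None so the data point is not recorded and
    the graph shows empty space rather than a steep dive."""
    return estimate if uptime_secs >= min_uptime else None

print(usable_estimate(120, 431_000))    # freshly restarted -> None
print(usable_estimate(7_200, 431_000))  # warmed up -> 431000
```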

6 Likes

All true,
but I would suggest @JimCollinson and the rest of the team have multiple conflicting priorities at this time and this is unlikely to be near the top :slight_smile:

1 Like

This ain’t our graph… we have our own dashes

3 Likes

Maybe an odd question, but wouldn’t it be great to share this dashboard with the community? I don’t think there’s much confidential data, and I’m sure there are a lot of brilliant people within the community who would find this information useful.

3 Likes

Hey guys, here is some data from our end! The top chart shows node count over 2 weeks, while the other chart shows over the last day. As you can see there was no significant shrinkage! Hope this clears it up! :grinning:

6 Likes

Yeah, we would like to, but it’s also critical for internal monitoring, so we can’t have it getting hammered with refreshes I’m afraid!

Hence the reason we are only sharing screenshots

5 Likes

Anyone who wants a good estimate of the current network size should use @bochaco 's excellent Formicaio GitHub - bochaco/formicaio: Ants are social insects that live in colonies and are known for their organisation and cooperation.

It has a good network size indicator in the top summary and also the figure that each node thinks it is seeing

Note the second-from-right node above: it thinks we have >0.5 MILLION nodes !!!

3 Likes

I found it did too; it seems not to recognise nodes that disappear, and since nodes are often restarted, it kind of double-counts them for a long time. So the estimate is counting that person’s nodes twice now. EDIT: not sure how long before they drop out of the bins; maybe days, maybe sooner for some and never for others.

@JimCollinson Do your graphs use the estimate as well? If so, they will be suffering from this. Those stopped nodes will take a fair time to drop off the estimate.

@riddim A good way to reduce this overestimation is to restart the 100 nodes every so often. For example, have 101 or more nodes and restart one every minute; that’s 100 minutes to sequence through them all.

Or better, have 110 nodes and restart one every minute in sequence, then take the estimate from the 100 oldest nodes. This gives the newly started nodes 9 to 10 minutes to settle on a reasonable, and this time more accurate, estimate before they are sampled.
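The rotation scheme above can be sketched as follows. This is a simulation of the sampling logic only (the 110/100 split comes from the suggestion itself); the actual node restart and metric values are stand-ins, and restarting real nodes would of course go through whatever process manager is in use.

```python
from collections import deque

NUM_NODES = 110      # total nodes run
SAMPLE_OLDEST = 100  # only the oldest 100 contribute to the estimate

# Nodes ordered oldest-first; each entry is (node_id, current_estimate).
# The estimates here are placeholder values for illustration.
nodes = deque((i, 430_000 + i) for i in range(NUM_NODES))

def restart_and_sample(nodes):
    """One tick of the rotation: restart the oldest node (it moves to the
    back with no trusted estimate yet), then average the estimates of the
    100 oldest remaining nodes, skipping the newest ones still warming up."""
    node_id, _ = nodes.popleft()
    nodes.append((node_id, None))  # freshly restarted: estimate not usable
    oldest = list(nodes)[:SAMPLE_OLDEST]
    values = [est for _, est in oldest if est is not None]
    return sum(values) / len(values)

avg = restart_and_sample(nodes)
print(round(avg, 1))  # average over the 100 oldest nodes
```

Running one tick per minute cycles through all 110 nodes in 110 minutes, so each node gets roughly 10 minutes to warm up before it re-enters the sampled window.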

3 Likes

Or is the bin calculation no longer valid due to the reduction in connections?

ha! back to 430k … it was a slow increase in the end, but it seems to settle at a very similar level to before the node restart … (took 24h to get back to the old value this time)

2 Likes