4MB Chunk Network LIVE — Reset Your Nodes!

Jim, for me things stabilized to the levels of last week’s network and I launched another 300 nodes.



2 Likes

Once churning reduces, so does the effect, but it is still significantly higher. But of course in the real network there will be churning as people turn off their computers for the night, do Windows updates, etc etc etc

The 4MB chunks do cause more B/W usage per node involved in the churn. With 1/2MB chunks, 8 nodes shared the upload of that same 4MB of data, compared to a single node uploading a whole 4MB chunk in this network. So with smaller chunk sizing the upload load is more spread out across nodes.

This is fine with better upload speeds, but for the average connection (a worldwide average skewed upward by the 10G/5G/2.5G/1G links) it is very noticeable compared to previously. For DO droplets the difference between 1/2MB chunks and 4MB chunks is minimal, but for home nodes on sub-100Mb/s uplinks it is very much noticeable and can max out routers much more quickly.
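
Rough back-of-envelope of what that means per node during churn (chunk sizes and node counts as above; the 20Mb/s home uplink is just an assumed example figure, not a measurement):

```rust
// Back-of-envelope: seconds for one node to upload its share of a churned 4MB of data.
// Chunk sizes and node counts are from the post above; the 20 Mb/s home uplink is an
// assumed example figure.
fn upload_seconds(per_node_mb: f64, uplink_mbit_per_s: f64) -> f64 {
    per_node_mb * 8.0 / uplink_mbit_per_s
}

fn main() {
    let uplink = 20.0; // Mb/s, assumed home connection
    // 1/2MB chunks: 4MB is spread over 8 nodes, each sending 0.5MB.
    println!("1/2MB chunks: {:.1}s per node", upload_seconds(0.5, uplink));
    // 4MB chunks: one node sends the whole 4MB itself.
    println!("4MB chunks:   {:.1}s per node", upload_seconds(4.0, uplink));
}
```

Same total data moved either way, but the big chunk ties up a single uplink roughly 8x longer, which is where home routers start to choke.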

3 Likes

That’s good.

Yup, letting things take a breather while we analyse/understand yesterday’s crunch.

Will keep you updated through the day as we learn more and progress.

3 Likes

RAM usage too? - I’m currently seeing 25-45% RAM utilization (by the whole system…) with 10% of nodes running

do we keep the same number of chunks in RAM as cache now that the chunks are way larger?

I was thinking today @JimCollinson that this configuration in the wild reminds me of a control system with too much positive feedback: when one of the inputs changes by a tad too much, the other outputs start acting wildly and hitting limits.

In the DO droplets test the upload speed is high enough that this effect doesn’t appear, but out in the wild, with most people having 1/10th or much less of that upload b/w, the effects caused by delays and b/w hitting limits cause knock-on effects in other nodes.

We mostly see this when churning is happening a little too much: nodes get added in too large a quantity and then get booted by the net, or get switched off by users because of b/w limiting, and that causes this effect.

@JimCollinson Also remember that the ones having the best experience are those with quite large network resources, like Dimitar who has 600Mb/s upload. The world average is 48Mb/s upload, and that is lopsidedly high because the 1Gb/2.5Gb/5Gb links pull the average up: it takes 100 x 10Mbit uploads to match one 1Gb link. We need the mean and not the average. In any case the average person gets swamped if the network ever gets a little perturbed, and then positive feedback causes things to go wild. We could demonstrate that if one of the big multi-thousand-node operators would just turn off their nodes within a short time.
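
To illustrate the skew being described here (the sample speeds below are made up for illustration, not survey data), a quick sketch:

```rust
// Illustrative only: made-up upload speeds (Mb/s) showing how a few fast links
// drag the mean up while the median stays near the typical home node.
fn main() {
    let mut speeds: Vec<f64> = vec![10.0, 10.0, 15.0, 20.0, 20.0, 30.0, 50.0, 100.0, 600.0, 1000.0];
    let mean = speeds.iter().sum::<f64>() / speeds.len() as f64;
    speeds.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let mid = speeds.len() / 2;
    let median = (speeds[mid - 1] + speeds[mid]) / 2.0;
    println!("mean:   {:.1} Mb/s", mean);   // ~185, pulled up by the two fast links
    println!("median: {:.1} Mb/s", median); // 25, closer to the typical node
}
```

Which is why a 48Mb/s "average" says little about what the typical home node can actually push.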

Smaller chunk sizes keep this positive feedback better in check, because 4MB being retrieved (by churn or a client) is spread across 8 nodes (1/2MB) or 4 nodes (1MB), and thus hitting B/W limits will not happen anywhere near as much.

4 Likes

you mean median?

not sure this works in the current network … because it is still far from full, and chunks only get deleted when nodes are approaching their chunk limits …

…I would offer to pull the plug - I have a pretty responsive system … but at this second I only have a couple hundred nodes … :wink: and am by far not one of the largest … if @Darius or @Sooris pulled the plug that could have an effect :smiley:

2 Likes

You may be right, but I think we should gather all the info we can about what occurred before drawing specific conclusions. There was a cascade effect, and it was not due to a single issue.

Being analysed at the moment.

4 Likes

Welcome to the world of control theory. Control systems are a whole field in themselves, and engineers are always working with them.

Yeah, if you could keep things humming for the moment that would be appreciated. We may well resume uploads, but only once we see things stabilise and show steady growth. Which is looking ok at the mo to my untrained eye.

1 Like

The prediction here is that we would see a repeat of earlier, or worse, and the network would go belly up in cripple mode.

actually I think a mass join would be the worst event

join → new nodes need data → old nodes upload like crazy and decrypt + do stuff

1 Like

It uses more RAM, but not much more. 600 nodes:



3 Likes

hmmm - 300-350MB per node or so

… possible that the effect on your system is smaller because you already have/had huge nodes, since your machine is so large and larger machines led to larger nodes in the past …
…usually my nodes were around 100MB … and now they’re 150-250MB from what I’m seeing … that’s not supercool …

1 Like

Ah, the memory increase is expected due to the record cache held in memory. Set to 50 records.

50 x 1/2MB is a 25MB max cache size.
50 x 4MB is a 200MB max cache size.

On the big machines that were at 300MB already it might not be noticed, since the OS allocates much more RAM anyhow.
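
Sketch of that cache arithmetic (the 50-record cache is as stated above; the record sizes and the 20-node count are just example inputs):

```rust
// Cache RAM per node = records_cached * max record size.
// The 50-record cache is from the post above; the node count is an example input.
fn cache_mb(records_cached: u64, record_mb: f64) -> f64 {
    records_cached as f64 * record_mb
}

fn main() {
    let records = 50;
    println!("1/2MB records:   {:.0} MB per node", cache_mb(records, 0.5)); //  25 MB
    println!("4MB records:     {:.0} MB per node", cache_mb(records, 4.0)); // 200 MB
    println!("20 nodes at 4MB: {:.0} MB total", 20.0 * cache_mb(records, 4.0)); // ~4 GB
}
```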

4 Likes

WTF :face_with_symbols_over_mouth:

and nobody cared to mention this 170MB additional memory allocation per node? xD

for 20 nodes that’s already 4GB ram (?)
so raspberry pi’s are dead now as node runners :smiley:

Go Energy-Intense Large Machines!

…but even for my powerful large node runners that’s a hell of a lot of RAM usage … going to 256GB in the cloud is quite expensive …

…guess I need to switch to self-built to decrease that stupid cache size in the future …

6 Likes

My RPis are 8GB, so suffer! You should have gotten the 8GB versions, get with the times

/jk :rofl:

3 Likes

the price difference between the 4GB version and the 8GB one is just unreasonably high … and who would need 8GB of RAM on an RPi xD …

1 Like

I do for my old computer replicas / simulations

2 Likes

I used to use a PDP-10 a lot in the seventies, and someone did a replica of the control panel with the lights and switches and an injection molding of the actual panel. Saw it and had to have it LOL. So much time spent on that machine, it was a real workhorse.

3 Likes

Who deployed the shun gun?

I have woken up to a catastrophe and the hurricane hasn’t even arrived :exploding_head:

7 Likes