I am a bit surprised to see people still scaling down. There isn't really an incentive to push for the maximum node count, so why run at the limits of your bandwidth, or even close to them? I'd say scale down so that even during peaks you stay below 40% resource usage. Let the nodes fill up and let's see if we find more unwanted behavior. Keep in mind that by running as many nodes as possible, you're subsidizing cheap uploads, something we were so happy to get rid of. Also, continuously having to scale down as the network fills adds load to the network during peak times.
Right now, we need decent, persistent nodes on the network, and we need to see the network fill up. There is nothing wrong with scaling back up when the demand is there and it makes economic sense.
It looks like we’ve also sadly got stuck with our upgrade again.
~78% are on the new version, but ~21% look to be stuck.
I wanted to ask @VaCrunch: are your Windows nodes upgraded? One thing I've always wondered is whether the lack of auto upgrades on Windows contributes to us getting stuck. I know you're running a lot of nodes, so it would be good to rule that one out here.
That might account for it then. Sorry, I know it’s a PITA to upgrade manually. I am really hoping to release the Windows auto upgrades feature on Thursday.
Actually, you may want to hold off on upgrading until then, because after that you won't have to upgrade manually again.
It’s good that we can account for why we’re stuck. Thanks.
I honestly wasn't expecting to be using this much bandwidth. Back when we had 1.5 million nodes, bandwidth was not an issue at all. Now that we are at 2,500 nodes, my ISP is probably wondering whether I have become a DDoS botnet.
Maybe this indicates the need for nodes to regulate their own resource use and even shun ones that don’t keep their communications within sustainable levels.
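To make the self-regulation idea concrete, one simple approach would be a token-bucket cap on a node's own outbound bandwidth. This is just a sketch of the general technique, not anything the current node software actually implements, and the class and parameter names here are made up:

```python
import time

class TokenBucket:
    """Token-bucket limiter: sustains `rate` bytes/sec, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate            # refill rate, bytes per second
        self.capacity = capacity    # maximum burst size, bytes
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Return True if sending `nbytes` right now stays within the budget."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# A node could delay or refuse transfers once its bucket is empty, and
# peers could shun nodes that keep pushing traffic past agreed limits.
bucket = TokenBucket(rate=1_000_000, capacity=5_000_000)  # ~1 MB/s, 5 MB burst
print(bucket.allow(4_000_000))  # within the initial burst allowance
print(bucket.allow(4_000_000))  # bucket nearly drained, so this is rejected
```

Shunning could then be a matter of peers tracking who repeatedly exceeds such a budget, rather than each node only policing itself.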