With week one of Phase Two down, it’s now time to reset your nodes and help us test a new 4 MB chunk size.
All our internal testing to date indicates that this is the optimal chunk size for the Network, with it outperforming all other options in almost all metrics. And you can help us validate this!
32 GB Nodes with 4 MB chunks. Up from 512 KB in the last test.
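For a rough back-of-the-envelope sense of what that change means per node (assuming a full 32 GB node, binary units, and ignoring replication and metadata overhead, none of which is specified here):

```python
# Rough chunk counts per node (assumptions: a full 32 GB node, binary units,
# no replication or metadata overhead).
NODE_SIZE_GB = 32
OLD_CHUNK_KB = 512
NEW_CHUNK_MB = 4

old_chunks_per_node = NODE_SIZE_GB * 1024 * 1024 // OLD_CHUNK_KB  # 65,536 chunks at 512 KB
new_chunks_per_node = NODE_SIZE_GB * 1024 // NEW_CHUNK_MB         # 8,192 chunks at 4 MB

# Each chunk is 8x larger, so a full node holds 1/8th as many chunks.
print(old_chunks_per_node, new_chunks_per_node, old_chunks_per_node // new_chunks_per_node)
```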
What is the current recommended software to see how your nodes are doing?
I’m using Vdash and it shows that the nodes I am currently running have been shunned much more than with the previous testnet. It’s now 2-10x the shunning, due to “ReplicationFailure”. Is that OK or not?
Paradoxically, the node that has been shunned the most (10 times) is the only one holding records so far. But I’m not sure whether Vdash is still seeing things right.
I think vdash is seeing things right. I’m seeing a lot of shunning with only 2 nodes running so far. There’s way more outbound internet activity and way more GETs, and each GET is more data because of the larger chunk size. The heavy outbound traffic seems to have subsided now, so I think it was due to lots of nodes joining at once. I started these 2 nodes only 4 minutes after the post saying it was live. I think it’s settled quite a lot now.
Given that the nodes have more records than normal at this early stage, and that the records are larger, I think this network started with more data in it, which then got pushed to the new nodes.
A lot of the records are 4.1 MB. Some are just a few bytes and some are around 3.1 MB, but there are a lot of 4.1 MB ones.
I have 1 nano already.
I see that the nodes are also keeping more connections open than before. With just 2 nodes I had 3,000 connections open. This will probably limit the number of nodes most people can run even if they have the storage. Maybe that is by design: with each node able to store more, not as many nodes are needed, and there is value in keeping lots of connections open.
Another observation: my AWS t4g.micro instance that I run 1 node on, which is usually fine, went bananas with outbound traffic and seems to have totally locked up. I can’t log in, and CPU is at 100% with no network activity now.
So if you are running on cloud servers, be aware that CPU and network usage will be higher with these nodes and you may have to run fewer of them. And watch your data egress costs on AWS!
And now my nodes are getting shunned. I could run 160 nodes with the 2 GB and 32 GB node sizes using the 1/2 MB max chunk size. But now, with just 10 nodes, my upload is totally maxed out and they are getting shunned by other nodes at a steady rate.
This is an example of internal tests being done on datacentre servers with optimised 1 Gb/s or 10 Gb/s networking between droplets, compared to REAL WORLD internet. My internet is good compared to some other retail internet connections.
I will now have to dial back to 5 or fewer nodes. This 4 MB max chunk size has crippled my internet, and even though it works great on Digital Ocean droplets, it’s useless for real-world home setups.
@JimCollinson this network is for data centres and not home systems, which is no good for decentralisation. The 32 GB (64 GB real) node size is not good for home internet, and the 4 MB max chunk size is the death knell for home nodes. An 8 GB real node size, or maybe 16 GB, is so much better for home nodes, and the max chunk size should be reduced to something like 1 MB, or kept at 1/2 MB.
I’m not even sure it’s good for datacentre-based nodes! Or rather, they will be hammered in much the same way.
I see the network shot up to 70k nodes and then crashed down to 8k. I suspect that was the big boys starting their big rigs with the same number of nodes they were used to, which immediately caused their servers running dozens or hundreds of nodes to crash.
Yes. I am only a medium-sized boy, but with the last version I was running around 900 nodes; with this version everything went crazy and crashed. After a reset and fresh start I am barely running 300 nodes, and I may need to go even lower.
It may be that we were just too aggressive too soon with the uploads, before the network got to a size that could sustain it. We are ramping down the uploads to give it some time to settle and breathe for a beat.
NOPE. 16 GB per hour upload and barely 2 GB per hour download for just 10 nodes.
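For a rough sanity check on that upload figure (assuming decimal gigabytes and a sustained average rate, which is an assumption on my part):

```python
# Convert the reported 16 GB/hour upload into an average bit rate
# (assumes decimal gigabytes and a steady average; real traffic is bursty).
gb_per_hour = 16
mbps = gb_per_hour * 8 * 1000 / 3600  # GB -> gigabits -> megabits, spread over 3600 s
print(round(mbps, 1))                 # ~35.6 Mb/s, enough to saturate a ~35 Mb/s uplink
```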
And if you want real world, then this is it. The fact that you’re slowing the upload of files now shows the real world is not working well.
Running 1/16th the nodes (160 down to 10) is not just a minor issue.
Larger nodes in 10 years, when home internet has caught up to Digital Ocean, would be fine. But if home nodes are bad now, then what is the point of launching now rather than in 10 years?
We are crunching the numbers on it and looking at what happened… something squashed the network at 1600 UTC, and we’re looking into that. We started the ramp-up then, so that was my first hypothesis.
And I was just saying that it’s not good. It’s the normal churning that is killing it. Going up to 4 MB per chunk means the load is spread across 1/8th the nodes compared with 1/2 MB chunks.
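A minimal sketch of that ratio, using a hypothetical 100 GB of uploaded data and assuming the same replication policy under both chunk sizes (neither is spelled out in this thread):

```python
# For a fixed amount of stored data, chunk size sets how many chunks the
# replication/churn load is spread over. The 100 GB figure is hypothetical;
# assumes the same total data and replication policy under both chunk sizes.
data_gb = 100
old_chunk_mb = 0.5
new_chunk_mb = 4.0

old_chunks = data_gb * 1024 / old_chunk_mb  # 204,800 chunks at 1/2 MB
new_chunks = data_gb * 1024 / new_chunk_mb  # 25,600 chunks at 4 MB

# 8.0 -> 1/8th as many chunks, each costing 8x as much to transfer when it moves.
print(old_chunks / new_chunks)
```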
Just letting you know how it’s going for home nodes. The upload volume has not reduced; it’s still maxing out my 35 Mb/s link. It seems 10 nodes is my maximum if I want to continue without causing more churning.
Build for the future, but bring along the present to have a happy network. Without the present, there is no network to grow into the future.
Remember, Jim, you can use a dual update method to increase node size/chunk size when the timing is better for it. There’s no need to implement 10 years in the future now. In 10 years home internet will be more like Digital Ocean’s networking structure/speeds. My internet only works now because of the tools and networking knowledge I have; it’s no good for ma & pa trying to run nodes.
The EU does not represent the majority of the world for internet connectivity, which is why Starlink cannot make enough units to sell. They have provided tens of millions (or is it hundreds of millions now?) of dishes, because the majority of the world doesn’t even come close to the EU.
So it’s not 20 million, but more like 5 billion people who cannot get internet good enough for this network.
Yes, but it is logical that if the network is successful for 3 billion people, there will be strong pressure from the remaining 5 billion to improve Internet connectivity. Whereas if it’s the other way around, your internet will stay slow for a longer time because it will be “enough”…
Sorry, but just because a few have no issues doesn’t mean we should go by that.
LOL, there is already HUGE pressure; it doesn’t need pressure from the 10% of those who will run nodes, since it’s already there from the majority anyhow.
Autonomi is meant to run on as many machines as possible; your reasoning limits that to the few. Even people from the EU were complaining about not being able to run nodes, or having to reduce to 1/10th the number of nodes.
Make Autonomi work for the majority now and increase node size/chunk size when the time is right; that way you get greater adoption of the network.
And it’s not 3 billion, but rather those in the EU who are able to get the great internet that many seem to have. What is that, a few hundred million?