A good box for a 3-5 Gbps ISP business connection, if you can get one in your area.
Already have 5 Gbps, so all good.
Just curious, what's the highest number of antnodes folks have been able to run on a single container/VM/physical machine out there in the wild?
Feel free to share your hardware specs off that machine too.
I am attempting to run 2000 to 3000 antnodes just now on my single Intel E5 server… will provide an update if successful.
I suspect even the beefier modern 5nm AMD EPYC CPUs can probably cross 10K nodes on a proper dual-socket server?
Once the hotfix is released, likely 50/8 × 500 = 3125 nodes before hitting CPU threshold limitations.
I have removed the graphical environment and access my machines with this; there is a built-in terminal through the browser:
There is also a usage graph:
Check out the Dev Forum
Managed to hit 2000+ nodes on this host (new record for myself).
Probably can get to 3000-3750… but will try that test a bit later, as it took 4+ hours to spin up 2245 antnodes using 270 GB RAM. NAT session tables stayed under 200K entries on the router.
313 W - 240 W = 73 W additional usage for 2245 antnodes ≈ 32.5 mW per antnode (nice!)
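A quick sanity check of that per-node power arithmetic (all figures are the measurements quoted above; nothing here is measured by the script itself):

```python
# Back-of-envelope per-node power draw from the figures above.
total_watts = 313      # wall draw with 2245 antnodes running
baseline_watts = 240   # idle draw with 0 antnodes
nodes = 2245

extra_watts = total_watts - baseline_watts   # 73 W attributable to the nodes
per_node_mw = extra_watts / nodes * 1000     # convert W -> mW

print(f"{extra_watts} W extra / {nodes} nodes = {per_node_mw:.1f} mW per antnode")
# -> 73 W extra / 2245 nodes = 32.5 mW per antnode
```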
Most folks aren't going to run a 50/50 split of nodes in two different configurations at a single location, so the data to compare properly will be limited (all else equal).
Autonomi does run --home-network nodes, but it's a 1:10 ratio compared to the remaining public nodes on DO. They are both earning; however, private nodes currently show a slightly lower average payment-received count. The comparison isn't valid, as the sample size for private nodes is very small here compared to the public nodes.
Having said that, both configurations do earn, though it's better to port forward than to use --home-network. As stated before, --home-network is the least preferred option; port forwarding and UPnP are the preferred methods here.
i.e. with --home-network, why be dependent on other nodes more than necessary for your communication/transport requirements? It's yet another layer where things could go wrong, potentially resulting in lower earnings.
Thanks for the reply! I'm now looking at some different setups. Found this one:
HP ProLiant DL380 Gen10
- 2x Intel Xeon Gold 6146 3.2 GHz (12c/24t per socket)
- 512 GB DDR4 RAM (16x 32 GB)
- Room for 8x 2.5-inch disks. No brackets included
- Onboard SATA controller
- Redundant 1600 W power supply
- 1x HPE Ethernet 10Gb 2-port 562FLR-SFP+ Adapter + 1x HPE 562 SFP+ Ethernet 10Gb Dual-port Adapter
For €1,250, this comes a lot closer to your 2699 v3 setup. Not sure if more than 512 GB RAM would be needed looking at your current numbers. What do you think?
A dual socket Intel Xeon Gold 6146 (24 cores total) is roughly 7% slower in performance compared with a dual socket E5-2699 v4 (44 cores total) (Intel Xeon Gold 6146 vs Xeon E5-2699 v4 [cpubenchmark.net] by PassMark Software).
Having said that, the recurring cost due to TDP requirements is slightly lower in my setup, though my CPU is 2 years older (by release date).
If I can run 2000 nodes at < 50% CPU, with an average memory requirement of 130 MB per antnode, you are right around 256 GB RAM. Say at a future date, if and when the CPU limit threshold is altered or removed, 512 GB of RAM should suffice to make maximum use of your hardware (on both the CPU and memory front). Don't forget that with 2000 nodes you need up to 125 TB of storage capacity too, though that limit will likely never be hit (network being full).
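To make that headroom math explicit, here is a small capacity-planning sketch. The 130 MB/node figure and the 125 TB for 2000 nodes are taken from the posts above (implying ~62.5 GB max storage per node); treat them as working assumptions, not spec values:

```python
# Capacity planning sketch using figures quoted in the thread.
# Assumptions (from the posts, not from any spec):
#   ~130 MB RAM per antnode; 125 TB total storage for 2000 nodes,
#   i.e. 62.5 GB max storage per node.
nodes = 2000
ram_per_node_gib = 130 / 1024           # ~0.127 GiB per node
storage_per_node_gb = 125_000 / 2000    # 62.5 GB per node (derived)

ram_needed_gib = nodes * ram_per_node_gib
storage_needed_tb = nodes * storage_per_node_gb / 1000

print(f"RAM:     ~{ram_needed_gib:.0f} GiB for {nodes} nodes")
print(f"Storage: up to {storage_needed_tb:.0f} TB if the network ever fills")
```

Which is why 256 GB is borderline for 2000 nodes, and 512 GB leaves room for more nodes or fatter nodes later.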
However, I went with higher memory just in case antnode's requirements change, choosing to maximize the density of RAM sticks in the system: if you have to increase RAM in future and have no spare slots, you end up throwing away the old RAM sticks (wasted $$$), so I opted for the highest memory capacity per slot right out of the gate. Also, I have other home-lab project VMs that require higher memory in general.
One is into DC-grade HVAC to keep that EPYC cool in dual-socket mode, so one will need air-cargo-handler-grade ear protection if the fan systems are average quality. Either that, or one employs liquid cooling.
I just did a quick check on RAM per antnode after the config settled; seems 130 MB per node is tight for UPnP? Thoughts? Do you know if antnode RAM consumption differs between UPnP and port forwarding?
I am on an LXC Alpine container; however, for 2580 antnodes now, it's at 315 GB RAM = 125 MB RAM per antnode (port forwarded). This is running with a custom modified code base at home, recompiled to reduce file descriptors (soon to be released to the community early this week) (PR: Avoid scan entire sys during cpu threshold check no sysinfo upgrade by maqi · Pull Request #2639 · maidsafe/autonomi · GitHub).
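File-descriptor pressure is the constraint that PR targets. As a rough way to see why fd limits matter at these node densities, here is a sketch using only the Python standard library; the ~64 fds/node figure is purely an illustrative assumption, not a measured antnode value:

```python
import resource

# Rough fd budget check. With thousands of nodes per host, per-process
# and system-wide fd limits start to matter.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

FDS_PER_NODE = 64   # illustrative assumption, NOT a measured antnode figure
max_nodes_per_process = soft // FDS_PER_NODE

print(f"soft fd limit {soft} -> roughly {max_nodes_per_process} nodes "
      f"if each held ~{FDS_PER_NODE} fds within one process")
```

Antnodes run as separate processes, so the per-process limit applies per node, but the same arithmetic applies to the system-wide `fs.file-max` budget.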
However, for just 4 antnodes in your screenshot, the above PR won't make a big difference in memory usage. I don't think memory usage will differ much between UPnP and port forwarding, but I am not running UPnP at home, so I cannot accurately comment on that.
I think for antnodes, 44 cores would be much better than 24 cores. With thousands of nodes, significant CPU time will be spent on task switching, and fewer nodes per CPU core means less time wasted on that.
Benchmarks typically run one thread per core; our use case is far from that, so real CPU performance for nodes may be far from what generic benchmarks suggest.
I don't have enough hardware to do more tests, but I have AMD 8-cores and one 12-core of the same generation, and the 12-core performs at better than 1.5x the 8-core.
Sure, I agree there. I took only what was known (a generic multi-threaded PassMark score); otherwise I have no means of roughly estimating or extrapolating a suggestion for @anon75844067. It's a start, as opposed to real-world benchmarking of antnodes on every piece of hardware, which is impossible to do prior to procuring the hardware itself.
I am keeping an eye on context switching, playing around with pinning the antnodes to rotate between 2 cores each (bitmask), based on 22 buckets for a 44-core machine, etc., to see if CPU usage can be lowered some more without causing an increase in shunned or error rates at home.
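The 22-bucket pinning experiment described above can be sketched like this. The bucket layout is my reading of the description, and `os.sched_setaffinity` is Linux-only; how you enumerate antnode PIDs is left as an assumption:

```python
import os

CORES = 44        # total cores on the box (figure from the post)
BUCKET_SIZE = 2   # pin each node to a pair of cores

# 22 buckets of 2 cores each: {0,1}, {2,3}, ..., {42,43}
buckets = [set(range(i, i + BUCKET_SIZE))
           for i in range(0, CORES, BUCKET_SIZE)]

def pin(pid: int, index: int) -> set:
    """Pin a process to one of the 2-core buckets, round-robin by index."""
    cpus = buckets[index % len(buckets)]
    os.sched_setaffinity(pid, cpus)   # Linux-only affinity syscall wrapper
    return cpus

# On a real host you'd iterate over antnode PIDs, e.g.:
#   for i, pid in enumerate(antnode_pids):
#       pin(pid, i)
```

Keeping each node on a fixed pair of adjacent cores also helps the NUMA concern: a node's pages tend to stay local to the socket its cores belong to.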
I would not like to see the memory footprint of an antnode cross NUMA socket boundaries here either.
The 4 nodes are on my notebook. I am working on 11th- and 12th-gen Intel Core i7 NUCs, to see what I can run using Alpine, in the base config of 32 GB RAM and 1 TB (2x 512 GB) storage, with and without a window manager (headless vs. LXDE or similar). Current ISP max is a 1 Gigabit plan… I can expand these NUCs to 64 GB RAM and 2 TB NVMe SSD storage max…, so that's my challenge… There are a lot of these NUCs scheduled to fall off business leases in the next few years… SSD wear will be light for many of them. I'll take your advice and run the numbers… ty.
@Shu These two NUCs will get a special Alpine build with an LKM insmod and phy media format to increase durability, reduce power, and also speed up writes.
Q: What FS are you running?
I am running CephFS at home, where I present a mountpoint to the LXC container.
After some more tweaks, here are the latest numbers after it took 6+ hours to start up 3000 PIDs:
3000 nodes at 300 GB RAM at 340 W power draw within Alpine LXC.
Therefore:
102 MB RAM per antnode.
(340 W current - 240 W baseline) / 3000 nodes ≈ 33 mW per antnode
LXC CPU usage at 62% (artificially increased temporarily due to higher-density testing)
Do you have an estimate of how much of that baseline is now being used to run nodes?
The baseline still has the CPU running and doing work, probably at a lower CPU frequency. But now, for 62% of the time (just using your figure as an example), the CPU is running nodes; thus 38% is doing unrelated system work, i.e. approx 38% of the CPU baseline power.
I would now expect the baseline of non-node work to be lower than 240 watts by a not insignificant amount.
Wonder if you are able to do an estimate. Is the 240 watts baseline the CPU power or total system power? You'd need the baseline CPU power used to get that estimate.
Not sure if I followed completely, but in idle steady state, the machine draws 240 watts with 0 antnodes and nothing else running on it (i.e. CPU < 5%).




