Best Safe Node hardware

The goal should be to create the best network possible, not to have as many participants as possible or to see whether the lawn mower can run nodes.

Having as many participants as possible running on strange devices sounds like a sweet dream, but there is never any deep analysis of what that would mean for the network or how the devices would be affected. In the real world, when things hit the fan there will be consequences and effects; think about that, or we will be in for some ugly surprises.

The main goal should be to create the best possible network that is on 24/7. That means dedicated hardware; fans, noise, light and so on mean most people won’t keep their equipment on 24/7. Create the best possible network first and then see what can be added in the future; anything else doesn’t make good sense. Keep on dreaming, but don’t let the dreams cloud reality.


There is a plus side to having as many nodes as possible, and that is improved decentralisation in both geographic location and control.

I’ve always held that the device must be capable of helping the network, and that bad-node detection will shun any node on a device that is not capable. So it’s unnecessary to keep stating the disclaimer that the device must be good enough.

But more modern TVs often have more powerful computers than an RPi 4, so even those should be capable of running a node, using storage on the device/drive where the stored movies live. My 10-year-old TV (not a good enough CPU, I think) allows a USB drive to be attached or can access my NAS, so I expect more modern TVs will run a node quite happily.

And of course the best targets are what @wes pointed out (routers, NAS and similar devices)


Is there a guarantee that they will not constitute a large percentage, and that they will not cause instability or other negative consequences for the network? Have you given that deep thought based on something concrete?

Can’t we just launch a great working network first and then see what can be added? It has been over a decade in the making; it will be a miracle the day it launches. Can’t we focus on that and make it the best miracle possible? Save something for the future; maybe in a decade Autonomi could run on a TV box, fridge, lawn mower, personal AI robot or anything else. Let’s aim the network at dedicated hardware in the beginning and make it the best possible network we can, so that as many people as possible will adopt and start using it. Wouldn’t that be fantastic?

I think, by the nature of “it won’t run on it yet”, we are already following what you’re saying.

We’re not focusing on those systems. It’s not on the radar yet. I (we?) are just looking towards the future and what can be built and ways to go about it.

Computers first, devices later.


Yes, writing the apps for devices such as TVs to run nodes will be a longer-term thing, and yes, if the device is not good enough the network will shun nodes running on it. That is what bad-node detection does: detect nodes that cannot perform up to standard and be good enough for the network.

Also, as time marches on these devices will only improve in capability. I doubt many TVs today can participate, as they will be too old.

But the focus was not on those sorts of devices but on the other always-on devices like NAS boxes and routers. So fear not: people are not going to spend valuable time writing apps to run nodes on devices that are not capable, only to have them shunned.


The first few years, when people adopt the network and start using it, need to be great: nodes running 24/7 on good hardware with a good bandwidth connection. I hope we can focus on the goal of getting the network up and running as well as it possibly can, then possibly add things later. People running nodes while switching their PCs on and off can cause a less-than-great experience for users and put strain on the network. The aim needs to be dedicated node operators from the beginning, making the network and the experience of using it solid. Just getting the network to run on dedicated hardware is still just a dream until it launches.

Running nodes on routers, NAS boxes and other things, and how that will affect those machines’ performance on the workload they were meant for, is a different, interesting discussion.

I don’t think anything said here was saying otherwise. NAS boxes can run Docker now, and some here are working on that for their NAS. This will not change the focus, since these are individual users.

I feel you are underestimating how long PCs stay on. No one is going to encourage people to run nodes if the person is turning their PC on and off again within an hour or so, then repeating that throughout the day and night.

The option for people to run their nodes for part of each day is built into the network. If you do not allow this, we end up with a situation similar to that of most “decentralised” systems: control becomes too centralised. For instance, the BTC network is centralised in a few thousand nodes. Autonomi would just have more operators, but still relatively few compared to what is possible for a truly global decentralised network. Allowing people to turn off their computers without penalty will encourage many more to run nodes: some for parts of days, some for many days at a time, and some almost 24/7, going offline only for OS upgrades or perhaps holidays. But tell people they have to leave their computers on all the time and human nature will resist; they won’t run nodes at all.

So I think you are also overestimating the effect that turning off 20 nodes will have on the global network; a churn of 40 GB max across the whole world is like a small rock in a mining operation. Magnify this to one million nodes in one time zone over a period of 2 hours and it is still only a few TB maximum per 5-minute churn period around the world, which is small (one of our underwater cables to the US carries that every second). Thankfully this is unlikely, as not everyone in a time zone turns off their computer within a 2-hour window. Remember, turning off my PC in Australia does not cause a bottleneck in my town, just a series of packets transferred around the world from other parts of the world.
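As a quick sanity check on those figures, here is my own back-of-envelope sketch, assuming roughly 2 GB of stored data per node (which is what the 20-node/40 GB figure implies):

```python
# Back-of-envelope churn arithmetic (my own sketch; the ~2 GB/node
# figure is an assumption inferred from "20 nodes -> 40 GB" above).

GB_PER_NODE = 2

# One operator switching off 20 nodes:
small_churn_gb = 20 * GB_PER_NODE            # 40 GB, spread worldwide

# One million nodes in a time zone going offline over a 2-hour window,
# spread across 5-minute churn periods:
periods = (2 * 60) // 5                      # 24 churn periods
nodes_per_period = 1_000_000 // periods      # ~41,666 nodes per period

# A "few TB per churn period" then implies only a small fraction of
# each node's stored data is actually retransmitted; e.g. 3 TB per
# period works out to roughly this much per departing node:
mb_retransmitted_per_node = 3_000_000 / nodes_per_period   # ~72 MB

print(small_churn_gb, nodes_per_period, round(mb_retransmitted_per_node))
```

That last figure is consistent with the inactive-chunk point below: most of a departing node’s data never needs to move again.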

Added to this are the inactive chunks held on every node. For a node that turns on for an hour, the neighbours (in XOR space) that it got chunks from, because it became the closest node to them, will keep those chunks as inactive chunks. When that node turns off again, the neighbours simply reactivate those chunks; for the most part, no transfers are needed.

Thus the shorter someone leaves their machine on, the better the chance that few chunks actually need to be transferred during churn.
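A toy model of that mechanism may help; this is my own illustration, not the real safenode logic, and the replication factor K, the node IDs and the closest-K rule are all assumptions:

```python
# Toy model of "inactive chunks" (my own illustration, not the real
# safenode code): each chunk lives on the K closest nodes in XOR
# space; a node displaced from the closest set keeps an inactive copy,
# so a short-lived node's departure needs no re-transfer.

K = 3  # assumed replication factor

def closest(nodes, chunk_id, k=K):
    """The k nodes closest to chunk_id by XOR distance."""
    return sorted(nodes, key=lambda n: n ^ chunk_id)[:k]

nodes = {0b0001, 0b0100, 0b1000, 0b1110}
chunk = 0b0011

active = set(closest(nodes, chunk))              # {1, 4, 8} hold the chunk

newcomer = 0b0010                                # joins for an hour
nodes.add(newcomer)
new_active = set(closest(nodes, chunk))
transfers_on_join = new_active - active          # only the newcomer fetches
inactive = active - new_active                   # displaced node keeps a copy

nodes.remove(newcomer)                           # switches off again
reactivated = set(closest(nodes, chunk))
transfers_on_leave = reactivated - (new_active | inactive)

print(transfers_on_join)    # {2}: one fetch when it joined
print(transfers_on_leave)   # set(): neighbours just reactivate their copies
```

The only transfer happens when the newcomer joins; when it leaves, the displaced neighbour already holds the chunk and simply reactivates it.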

Remember this is worldwide, and the pattern is switching on, then off a long time later. Saying “switching on then off” makes it sound like machines turn on for minutes, then off again, rinse and repeat.

But nodes on routers and NAS boxes will work fine, and they will increase the number of always-on nodes in the global network. Imagine all those households with a home NAS who may not run nodes on their computers but can now run a few nodes 24/7. Millions upon millions of small NAS owners could run nodes.

  • Routers will likely run one or two nodes. Their job is routing (funny, that), which is sending/receiving/routing packets, and a node’s major job is sending and receiving packets, so it fits well with hardware designed for exactly that. The storage required will make some routers unsuitable, but routers that can take a USB drive, or use a network drive, are especially usable.

  • NAS boxes will have no issues running one or two nodes, and many can do more. And guess what: their hardware is optimised for sending and receiving packets (no routing) and for data storage. Again, some NAS devices will be unsuitable, but most modern NASes will eat it up. The Docker images some users are working on will make it easy as pie, and NASes that run Docker will certainly have the capability. Remember, Synology NASes come with a two-camera licence for 24/7 recording software without loading down the NAS.


I may have the option of getting a second 10 Gbps symmetric FTTH connection (8 Gbps guaranteed, though) with a different provider, for only 25€. (I currently have Orange, 1 Gbps symmetric; they don’t offer 10 Gbps in my area yet.) That would be with Digi, and if you use referral codes there’s a 5€ discount for 10 months (and subsequent discounts of 5x10€ for each person you refer in the future).
As you can see, it’s tempting and cheap enough to get an independent line just for the beta (and probably the mainnet after October) without tying up my personal connection, which I also use for work.

So now I would like to get some hardware (or at least consider options).
I have a Raspberry Pi 4 with a 1 TB NVMe drive (attached over USB, so probably not using its full bandwidth), which I also use for Pi-hole/AdGuard, a VPN (very little to no use) and minidlna (also very little to no use). Even though it’s pretty idle, I’m using it for 40 nodes in the last beta, which it seems able to handle: I see CPU spikes to 16, which means 100%, but it’s usually around 6–8 (50%; I don’t really understand the number because it has 4 CPUs, but I guess it’s scaled to threads or something like that). Anyway, nothing has exploded, and I have good cooling and heat sinks.

As I said, I’m looking for something more appropriate that frees up my RPi4 for local home use and tinkering like in the past. I was considering these options:

  • RPi5 with SSD/NVMe (probably get a cheap SSD for the RPi4 and reuse the 1 TB NVMe I have there). Probably 125€ with case, fans and heat sinks, plus whatever I spend on the disk; let’s say under 200€. I foresee CPU bottlenecks here, same as with the RPi4, and moreover the network is capped at 1 Gbps.
  • Intel NUC with an i7: I’m seeing 8th-gen second-hand ones for 300€, with 32 GB RAM and a 512 GB NVMe. I think the bottleneck here would be the network. Is there any option to plug something in to get more than 1 Gbps?
  • Look for other options: something with 2.5 Gb Ethernet? Something with two NICs and bonding? Every such option sounds too expensive.
  • Combined option, recycling plus buying something cheap: I also have an RPi3 doing nothing (it’s actually plugged in but shut down). Use the RPi3 for “home purposes” (Pi-hole + OpenVPN + minidlna), move the RPi4 to the Autonomi-only connection (so I expect 40 to 50 nodes there), and get the 8th-gen NUC, which with the i7 I expect to be far more powerful and no longer CPU-bottlenecked. From there, consider whether (and where) to move forward.

I like the last option best. Is there any other hardware you’d consider? Yes, I could get proper servers (a friend actually offered me a couple for free), but they’re noisy and they would go in the guest bedroom. Not that I have many visitors, but a noisy fan kind of rules it out; that’s why I kept thinking of the NUC and the Raspberries.

Sorry for the long post and the much thinking aloud.

Edit: an 11th-gen NUC, second hand, with an i7 and 2.5 Gb Ethernet for 450€. Only 16 GB of RAM and a 256 GB disk (I would recycle my NVMe anyway).

Yet another edit:
An AMD Ryzen 7 5700U, same box size as the usual Intel NUCs, 16 GB RAM, 512 GB M.2 SSD (not NVMe), 2x2.5 Gb Ethernet, for roughly 400€ shipping included.


I am literally sick with envy hearing you could get another 10 Gb connection. I’d settle for being able to get 1 x 1 Gb!

There are two topics with much discussion of hardware, for when it’s not possible or sensible to run nodes on what people already have:


I suggest moving the discussion to one of those and reading what people have previously posted. The ‘Low tech’ one is largely about minimal setups; the ‘Best’ one is largely about the more extreme end of home setups.

But it sounds like you are considering the right things in terms of efficiency and what is practical and desirable in your home. Apparently we have to remember that it’s for other people to live in as well, not just for running computers!


Can I move posts myself, or do I need to ask a moderator? I can’t see the option, at least on mobile.

By the way, it would not be two 10 Gbps lines, rather 1 Gbps (Orange) + 10 Gbps (Digi).
When Orange activates 10 Gbps it will only be 5€ extra, so I’m going for it for sure, but that’s not possible yet.

We can do that for you. What do you want moved?

My previous long post to “Best Safe Node hardware”.
Thank you.

Having read your post again, I see you are talking about getting a 10 Gb connection in addition to the 1 Gb you already have, not ending up with 2 x 10 Gb!

But anyway, making use of this 10 Gb connection will take some doing! It’s a nice problem to have, but it will require some thinking, planning, some config, and not a little expense.

I’m sure they are quoting the download bandwidth there, because that is what most people care about. But we need to know the upload bandwidth, because safenodes do as much communication up to the network as down.

But let’s say it is 10Gb/s down and 2.5Gb/s up, because that would be typical.

Let’s look at some numbers of how many nodes you would be able to run and the requirements.

At the moment it looks like a node requires:-

1GB disk space
0.6MB/s up bandwidth
0.6MB/s down bandwidth
~250 connections or sessions to the internet

For all of these except the disk space, which is fixed, allow a bit extra to keep things running well. So allow:

1.2GB disk space (space for logs and binaries as well)
1MB/s up bandwidth
1MB/s down bandwidth

So if this 10Gb line is 10/2.5 asymmetric, as I predict, you’d be able to run 2,500 safenodes, because the upload bandwidth will be the limiting factor.

That would require:-

~190GB RAM
~3000GB disk
~625,000 connections

But you need to find out what the upload bandwidth is. Maybe it’s higher than 2.5Gb/s and you can run more.

You will need more than one computer, because each machine’s 1Gb interface will be the limiting factor: you won’t be able to run more than 1,000 safenodes over 1Gb.
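Putting those estimates together (a sketch of my own; note the totals above only line up if the per-node bandwidth allowance is read as 1 Mb/s, megabits, so treat that unit as an assumption):

```python
# Reproducing the capacity estimates above (my own sketch). The totals
# match only if the per-node allowance is taken as 1 Mb/s (megabit);
# that unit is an assumption here.

UPLOAD_MBPS = 2500           # assumed 2.5 Gb/s upstream on the 10 Gb line
PER_NODE_MBPS = 1            # per-node bandwidth allowance
PER_NODE_DISK_GB = 1.2       # disk incl. logs and binaries
PER_NODE_CONNS = 250         # connections/sessions per node

nodes = UPLOAD_MBPS // PER_NODE_MBPS        # upload is the limiting factor

disk_gb = nodes * PER_NODE_DISK_GB
connections = nodes * PER_NODE_CONNS
ram_gb = nodes * 0.076                      # ~76 MB/node, implied by ~190 GB

# A single 1 Gb NIC caps one machine at:
nodes_per_1gb_nic = 1000 // PER_NODE_MBPS

print(nodes, disk_gb, connections, round(ram_gb), nodes_per_1gb_nic)
# 2500 3000.0 625000 190 1000
```

The same division shows why a 1 Gb NIC caps a single machine at about 1,000 nodes, and why several machines (or faster NICs) are needed to fill the line.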

I think you can run 50 nodes on an RPi4 4GB as long as you aren’t trying to do anything else with it, so that would be 50 of them. That’s doable, though you’d need a lot of network ports, probably across multiple switches.

Or a smaller number of Intel NUCs, or the HP small-form-factor machines that have been mentioned here.

Or you could get a couple of 2nd hand servers and fill them with RAM. They will be very noisy.

Disk space is easy: you’ll be able to fit an SSD or spinning drive with enough capacity in whatever computers you use.

Connections or sessions to the internet
This is the worry. You’re going to need something better than the consumer router an ISP will provide. Even though this line has more bandwidth than most, and they will supply a router that can handle that bandwidth, it may not be powerful enough to handle that many connections; it’s just not a common use case. A computer in typical home use might have 200 to 500 sessions open, and a phone or tablet maybe half that. Multiply that by the number of users in a home or office and it might be a few thousand. Running safenodes will blow that out of the water. The ISP-supplied router may not cope. You could ask them how many connections it can handle, but I doubt they’ll supply something that can handle hundreds of thousands.

Will the ISP even allow, or be able to handle, that many connections on this line? I think you should ask them whether there is a restriction and, if not, what they can cope with.

It could be something like the MikroTik RB5009 that a couple of us are using, and I’m not even sure it will handle that many connections. A router like this is a challenge to set up unless you already work in networking. I half do, and it was a real stretch that I’ve learnt a lot from.

And obviously you’ll need to have a router with a 10Gb port on it.

Then you’ll need a 1Gb network switch or two to connect the computers to. It will need to be ‘non-blocking’, so the full 1Gb is available on every port. You will have to get into bonding interfaces between the switch and the router, because if you just connect a switch to the router with a single link it will only be able to use 1Gb.

I don’t want to put you off but these things are a challenge that will require some wrestling.

But after all this, we also can’t ignore that running this number of safenodes at a single site might not be helpful to the network in its early stages. 2,000 nodes could be 10% of the network in the early days or weeks. That will be a lot of churn when (not if) something goes wrong with your network, power or computers, or when you just want to take them offline for maintenance. It could still be something like 1% for a few months.

There is also a suspected issue of nodes not being able to contact each other when they are at the same site, which is what the discussion has been about. If that isn’t addressed by the safenode code in some way, or by NAT loopback (‘hairpinning’), it could be a problem for the network. If you are running just a few nodes it won’t matter, but a significant proportion of the network could be another matter.


So the line is symmetric. They say only 8 Gb is guaranteed for the connection, with 2 Gb reserved for “connectivity between equipment” (whatever that means; bear in mind I’m translating from Spanish, and the term isn’t clear to me in Spanish either).
I saw a review of this connection, and it says the provider declares these speeds in the contract:


  • minimum: 5,000.32 Mbps
  • average: 7,970 Mbps


  • minimum: 6,000.12 Mbps
  • average: 7,710 Mbps

The technology used is XGS-PON.
The router provided is a Zyxel AX7501-B0.
It has an SFP+ port (that’s where the fibre comes in), one 10 Gb Ethernet port and four 1 Gb ones. It also has dual-band WiFi with these specs, which I’m mentioning for a reason that will make sense later:

Band: 2.4 GHz, gross speed 1,147.1 Mb (WiFi ax + 40 MHz bandwidth + 1024-QAM + MIMO 4x4)
Band: 5 GHz, gross speed 4,803.9 Mb (WiFi ax + 160 MHz bandwidth + 1024-QAM + MIMO 4x4)


So I found some good hardware to run on this: an AMD Ryzen 9 6900HX, 8 cores/16 threads, 3.3 GHz up to 4.9 GHz, with a Radeon 680M GPU.
64 GB DDR5-4800 RAM, 1 TB M.2 NVMe PCIe 4 disk (one extra slot free).
And here comes the good part: two 2.5 Gb NICs!

640€ for that, and I wouldn’t need a switch to begin with, because I could do bonding and get 3.5 Gbps (2.5 from the 10 Gb port and 1 from any of the other four ports). Whenever I decide to grow, if I do, I’ve already checked out 10 Gb switches, but let’s leave that for the future; this equipment could get up to 5 Gbps.
I don’t know if it’s possible or sensible to bond the wireless as well; it has WiFi 6 and could add some bandwidth.
I can move the RPi4 (it has 8 GB of RAM, by the way, but the CPU is the limit there, so it doesn’t matter much) to this network as well.

Bad news: Digi does NOT give out the PPPoE credentials for the 10 Gb connection (they do for the others, like the 1 Gb one). So I guess I wouldn’t be able to use my own router, and the limit would be the number of connections rather than the line speed itself.

So I’m willing to test this; the price is affordable for me, and I can resell everything easily if I don’t want to keep it:
Mini PC + 10 Gb cable: 640€. Easy to resell, and if not, I’m a geek and here’s my new toy.
Second broadband line: 25€, minus a 5€ discount for 10 months (plus another discount any time I refer someone). And they only make you stay with them for 3 months, so it’s also easy to give up and cancel the contract.

I’ll check the numbers again to see if it isn’t overkill, but I think I should hurry up and test everything as soon as possible. I’ll see when/if/what I’m ordering…


I have a budget 10 Gb switch: 2x10 Gb + 4x2.5 Gb. I have SBCs with 2.5 Gb, so this was perfect for connecting the SBCs, a 5x10 Gb switch and a 10 Gb NAS.

Using that, you can split your 10 Gb port into one 10 Gb computer connection and four 2.5 Gb computer connections, allowing five high-speed computer connections that fully utilise the 10 Gb on the router, assuming the 10 Gb computer connection is not maxed out.

In the end you will probably find it best to split the nodes over more than one computer for various reasons, including load. Those computers then don’t need to be so powerful.


Copied over from the Discord general discussion…
Background: a twin RPi4 8 GB setup, both currently with monitors. #1 is my daily driver, booting and running Ubuntu 20.04 MATE from an external 160 GB 5400 rpm HDD (circa 2018); #2 boots and runs Raspbian Buster from an external (and ancient) 1 TB HDD. #2 is the SAFE machine, at least through launch (I may add nodes to #1 later, in keeping with the true spirit of the project).
So the questions are these: is it advantageous to run the OS from the SD card with storage on an external drive?

gparted shows partitions for boot and home, but does not show the 900+ GB partition with all my data (I mounted it at /home/media because at the time I wanted separate partitions so I could flash the OS without too much data movement), which I believe is formatted ext4. So would it be proper/useful to break off another partition of, say, 500 GB for SAFE, and what would be the appropriate format?

And now that fast little 128 or 256 GB SD card seems wasted; would it be useful for running the client/nodes (is that even possible with the OS on an external drive)?

Also, that ancient (2005 maybe) HDD has, I think, a 1 or 2 Gbps transfer rate; is that an irreconcilable bottleneck?

@happybeing, you say you have broadband through a 3G mobile network?? Do tell, because my 500 Mbps from Optimum here in the republic of Texas is showing a whopping 19.8 down and 20.8 up. I’m eyeballing Starlink with my impending exodus anyway, and it sounds like people are getting decent speeds there, but I am ever wary of redundancy and want to add a cellular HAT to my Pi; I don’t think I see even 4G/LTE available there, only 2G and 3G.


4G, not 3G. We have very good 4G coverage in the UK now, not everywhere, but I am out in the country. 5G is happening but very patchy, mainly urban.


Likewise, it’s only 4G here. Even in the small city I’m in, the only places you find 5G are the train station, the downtown main street, the main highway, and the highway-side mall. 5G is far too capital-intensive to roll out to these 4G-served areas (suburban and residential), because you need a tower every ~900 m, given that the signal radius at the top deployed speeds is at best 500 m from the tower. What happens instead is that the larger 5G signal is channelised and clocked down to 4G to reach the areas the 5G signal will never cover. The population in these currently 4G areas is sparse single-family or duplex housing, plus a few six-flat apartment buildings, so there is no way the telecom operators can justify deploying 5G: the payback is not there, and there is no way they could ever compete on a price/performance basis with the fibre FTTC deployments and the modified cable-TV-plus-FTTC deployments to the home.

So if you get a headache driving on the highway, sitting in the airport or train station, or shopping on main street or at the big mall, just think 5G: that’s what is frying you. The same goes for dense high-rise living: you are getting fried, hence lots of inflammation and health problems.

Worse yet, these profit-hungry providers (when properly incented by third-party advertisers) can crank up the power at night, and have been caught doing so in a number of countries, lately especially in the Netherlands, deliberately trying to make you sick and boosting big pharma profits in the process. It’s pure racketeering, totally evil.

We would have better 5G coverage generally if the Yanks hadn’t shat themselves over Huawei…
