Note: commands have a # in front to avoid accidental runs; when using them, do not copy the # into your CLI. Also note: remove the < > brackets, they are just placeholders. I used ports 45000 to 45100; you can change this to whatever works for you. I highly recommend running all of this inside screen.
1) Log in as root
2) Create a new user and add it to the sudo group - remove the < >
Add new user
#sudo adduser <newuser>
Add to Sudo group
#sudo usermod -aG sudo <newuser>
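Optional sanity check before leaving root: switch into the new account and confirm sudo works. Both commands below are standard Linux tools; <newuser> is still your placeholder.
#su - <newuser>
#sudo whoami
If the second command prints "root", the sudo group membership took effect.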
Log out as root and switch to the new user you just created; use that user from here on forward
3) System Maintenance/Firewall Setup - Note: commands A and B below, you run only one of them. The main difference is whether you need VDASH.
#sudo apt update && sudo apt dist-upgrade -y
Reboot After Update Completes
#sudo reboot
Log back in and run the following to clean up from the update
#sudo apt-get update -y && sudo apt-get --with-new-pkgs upgrade -y && sudo apt autoremove -y
Choose A or B Depending On Your Set Up
A) #sudo apt install -y unzip fail2ban && sudo apt install ufw -y && sudo apt install screen -y
- No VDASH; run this if you don't plan to run VDASH
B) #sudo apt install -y unzip fail2ban && sudo apt install ufw -y && sudo apt install screen -y && sudo apt install build-essential -y
- With a VDASH install planned. Note: build-essential is optional; only install it if you plan to use VDASH!
#sudo ufw default deny incoming && sudo ufw default allow outgoing && sudo ufw allow ssh && sudo ufw allow 45000:45100/udp
#sudo ufw enable
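Optional: confirm the firewall rules took effect. This is a standard UFW subcommand; you should see SSH and the 45000:45100/udp range (or whatever range you chose) in the output.
#sudo ufw status verbose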
4) Fail2Ban setup file. Run the command below to open a file in nano, then paste the config below. Once pasted, press Ctrl+X to exit, Y to save, and Enter to confirm the filename. Then reboot.
#sudo nano /etc/fail2ban/jail.local
Fail2Ban Config
[sshd]
enabled = true
port = 22
logpath = /var/log/auth.log
maxretry = 3
Reboot after saving and closing the Fail2Ban config
#sudo reboot
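Optional: once the server is back up, you can check that Fail2Ban loaded the sshd jail. Both commands below are standard systemd/Fail2Ban tooling.
#sudo systemctl status fail2ban
#sudo fail2ban-client status sshd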
5) Install Safeup
#curl -sSL https://raw.githubusercontent.com/maidsafe/safeup/main/install.sh | bash
Set env
#source ~/.config/safe/env
Install Node Manager
#safeup node-manager
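Optional: confirm the binary is on your PATH. which is a standard shell utility; if nothing is printed, re-run the source ~/.config/safe/env step above or open a new shell.
#which safenode-manager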
6) Add nodes - Adjust the command: remove the quotation marks and set your own values for owner, count, and node-port (a filled-in example follows the command)
#safenode-manager add --owner "yourdiscordID" --count "your-max-nodes" --node-port "portrange-portrange"
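For reference, here is what the command could look like filled in. This assumes 60 nodes and the 45000-45100 port range mentioned at the top of this guide; YourDiscordID is a placeholder, swap in your own.
#safenode-manager add --owner YourDiscordID --count 60 --node-port 45000-45100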
7) Staggered Start-Up. I did 60 nodes in batches of 15; this gave fewer failed start-ups. It takes longer, but it works. Adjust as needed. Before starting up, make sure you run the commands inside screen, including after a reboot. AUTO is the screen session name I chose; change it to whatever you like.
Create Session
#screen -S AUTO
Reattach To Existing Session (detach first with Ctrl+A then D)
#screen -r AUTO
Check For Existing Sessions
#screen -ls
Start All with 1 Command
#safenode-manager start --interval 90000
- Adjust Interval as Needed
Start In Batches (a scripted alternative that does the same thing follows Batch 4)
Batch 1
#safenode-manager start --interval 90000 --service-name safenode1 && safenode-manager start --interval 90000 --service-name safenode2 && safenode-manager start --interval 90000 --service-name safenode3 && safenode-manager start --interval 90000 --service-name safenode4 && safenode-manager start --interval 90000 --service-name safenode5 && safenode-manager start --interval 90000 --service-name safenode6 && safenode-manager start --interval 90000 --service-name safenode7 && safenode-manager start --interval 90000 --service-name safenode8 && safenode-manager start --interval 90000 --service-name safenode9 && safenode-manager start --interval 90000 --service-name safenode10 && safenode-manager start --interval 90000 --service-name safenode11 && safenode-manager start --interval 90000 --service-name safenode12 && safenode-manager start --interval 90000 --service-name safenode13 && safenode-manager start --interval 90000 --service-name safenode14 && safenode-manager start --interval 90000 --service-name safenode15
Batch 2
#safenode-manager start --interval 120000 --service-name safenode16 && safenode-manager start --interval 120000 --service-name safenode17 && safenode-manager start --interval 120000 --service-name safenode18 && safenode-manager start --interval 120000 --service-name safenode19 && safenode-manager start --interval 120000 --service-name safenode20 && safenode-manager start --interval 120000 --service-name safenode21 && safenode-manager start --interval 120000 --service-name safenode22 && safenode-manager start --interval 120000 --service-name safenode23 && safenode-manager start --interval 120000 --service-name safenode24 && safenode-manager start --interval 120000 --service-name safenode25 && safenode-manager start --interval 120000 --service-name safenode26 && safenode-manager start --interval 120000 --service-name safenode27 && safenode-manager start --interval 120000 --service-name safenode28 && safenode-manager start --interval 120000 --service-name safenode29 && safenode-manager start --interval 120000 --service-name safenode30
Batch 3
#safenode-manager start --interval 240000 --service-name safenode31 && safenode-manager start --interval 240000 --service-name safenode32 && safenode-manager start --interval 240000 --service-name safenode33 && safenode-manager start --interval 240000 --service-name safenode34 && safenode-manager start --interval 240000 --service-name safenode35 && safenode-manager start --interval 240000 --service-name safenode36 && safenode-manager start --interval 240000 --service-name safenode37 && safenode-manager start --interval 240000 --service-name safenode38 && safenode-manager start --interval 240000 --service-name safenode39 && safenode-manager start --interval 240000 --service-name safenode40 && safenode-manager start --interval 240000 --service-name safenode41 && safenode-manager start --interval 240000 --service-name safenode42 && safenode-manager start --interval 240000 --service-name safenode43 && safenode-manager start --interval 240000 --service-name safenode44 && safenode-manager start --interval 240000 --service-name safenode45
Batch 4
#safenode-manager start --interval 360000 --service-name safenode46 && safenode-manager start --interval 360000 --service-name safenode47 && safenode-manager start --interval 360000 --service-name safenode48 && safenode-manager start --interval 360000 --service-name safenode49 && safenode-manager start --interval 360000 --service-name safenode50 && safenode-manager start --interval 360000 --service-name safenode51 && safenode-manager start --interval 360000 --service-name safenode52 && safenode-manager start --interval 360000 --service-name safenode53 && safenode-manager start --interval 360000 --service-name safenode54 && safenode-manager start --interval 360000 --service-name safenode55 && safenode-manager start --interval 360000 --service-name safenode56 && safenode-manager start --interval 360000 --service-name safenode57 && safenode-manager start --interval 360000 --service-name safenode58 && safenode-manager start --interval 360000 --service-name safenode59 && safenode-manager start --interval 360000 --service-name safenode60
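If you would rather not paste the long one-liners above, a small shell script does the same thing. This is only a sketch that mirrors the batch commands in this step: it assumes your services are named safenode1 through safenode60 (as in the commands above) and reuses the same --interval value per batch; adjust NODES, BATCH, and INTERVALS to match your setup. Note the # lines inside the script are a real shebang and comments, not the copy-guard # used elsewhere in this guide.
#!/bin/bash
# Staggered start-up in batches of 15, mirroring the one-liners above.
NODES=60
BATCH=15
INTERVALS=(90000 120000 240000 360000)   # one interval per batch, same values as above

for ((i = 1; i <= NODES; i++)); do
  batch=$(( (i - 1) / BATCH ))           # 0-based batch index
  interval=${INTERVALS[$batch]}
  safenode-manager start --interval "$interval" --service-name "safenode$i"
done
Save it under a name of your choosing (start_nodes.sh is just a suggestion) and run it from inside your screen session:
#bash start_nodes.sh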
8) Optional VDASH
Install VDASH to view your nodes - the first step is installing Rust
#curl https://sh.rustup.rs -sSf | sh
Once Rust is installed, set the environment; copy the command below as-is
#. "$HOME/.cargo/env"
Install VDASH
#cargo install vdash
Load VDASH Viewer
#vdash ~/.local/share/safe/node/*/logs/safenode.log
Common Commands
Status With Details
#safenode-manager status --details
Status - Shows Peers Only
#safenode-manager status
Stops All Nodes
#safenode-manager stop
Stop Specific Nodes - Replace XX with Node Number
#safenode-manager stop --service-name safenodeXX
Start Specific Nodes - Replace XX with Node Number
#safenode-manager start --service-name safenodeXX
Reset All Nodes - If Needed - Remember to reboot after a reset
#safenode-manager reset