Alternatively, you can create the nodes in safenode-manager, leave them stopped, add the --upnp flag to the service definitions, then start them up in safenode-manager.
OK, but since I haven't used the manager even once, I'd rather wait until the next release. I'm such a noob that everything CLI is more or less a pain. Copy-paste I can manage, but nothing even remotely creative.
Just upgrade the node manager; --upnp is already in the add command.
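For a copy-paste workflow that's only two commands. A sketch (check safenode-manager add --help on your version for the exact flags; --count as the way to say how many nodes to add is my assumption):

```shell
# Add nodes with UPnP enabled in their service definitions, then start them.
safenode-manager add --count 5 --upnp
safenode-manager start
```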
- Stop running nodes within a port range.
Rocky Linux 9.4, bash
MINPORT=30900
MAXPORT=30950
for i in $(ls ~/.local/share/safe/node/)
do
  # PID of the safenode process that has this node's log file open
  iPID=$(lsof ~/.local/share/safe/node/$i/logs/safenode.log | grep "safenode " | grep -o "[0-9]*" | head -1)
  # UDP port that PID is listening on (the 7th number in the netstat line)
  iPORT=$(netstat -lnup 2> /dev/null | grep "$iPID/safenode" | grep -o "[0-9]*" | head -7 | tail -n1)
  if [[ -n $iPORT && $iPORT -le $MAXPORT && $iPORT -ge $MINPORT ]]; then
    echo "stopping node peer-id $i at port $iPORT"
    # remember the port so a replacement node can reuse it later
    echo "$iPORT" > ~/.local/share/safe/node/$i/PORT
    kill $iPID
  else
    echo "skipping pid $iPID on port $iPORT (not in range or not running)"
  fi
done
- Get path to log file by PORT number.
Rocky Linux 9.4, bash
Works great with vdash, see commented-out line.
PORTNUMBER=30910
for i in $(ls ~/.local/share/safe/node/)
do
  iPORTRUN=""
  # Prefer the PORT file saved by the stop script; fall back to the live port
  iPORTLOG=$(cat ~/.local/share/safe/node/$i/PORT 2> /dev/null)
  if [[ -z $iPORTLOG ]]; then
    iPID=$(lsof ~/.local/share/safe/node/$i/logs/safenode.log | grep "safenode " | grep -o "[0-9]*" | head -1)
    iPORTRUN=$(netstat -lnup 2> /dev/null | grep "$iPID/safenode" | grep -o "[0-9]*" | head -7 | tail -n1)
  fi
  echo -n "."
  if [[ $iPORTRUN -eq $PORTNUMBER || $iPORTLOG -eq $PORTNUMBER ]]; then
    RESULT="$HOME/.local/share/safe/node/$i/logs/safenode.log"
    echo ; echo "$RESULT"
    break
  fi
done
#vdash $(realpath $RESULT)
- Wait for CPU% to decrease below desired threshold (CPUMAX)
Extracted from the original script. Could replace a fixed-duration sleep.
CPUMAX=90.0 # Required cpu % or lower to exit monitoring loop
cpu=100
while [ 1 -eq "$(echo "$cpu > $CPUMAX" | bc)" ]
do
  # Busy % over a 1-second window: compare two samples of the aggregate
  # 'cpu ' line in /proc/stat (fields: user nice system idle; busy = user+system here)
  cpu=$(awk '{u=$2+$4; t=$2+$4+$5; if (NR==1){u1=u; t1=t;} else print ($2+$4-u1) * 100 / (t-t1) ; }' <(grep 'cpu ' /proc/stat) <(sleep 1;grep 'cpu ' /proc/stat))
  echo -n "."
done
echo "reached $cpu %"
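If the same check is needed in more than one script, the sampler can be wrapped in a small function. A sketch using the same /proc/stat fields as above (Linux-only; cpu_now is just a name I made up):

```shell
# Print the CPU busy percentage over a 1-second window, from two samples
# of the aggregate 'cpu ' line in /proc/stat.
# Fields used: $2 user, $4 system, $5 idle (nice time is ignored,
# matching the loop above).
cpu_now() {
  awk '{u=$2+$4; t=$2+$4+$5; if (NR==1){u1=u; t1=t} else printf "%.1f\n", (u-u1)*100/(t-t1)}' \
    <(grep 'cpu ' /proc/stat) <(sleep 1; grep 'cpu ' /proc/stat)
}
```

The wait loop then shrinks to: while [ 1 -eq "$(echo "$(cpu_now) > $CPUMAX" | bc)" ]; do echo -n "."; done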
It was in as a safety valve: I was experiencing high CPU on VPSs, with them ending up staying in the 90% range.
With this script on a 5 min cron job it was making things a lot worse, as it runs
safenode-manager status --details
which refreshes the node registry, consuming a lot of CPU, and takes a long time to complete.
Also, on a few occasions I think the first run of the status command had not completed when the next one started, which messed the node registry up.
I have now moved to a 15 min monitoring cycle and juggled my systems, so I'm hopeful I don't see the same high CPU usage again.
@drirmbda These look very useful indeed.
Could possibly use whiptail to ask for port numbers, acceptable CPU, etc. Even fix the paths depending on whether we are using /var/safenode-manager or ~/.local/share/safe/node …
Could you put them up on safenetforum-community · GitHub and I’ll make a PR?
If you started the nodes with a known RPC port then you could use grpcurl to grab the status off one node at a time, reducing the CPU load considerably. The high CPU usage is not so much the node manager but the nodes it grabs the status from using RPC calls. When I do a grpcurl RPC status call on a node you can see the spike in CPU usage; now imagine doing it for 20 nodes all at once.
The node manager allows you to specify the RPC port (a range if more than one node is added), which it passes to the node software, or you can run the node yourself, giving it the RPC port.
Yeah, I agree, little cut-and-paste code snippets collected here could be wrapped into functions, or turned into a small collection of command-line tools, etc.
Feel free to take some and place them somewhere on safenetforum-community, in an existing repo or a new one. I could then add to it. That may be faster, because I don’t really know yet where to start, how things work there, etc.
All in here then
I was thinking about doing it that way and going back to running nodes directly like we used to do.
The reasons I've got for not going that route are that the node manager is in development, and feedback is useful for the team, for finding any bugs or ideas for additions.
Also, the NTracking dashboard is already set up to use the node manager, so when newcomers start arriving, if they would like the dashboard, it will make helping them set it up a whole lot easier if I can point them to node-manager for their node-running needs.
You gave me an idea, thanks @neo
@chriso you know how much I love my --intervals. Would it be possible to have an --interval flag on the safenode-manager status
command so I could even out the CPU spikes?
I’m extracting the info via ‘systemctl status safenodeX’ and then regexing the port info out of the response string… It brought the load from monitoring down a lot…
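A sketch of that approach. The unit name (safenodeX) and the assumption that a --port <n> argument appears in the ExecStart line shown by systemctl status are both specific to how the services were defined; adjust to your setup:

```shell
# Pull the first '--port <n>' value out of whatever is piped in.
extract_port() {
  sed -n 's/.*--port[= ]\([0-9][0-9]*\).*/\1/p' | head -1
}

# Ask systemd for the unit's status and regex the port out of it,
# e.g.: unit_port safenode7
unit_port() {
  systemctl status "$1" 2> /dev/null | extract_port
}
```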
I find it easier to just know the port numbers when starting the nodes, by setting them to what I want. And since it's so easy to do at start time, I just do it that way.
And with more than one device it's almost essential, since another device might already be using a port number that this device would otherwise choose to listen on.
This would be an interval between determining the status of each service? I think that should be fine, yeah.
Yes just like you said
Added it to the list:
Suggestion: Sweep all node wallet balances to a specified wallet.
I think this would be possible if the script moved the client wallet out of the way, and then moved each node wallet into place while creating the send and saving the output to a file, and then moving the main wallet back into place.
Finally it does the receive for each transaction saved in the file earlier.
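A rough bash sketch of those steps. Everything here is an assumption based on the description above: the wallet paths, the placeholder address, and the idea that safe wallet send prints a transfer that safe wallet receive can later redeem (output formats unverified). It is a sketch of the procedure, not a tested tool:

```shell
# Hypothetical sweep, following the steps described above.
CLIENT_WALLET=~/.local/share/safe/client/wallet
MAIN_ADDR="<main-wallet-address>"   # placeholder

mv "$CLIENT_WALLET" "$CLIENT_WALLET.bak"   # move the client wallet out of the way
for i in $(ls ~/.local/share/safe/node/); do
  mv ~/.local/share/safe/node/$i/wallet "$CLIENT_WALLET"   # node wallet into place
  BAL=$(safe wallet balance)                               # sweep the full balance
  safe wallet send "$BAL" "$MAIN_ADDR" >> ~/sweep_transfers.txt
  mv "$CLIENT_WALLET" ~/.local/share/safe/node/$i/wallet   # put the node wallet back
done
mv "$CLIENT_WALLET.bak" "$CLIENT_WALLET"   # restore the main wallet

# Finally, with the main wallet back in place, do the receive for each
# transfer saved in ~/sweep_transfers.txt (parsing depends on the
# unverified output format of 'safe wallet send').
```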
-
Feature: Sweeping nanos
I mentioned here that it would be nice if safe wallet would support --peer-id to point to a node wallet, just like safe wallet balance does.
For the time being, moving a node's wallet directory to the client directory is not working yet. Using safe wallet balance I got this: I/O error: File exists (os error 17).
Is some tmp file interfering?
-
Feature: Reading node status
Similarly, it would be nice if safenode --status would report node status.
For the time being I am considering @neo's suggestion to use grpcurl.
I have started a node including --rpc <IP>:<RPCport> and now am trying to figure out the last step. Something like:
grpcurl -plaintext -proto safenode.proto <IP>:<RPCport> SafeNode/NodeInfo
Errors with: Error invoking method "SafeNode/NodeInfo": target server does not expose service "SafeNode"
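That particular grpcurl error usually means the service name needs to be fully qualified with the proto package. If the safenode proto declares a package named safenode_proto (an assumption; check the package line at the top of the .proto file), the call would look like:

```shell
# Fully qualified service name: <package>.<service>/<method>
grpcurl -plaintext -import-path . -proto safenode.proto <IP>:<RPCport> safenode_proto.SafeNode/NodeInfo

# Or, if the server has gRPC reflection enabled, ask it what it exposes:
grpcurl -plaintext <IP>:<RPCport> list
```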
- Kill and restart node
Rocky Linux 9.4, bash
This is useful for upgrading nodes (after upgrading the binaries) or changing certain options, such as --owner, etc (rpc, ports, logging level,…).
DISCORD_ID="readable_discord_user_name"
MINPORT=33300
MAXPORT=33309
for i in $(ls ~/.local/share/safe/node/)
do
  iPORT=$(cat ~/.local/share/safe/node/$i/PORT)
  if [[ -n $iPORT && $iPORT -le $MAXPORT && $iPORT -ge $MINPORT ]]; then
    echo ; echo "Replacing node with peer-id $i at port $iPORT..."
    # PID of the safenode process that has this node's log file open
    iPID=$(lsof ~/.local/share/safe/node/$i/logs/safenode.log | grep "safenode " | grep -o "[0-9]*" | head -1)
    kill $iPID
    sleep 1
    echo "Starting replacement node with peer-id $i at port $iPORT..."
    screen -LdmS "safenode$iPORT" safenode --owner $DISCORD_ID --port $iPORT --root-dir ~/.local/share/safe/node/$i --log-output-dest ~/.local/share/safe/node/$i/logs --max_log_files=9 --max_archived_log_files=0
    # give the new process a moment to open its log before reading its PID
    sleep 1
    iPID=$(lsof ~/.local/share/safe/node/$i/logs/safenode.log | grep "safenode " | grep -o "[0-9]*" | head -1)
    echo "$iPID" > ~/.local/share/safe/node/$i/PID
  fi
done