I’m a little worried it’s going to just roll through them all and bang them all in. I’d like to be able to control it. Do one, then 3-4, then as they stabilize keep it going.
There is an rpc command to upgrade the node, so it would be easy to script a solution to do this.
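A rough shape for such a staged rollout, in shell. Everything here is a sketch: `upgrade_node` is a hypothetical placeholder for whatever RPC upgrade invocation you actually use, and the node names are just examples.

```shell
# Staged-rollout sketch: one canary node, then a small batch, then the rest.
# `upgrade_node` is a placeholder -- swap in the real RPC upgrade call.
STABILIZE=${STABILIZE:-300}   # seconds to wait between batches

UPGRADED=""
upgrade_node() {
    echo "upgrading $1"                 # placeholder for the actual RPC call
    UPGRADED="$UPGRADED $1"
}

set -- node01 node02 node03 node04 node05   # example node list

# 1) Upgrade a single canary node, then wait for it to stabilise.
upgrade_node "$1"; shift
sleep "$STABILIZE"

# 2) Upgrade a small batch of 3-4.
for n in "$1" "$2" "$3"; do upgrade_node "$n"; done
shift 3
sleep "$STABILIZE"

# 3) Once those look healthy, roll through the rest.
for n in "$@"; do upgrade_node "$n"; done
```

You could add a health check (e.g. polling the node's RPC or its log) between batches instead of a fixed sleep.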
Amazing experience thus far, thank you to everyone who made this happen (I’m loving vdash as well).
Last testnet I was able to collect records and nanos, but this time I’m collecting nanos and no records. This is a droplet which is seeing constant CPU usage; the inactive nodes were started without ‘& disown’, but the active ones are still stacking puts/gets.
As @happybeing says, this testnet is perhaps winding down. @web was getting puts and gets but no records or earnings last I heard - any updates there? Getting earnings with no records does indeed seem paradoxical, I didn’t see anyone else with that happening.
You’ve given me a little clue here as to how I can better manage my nodes in the future. Reading a Baeldung tutorial on Linux Job Control here now. Cheers.
Ahhh thanks, I hadn’t seen that. So Grafana is where these fancy-pants graphics are coming from. Very cool. Will investigate!
EDIT: last knowledge-get of the day: sn-node-manager is where people go for smoothly removing nodes, checking the status of everything, etc. Finally, the penny dropped. A great day all round
ls "$HOME/.local/share/safe/node/" | while read -r f; do echo "$f"; ls "$HOME/.local/share/safe/node/$f/record_store" | wc -l; echo; done
Edit
Just realised you are talking about something different: how to get a count of records you are definitely responsible for - not the total of records you are storing.
Ahhh dangit, apologies @happybeing! I did see a comment here and had reworked a PR to avoid removing it, but might have lost that change amongst some rebasing. I think we should set up LogMarkers for everything we want out, and that would warn us of unused variants there, I think. I’ll try and set that up to get this back in.
Has anyone created a Helm chart for deploying? I should be getting my Pi 5 POE hats in the next couple days and was going to setup a 5 node Kubernetes cluster.
I just wanted to see if someone has done that work so I could save some time.
That is a very good idea! It would make the network growth a bit more realistic.
For this network - and others before it - I think the issue we have is that just about all the people who will join in have done and with all the resources they are able to sling at it. Yes, we’re getting a couple of people coming in a few days late with a few nodes but not enough to move the needle now it’s very full.
I don’t think that even removing the directories like that will kill the processes. If you want to kill all the safenode processes you can just do
killall safenode
But I don’t think that is what you’re after either. I assume you want to kill only the ones that are unresponsive. That is a bit more involved, but can be done using ‘lsof’, which lists open files. The safenode process will have its log file open.
1. Work out which of the safenodes is not working right by looking at the logs in:-
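Once you’ve identified the bad node’s log file, lsof can map it back to a PID to kill. A minimal sketch (assuming lsof is installed), demoed here on a throwaway `tail -f` process standing in for a stuck safenode:

```shell
# Sketch: find and kill the process holding a given log file open.
# The demo uses a throwaway `tail -f` as a stand-in for a stuck safenode;
# in practice LOGFILE would be the unresponsive node's log file.
LOGFILE=$(mktemp)
tail -f "$LOGFILE" &          # stand-in process keeping the log open
sleep 1
PID=$(lsof -t "$LOGFILE")     # -t: print only the PID(s) with the file open
kill "$PID"                   # escalate to `kill -9` if it ignores SIGTERM
echo "killed $PID"
rm -f "$LOGFILE"
```

This only kills the process holding that one log file, so the healthy safenodes are left alone.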
So far so good. Most feature-rich one in many years, and probably the most stable to date. Some more tweaks are in the works, and we’ll see if they’re backwards compatible or if nodes can just upgrade.