You can remove a single service by giving the service name "safenode17" to safenode-manager. Use --help to check the command, but it is quite simple. Then add and start again, but with just one service.
Sorry, I just used that as an example. Every time I reset all of them and re-add, it's the same error now.
Manually removing the .service files then won't allow me to use safenode-manager reset, even with the -f flag. Maybe force should continue the process no matter what.
Do you have any idea what led to that state of affairs? I’ve never seen that error before.
I will spend some time trying to make the reset command more robust. As a general rule, though, I would recommend trying, as much as you can, not to interfere with the things the node manager is managing.
I've just been running the same commands generally. This time I got a new one:
```
Jun 24 14:44:51 wasabi safenode[22408]: Error:
Jun 24 14:44:51 wasabi safenode[22408]:    0: Node terminated due to: "HardDiskWriteError"
Jun 24 14:44:51 wasabi safenode[22408]: Location:
Jun 24 14:44:51 wasabi safenode[22408]:    sn_node/src/bin/safenode/main.rs:385
```
Are you in the Discord? Ping me there. I had a similar issue on 2 nodes; it was an easy fix.
For the benefit of other users, best to post here.
Understood, but this is a guide, not a support channel. Discord is way faster. Also, these could be unique issues that won't apply to everyone.
In your case, try the following…
Reboot your machine so all nodes stop running
Then
cd ~/.local/share/safe/
Then
ls
Confirm you see the two directories listed below
node safenode-manager
If so, do the following:
rm -r node/
That will manually blast out the node info and let you re-add your nodes fresh. I had the same issue on one of my instances; no matter what I did, it wouldn't work.
Then run the add command to add your nodes again.
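The cleanup steps above can be sketched as a small script. This is a minimal sketch assuming the default data directory from the steps (~/.local/share/safe); make sure all nodes are stopped first (the reboot step), since this deletes the node data out from under them:

```shell
# Minimal sketch of the manual cleanup described above.
# Assumes the default data directory ~/.local/share/safe used in this thread.
safe_dir="${HOME}/.local/share/safe"
if [ -d "${safe_dir}/node" ]; then
    rm -r "${safe_dir}/node"   # blasts out the node info
    echo "removed ${safe_dir}/node; re-add your nodes fresh"
else
    echo "nothing to remove at ${safe_dir}/node"
fi
```

After this, the add command should register your nodes from a clean slate.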
Based on that error, it looks like it’s possible that your disk ran out of space.
That will generally not be good if it's on the root partition. You wouldn't have been permitted to add other services, and that could have resulted in the weird things you saw with the node manager.
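If you suspect this, a quick check of how full the partition holding the node data is (using the default path from the cleanup steps earlier in this thread; adjust if yours differs):

```shell
# Check usage on the partition that holds the node data, a likely cause
# of "HardDiskWriteError". Path is the default data dir from this thread.
data_dir="${HOME}/.local/share/safe"
target="${data_dir}"
[ -d "${target}" ] || target="${HOME}"              # fall back if the dir is absent
usage=$(df -P "${target}" | awk 'NR==2 {print $5}') # e.g. "42%"
echo "partition holding ${target} is ${usage} full"
```

Anything at or near 100% would explain both the node crash and the node manager misbehaving.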
This is a really helpful guide, thanks for putting it together! I am running a VPS with Ubuntu 24.04, and I ran into one small issue so I thought I’d share the error / fix here. Following the guide exactly as written, I made it to safenode-manager start --interval 90000 --service-name safenode1
at which point I began encountering an error:
Attempting to start safenode1...
Failed to start 1 service(s):
✕ safenode1: Failed to connect to bus: Permission denied
Error:
0: Failed to start one or more services
Location:
sn_node_manager/src/cmd/node.rs:759
As it turns out, the fix for this was to ensure the safenode services are configured to run as root, not the newly created user (at least in my case, that was the fix). When I switched some things around so that the add command either runs as root or runs with the --user root option, the nodes came online as expected.
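For anyone hitting the same thing: that "Failed to connect to bus: Permission denied" error usually means the process couldn't reach the system D-Bus with enough privileges to manage system services. A rough diagnostic sketch (assumes a systemd host; this is my own check, not part of the guide):

```shell
# Check whether the current user can talk to the system service manager.
# If not, that matches the bus error above, and running the add step as
# root (or with --user root, as described) is the likely fix.
if systemctl list-units --type=service >/dev/null 2>&1; then
    bus_status="ok"
    echo "$(id -un) can talk to the system bus"
else
    bus_status="denied"
    echo "$(id -un) cannot talk to the system bus; try the add step as root"
fi
```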
This guide saved me a lot of time on initial troubleshooting, thank you again for putting it together!