Node Manager UX and Issues

yes, not for that, I haven’t been port forwarding since switching to --home-network but your comment on errors had me thinking about going back.


UPnP is the way to go now. It saves port forwarding, and worrying about the firewall. For Starlink I have to use --home-network since it doesn’t allow port forwarding, and it seems it doesn’t allow UPnP either.

The other ISP is fine and still waiting on router to arrive in 2 weeks

Unfamiliar with this. I enable UPnP and start with --upnp?

@riddim, you were correct, this fixed it.

sudo systemctl disable --now safenodeX
sudo rm /etc/systemd/system/safenode*
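For anyone following along, the cleanup can be sketched end to end. This demo runs in a scratch directory so it is safe to try anywhere; the comments note what the real-system equivalent would be (the daemon-reload step is the standard systemd pattern after removing unit files).

```shell
# Sketch of the cleanup above, demonstrated in a scratch directory so it is
# safe to run. On a real system the directory is /etc/systemd/system, and
# after removing the files you should run: sudo systemctl daemon-reload
tmp=$(mktemp -d)
touch "$tmp/safenode1.service" "$tmp/safenode2.service"   # fake leftover units
rm -f "$tmp"/safenode*
ls "$tmp" | wc -l    # 0 - nothing left behind
```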

Apparently. Enable it in the router and add the option


I don’t think it works with node-manager yet but success nonetheless.

wyse2@wyse2:~$ nohup safenode --upnp &
[1] 458320
Node started

PeerId is 12D3KooWG1aSz7myqmPRpNYzna9UrT53ZSBD8WP45MifDqCt63qK
You can check your reward balance by running:
`safe wallet balance --peer-id=12D3KooWG1aSz7myqmPRpNYzna9UrT53ZSBD8WP45MifDqCt63qK`

And I have records.

wyse2@wyse2:~/.local/share/safe/node/12D3KooWG1aSz7myqmPRpNYzna9UrT53ZSBD8WP45MifDqCt63qK$ ls record_store | wc -l

so now I have nodes started in all three ways. nice


I think someone mentioned you can add the --upnp flag to the startup command in the service file in /etc/systemd/system after adding the service (and before starting it). This should theoretically work.


I went through and added the flag for JSON logging to the service definitions after the fact on all the nodes, no problem. I don’t see why upnp would be any different. Daemon-reload and restart.

As it looks like you found out, this is from leftover service definitions. I’ve had this happen a few times when doing things manually and not cleaning up properly: things are left there that node-manager isn’t expecting and won’t look for, because they’re not in the registry.


changed it from --home-network to --upnp in the service definition - daemon-reload … node-manager stop and start … we’ll see if they continue running as usual and keep earning :slight_smile:


oh no …

when I had a look at the service it was still started with the --home-network flag …

… only when starting directly with systemctl did it start correctly … ( systemctl start safenode1 ) … and now I’m seeing the automatically added port forwarding in the router settings too …

… I don’t really understand why this would be the case … but this is what I saw here just now … (anyway … the nodes are running with upnp now, I think)


I did the same change, all nodes still earning. Some sed magic on the safenode* units in the systemd directory did the trick, followed by a restart of the services with safenode-manager.
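For reference, the sed edit can be sketched like this. The demo runs against a scratch file so it is safe to try; the ExecStart line shown is an assumed example of what a safenode unit might contain, so check your own unit file before running anything against the real directory.

```shell
# Demo of the edit in a scratch file. On a real system you would target the
# actual units, then daemon-reload and restart, roughly:
#   sudo sed -i 's/--home-network/--upnp/' /etc/systemd/system/safenode*.service
#   sudo systemctl daemon-reload
unit=$(mktemp)
# Assumed shape of the ExecStart line - verify against your own unit file first.
echo 'ExecStart=/usr/local/bin/safenode --home-network' > "$unit"
sed -i 's/--home-network/--upnp/' "$unit"
cat "$unit"    # ExecStart=/usr/local/bin/safenode --upnp
```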


This is a bug that was also seen at various points by @Southside .

It looks like additional characters are being written out to existing service files (“multi-user.targetget”, with an extra “get” there). I’m not sure if this is a bug in the underlying service manager. The only problem is, I’ve never reproduced it myself.
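A quick way to check whether any of your unit files have picked up the corrupted suffix is a grep over the systemd directory. Shown here against a scratch file so it is runnable anywhere:

```shell
# Scan for the duplicated 'multi-user.targetget' suffix. On a real system:
#   grep -n 'targetget' /etc/systemd/system/safenode*.service
unit=$(mktemp)
printf '[Install]\nWantedBy=multi-user.targetget\n' > "$unit"   # simulate the bug
grep -n 'targetget' "$unit"    # 2:WantedBy=multi-user.targetget
```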


K, your original reset instructions fixed it.

Will node-manager officially support upnp anytime soon do you think?
Without the need to mod anything.


Yeah, I have a branch in flight right now. Hoping to PR it today.


I can only say I have never seen this sort of error on starting any other service.

What part of the code assembles the “multi-user.target” line?

It’s the service manager crate that writes the service file.


It seems to have missed yesterday’s release by the skin of its teeth. I tried to build latest but the build failed, so I grabbed your branch and am now rocking safenode-manager with --upnp :clap:


Should node-launchpad issues go here or is a separate thread a better idea?

Anyhow… pulled and built the latest with --all-features cos I’m lazy. Tell me if I’m wrong and should use different build parameters.

This is what I got

willie@gagarin:~/projects/maidsafe/safe_network$ ./target/release/node-launchpad  -V
node-launchpad 0.1.3-e52a1dda (2024-05-16)

Authors: MaidSafe Developers <>

Data directory: /home/willie/.local/share/safe/launchpad
willie@gagarin:~/projects/maidsafe/safe_network$ ./target/release/node-launchpad  
node-launchpad failed:
   0: Failed to parse app data

Any hints?

Well apart from running it with sudo… that works, but I thought that the necessity for sudo was removed???

So I see an improved version of the TUI from last week. Pressed Ctrl+g to start nodes, allocated 40GB, and I am informed 8 nodes will be started.
7 out of 8 start OK. I highlight no. 7, which is stalled at “ADDED”, and hit Ctrl+g to start it, which killed the TUI and gave me this:

willie@gagarin:~/projects/maidsafe/safe_network$ sudo !!
sudo ./target/release/node-launchpad  
[sudo] password for willie: 
node-launchpad failed:
   0: missing field `user_mode` at line 1 column 3747
willie@gagarin:~/projects/maidsafe/safe_network$ sudo ./target/release/node-launchpad  
node-launchpad failed:
   0: missing field `user_mode` at line 1 column 3747

Trying to restart the TUI failed with the same message so I guess I will need to clean up the running nodes and start again.
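The `missing field \`user_mode\`` message looks like a deserialisation failure on the app data the launchpad saved before the field existed. The exact filename is not shown in the error, only the data directory (~/.local/share/safe/launchpad), so the check below runs against a scratch file with hypothetical fields to illustrate the idea:

```shell
# Demo on a scratch file; on a real system you would point this at the JSON
# the launchpad persists under ~/.local/share/safe/launchpad (filename unknown).
data=$(mktemp)
# Hypothetical old-format app data with no user_mode field:
echo '{"nodes": 8, "storage_gb": 40}' > "$data"
if grep -q '"user_mode"' "$data"; then
  echo "field present"
else
  echo "field missing - deleting the file lets it be regenerated on next start"
fi
```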

Started again and allocated 80GB == 16 nodes.

Highlighting any of the nodes that failed to start and doing Ctrl+g bombs me out of the TUI and gives me a similar error as above (different line number, but I started 16 nodes this time, not 8).


I think a separate thread on the TUI would be better. There might be a lot to discuss on TUI UX, which is quite a different discussion to the node manager CLI. I also haven’t worked on it yet myself.


@chriso I don’t know if it is by design, but the version I built from your branch requires sudo for status.

wyse1@wyse1:~$ safenode-manager status
wyse1@wyse1:~$ sudo safenode-manager status
[sudo] password for wyse1: 
║   Safenode Services   ║
Refreshing the node registry...
Service Name       Peer ID                                              Status  Connected Peers
safenode1          12D3KooWNd8GWy5dgHYqimRb9WCJSt1EBXfVFFu2KxSMdUqQPhFa RUNNING               8
safenode2          12D3KooWG48XGSUA9WvPkonccwp2qtg3pBnke2S1uCuAwbvAvUJ3 RUNNING              40
safenode3          12D3KooWJr5FqeCKRtDAgSDxk1KqWhpMVx2ScudNjuhxBmUYrRU9 RUNNING               9
safenode4          12D3KooWDfnPrzqSwA6KJRaPS2MuHn4Ru4p3dqf1sKVoeNJAMBaf RUNNING             414
safenode5          12D3KooWA569g3RovnRQ8zW5drfBiemufkhM1G7pG3mXC3DsXrzn RUNNING               7

Hmm, I think this has been an unintentional side effect of separating the local network status from the service-based status. So the status command now only returns the status of service nodes, and local status returns the status of any local networks.

Thanks for reporting it. I’ll note it as a bug.


Even after reverting to 0.7.5 I still need sudo.
Do you have any wise advice?

1 Like