Yeah, it is a bit tidier, and I did consider it. I’m pretty sure the reason I didn’t go with it was that the service manager crate didn’t support it. So I may revisit it, but it wouldn’t be a high priority.
It was only a suggestion, as my display has them all over the place: 1 sorts next to 10, and would sort next to 100 as well, if that makes sense.
As @chriso knows, I’ll make suggestions, but if it’s not in line with what is planned I’m not going to be too worried about it.
To be honest, it’s something we could add as a user-controlled setting. I know where you’re coming from; I can get a bit OCD about that kind of thing myself.
My wife hates my OCD but things need to be in order or I lose the plot
I’m not sure where to put this report, so it’s going here.
I went to add something to my path in .bashrc and saw this:
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
export PATH=/home/mav/.local/bin:/home/mav/.local/bin:/home/mav/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/us>export PATH=/home/mav/.local/bin:/home/mav/.local/bin:/home/mav/.cargo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/us>source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
export PATH=/home/mav/.local/bin:/home/mav/.local/bin:/home/mav/.local/bin:/home/mav/.cargo/bin:/usr/local/sbin:/usr/lo>source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
source /home/mav/.config/safe/env
I’m not sure what keeps adding it, but something to look into.
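For anyone wanting to check their own .bashrc, a quick way to count the duplicates (a sketch, using the default safeup path from the excerpt above):
# Count how many times the safe env line has been appended (should be 1)
grep -c 'source .*/\.config/safe/env' ~/.bashrc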
Hmm… It’s possible? From the “install and start nodes” section of the maid.sh script:
curl -sSL https://raw.githubusercontent.com/maidsafe/safeup/main/install.sh | bash
source ~/.config/safe/env
I’m a bash script dunce. Would this actually write it to .bashrc?

Would this actually write it to .bashrc?
Yep - it’s one of the things I promised @aatonnomicc I was going to sort - and then forgot all about it - sorry @aatonnomicc
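The eventual fix would be a guard along these lines in the install script, so the append is idempotent (a minimal sketch, untested; exactly where install.sh writes the line is an assumption):
# Append the source line to .bashrc only if it isn't already there
line="source $HOME/.config/safe/env"
grep -qxF "$line" ~/.bashrc || echo "$line" >> ~/.bashrc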
Brave’s Leo would sort it like this:
#!/bin/bash
# The line that keeps getting appended to .bashrc
line="source $HOME/.config/safe/env"
# Escape the forward slashes so the path can be used inside a sed address
pattern=$(printf '%s\n' "$line" | sed 's|/|\\/|g')
# Use sed to remove every occurrence after the first one (GNU sed: the
# 0,/re/ address covers up to and including the first match; ! inverts it)
sed -i "0,/^$pattern\$/!{/^$pattern\$/d}" ~/.bashrc
# Use grep to count how many occurrences remain (exact whole-line match)
occurrences=$(grep -cFx "$line" ~/.bashrc)
# Check if the number of occurrences is exactly 1
if [ "$occurrences" -eq 1 ]; then
    echo "The line '$line' appears once and once only in .bashrc"
else
    echo "Warning: the line '$line' appears $occurrences times in .bashrc"
fi
This is 100% untested and unreviewed code
Need to tidy up the PATH insertion as well:
export PATH=/home/willie/.local/share/pnpm:/home/willie/.cargo/bin:/home/willie/.local/bin:/home/willie/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/home/willie/.safe/env:/home/willie/.local/share/ntracking/:/home/willie/.local/share/ntracking/:/home/willie/.local/share/ntracking/
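A one-liner along these lines would drop the repeated entries while keeping the original order (a sketch, untested):
# Deduplicate PATH, keeping the first occurrence of each entry
export PATH="$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')"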
Now I’m scared to look in my bashrc
@chriso could you please shed some light on how node-manager finds the number of connected peers.
This morning node-manager tells me that both of my nodes have 7 connected peers.
PS C:\Windows\system32> safenode-manager status --details
=================================================
Safenode Services
=================================================
Refreshing the node registry...
============================
safenode1 - RUNNING
============================
Version: 0.105.6-alpha.4
Peer ID: 12D3KooWJ9uRrcufRntsyzjHqQFMvw7KbD6BKfmdpkRyYt8TCX3j
RPC Socket: 127.0.0.1:51147
Listen Addresses: Some(["/ip4/192.168.254.166/udp/13001/quic-v1/p2p/12D3KooWJ9uRrcufRntsyzjHqQFMvw7KbD6BKfmdpkRyYt8TCX3j", "/ip4/127.0.0.1/udp/13001/quic-v1/p2p/12D3KooWJ9uRrcufRntsyzjHqQFMvw7KbD6BKfmdpkRyYt8TCX3j"])
PID: 10224
Data path: C:\ProgramData\safenode\data\safenode1
Log path: C:\ProgramData\safenode\logs\safenode1
Bin path: C:\ProgramData\safenode\data\safenode1\safenode.exe
Connected peers: 7
============================
safenode2 - RUNNING
============================
Version: 0.105.6-alpha.4
Peer ID: 12D3KooWQT7TbefWL9yJL6WDDbh2XAmE2phNKr7piCrgkAet3T6R
RPC Socket: 127.0.0.1:51848
Listen Addresses: Some(["/ip4/192.168.254.166/udp/13003/quic-v1/p2p/12D3KooWQT7TbefWL9yJL6WDDbh2XAmE2phNKr7piCrgkAet3T6R", "/ip4/127.0.0.1/udp/13003/quic-v1/p2p/12D3KooWQT7TbefWL9yJL6WDDbh2XAmE2phNKr7piCrgkAet3T6R"])
PID: 5988
Data path: C:\ProgramData\safenode\data\safenode2
Log path: C:\ProgramData\safenode\logs\safenode2
Bin path: C:\ProgramData\safenode\data\safenode2\safenode.exe
Connected peers: 7
But if I look at my logs, for both nodes, PeersInRoutingTable and connected peers climb steadily and in unison up to my most recent entries, as seen below.
safenode1:
71951: [2024-04-07T05:19:57.930024Z INFO sn_node::log_markers] PeersInRoutingTable(153)
71947: [2024-04-07T05:19:57.929773Z INFO sn_networking::event] New peer added to routing table: PeerId("12D3KooWDjed9h85zpmW7GM1v4RQos7DeqMh523SgQC57dDxFPeX"), now we have #153 connected peers
safenode2:
33509: [2024-04-07T01:59:48.866730Z INFO sn_node::log_markers] PeersInRoutingTable(158)
33505: [2024-04-07T01:59:48.865968Z INFO sn_networking::event] New peer added to routing table: PeerId("12D3KooWQV43Ao3DiucRSbqeQw1g4GGq8isyjF2pAFG8yLmVSCh5"), now we have #158 connected peers
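For reference, one way to pull the latest figure from a node’s log (a sketch; substitute the ‘Log path’ from the status output above, and it assumes the log file is named safenode.log):
# Print the most recent PeersInRoutingTable entry from a node's log
grep -o 'PeersInRoutingTable([0-9]*)' /path/to/logs/safenode1/safenode.log | tail -n 1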
It is not clear to me where the manager is getting 7 connected peers from.
And then, for some anecdotal observation:
On Windows, both of my nodes running as a process have rewards, and neither of my nodes running as a service has rewards.
On Linux, all my nodes running as a service have rewards.
I have not yet earned a single reward on any node running as a service on Windows.
Call me a nutter, but it is what I see.
@happybeing I guess this may be of interest to you too, as you use these for vdash I believe. Perhaps you could even share some insight into something that I am not seeing.
The node manager gets the connected peers by using an RPC ‘network info’ call. So the value is whatever is returned by that command.
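If you want to query it yourself, something like this should work against a node’s RPC socket (a sketch; it assumes grpcurl is installed and that the service and method names match safenode.proto from the sn_node crate, which may differ):
# Ask a node for its network info over gRPC (RPC socket from the status output)
grpcurl -plaintext -proto safenode.proto 127.0.0.1:51147 safenode_proto.SafeNode/NetworkInfo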
What are your thoughts on this, as it is causing confusion? Are the logs of any use?
Are they incorrect? How is it that my logs tell me 153 connected peers and RPC tells me 7?
Sorry, I guess you are off; answers can wait until you are back!
Sorry, if there’s a discrepancy between the two, I think I’d need to call on @qi_ma or @joshuef to shed some light on that.

Sorry, I guess you are off; answers can wait until you are back!
That’s OK. This is definitely an interesting question that should be answered. It’s just that accounting for this discrepancy is not immediately obvious. It may actually touch on something David said in his post here, that logs might not be the best place to determine metrics. Although, it could be that the logs are correct and the information returned in the RPC call is wrong. We’d need to investigate further.

Connected peers: 7

The node manager gets the connected peers by using an RPC ‘network info’ call
This shows the connected peers reported by libp2p.

now we have #153 connected peers

PeersInRoutingTable(153)
Both of these are printouts from an internal counter of ours, which counts connected and disconnected peers. It only counts a peer’s joining or leaving activity, i.e. being added to or removed from the kad::KBucketsTable (the routing table).
It has to be noted that:
1. A peer being in the RT doesn’t mean it is connected to us at the moment. If there is no traffic, the connection may be dropped; i.e. it is in our RT, but not counted as connected.
2. A peer that is connected to us doesn’t necessarily appear in our RT; i.e. if that bucket is full, it can still connect to us, but won’t appear in our RT.
Hence, the number of connected peers reported by libp2p and the number of peers in the RT (our connected peers counter) could be totally different.
If there is some traffic-heavy activity, the connected peers reported by libp2p could be much higher. Meanwhile, if the traffic is quiet, it could be much lower.
However, a number as low as 7 does seem suspicious, and we may need to carry out some further investigation.
Thanks for the explanation Qi! It sounds like the RPC call is more informative regarding the peers actually connected at that moment in time? Or is a peer who is connected but not in the routing table supposed to be regarded differently?
You may be interested in some other reports about connected peers. The value returned by the RPC call seems to fluctuate a lot, even when each call is placed between small intervals of only a few seconds.
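A quick loop like this (reusing the status command from earlier, and assuming a bash shell) is enough to watch the figure move (a sketch):
# Poll the connected-peers figure every few seconds
while true; do safenode-manager status --details | grep 'Connected peers'; sleep 5; done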

the RPC call is more informative regarding the peers actually connected at that moment in time?
Yes.
However, I think the number of peers in the RT will normally be of more interest to a user?

is a peer who is connected but not in the routing table supposed to be regarded differently?
No
Maybe the RPC call should return both pieces of info? I.e. the peers in the RT (or even the kBucketsTable, as normally logged) plus the connected peers from the libp2p perspective.
Right, OK. I can look at extending the RPC to include the other information. As to what information would be more relevant to the user, I’d go with your judgement on that. Are we expecting the number in the routing table to be more stable without fluctuation?

Are we expecting the number in the routing table to be more stable without fluctuation?
Yes.
It should be quite stable.
If it fluctuates a lot, it may indicate some problem.

may indicate some problem
I have seen very rapid continuous fluctuations.
It’s not clear to me if that is a network problem, a bad node detection problem, or a problem with my nodes.