Can I ask one question with two parts that no one has answered? All I get is crickets.
- Does the upgraded node keep the old node’s PeerID?
- Does the upgraded node get a new XOR address or does it somehow set its XOR address to what the old node was?
Now the standard is one node = 5 GB, but how do I allocate 1 TB to one node? My internet connection starts to struggle as soon as I try something like 200 nodes (I would like to give as much storage as possible to the network).
Can I also compress these two commands into one?
sudo /root/.local/bin/safenode-manager add --count 2 --owner eddde --home-network
sudo /root/.local/bin/safenode-manager start --interval 20000
sudo /root/.local/bin/safenode-manager add --count 2 --owner eddde start --interval 20000
Has someone tried this already?
You cannot.
All nodes are a fixed size across the whole network; anything else would break the simple close-node design. A lot of work would be needed to support arbitrary node sizes.
And safenode-manager cannot combine commands; it is what executes the commands, and it doesn’t have a single command to both add and start.
If you want that, just run the launcher.
FYI, there is no need to use sudo any more if you used safeup to install it as a user rather than as root.
To answer the first question: yes. If the data for the node is not cleared, the peer ID is retained on any restart, including an upgrade. Honestly, I don’t know the answer to the second question. I will try to find out for you.
We need to be careful about this advice. There is an important distinction between running the node manager with elevated privileges or not. You should use sudo if you want truly long-running services that are not tied to a user session. My advice is: if you want long-running services, and you have access, then use sudo.
The answer to the second one is actually very important, because if the XOR address stays the same, there is a huge attack vector that could wipe sections of the network.
Yes, when it’s general advice; but if you know who it’s being given to, then it can be tailored.
Can I ask a further question: where is the data containing the peer ID kept? And how is the node, when starting, told to use that peer ID?
I don’t know all the details, but, the peer ID is constructed from the public key in the node’s keypair. That is in the data directory for the node. So if you start the node by pointing it to an existing data directory, you will get the same peer ID each time.
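Based on that description (and not on the actual libp2p implementation), the key property can be sketched like this. The derivation function below is purely illustrative — real libp2p peer IDs are a multihash encoding of the public key — but the property that matters is the same:

```python
import hashlib

# Illustrative sketch only: libp2p actually encodes a multihash of the
# public key, but the important property is determinism -- the same
# keypair always yields the same peer ID.
def derive_peer_id(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

key = b"example-public-key"
# Restarting with the same key material gives the same peer ID.
assert derive_peer_id(key) == derive_peer_id(key)
```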
Graph of network size over the last week, as seen from all my systems.
The truth is there if you read between the lines.
A simple ';' could be all you need here, e.g.
sudo /root/.local/bin/safenode-manager add --count 2 --owner eddde --home-network ; sudo /root/.local/bin/safenode-manager start --interval 20000
It’s still 2 distinct commands, but on one line. Crucially, ';' waits for the first command to complete before running the next one. (Use '&&' instead if you want the second command to run only when the first succeeds.) Test it with:-
sleep 5 ; echo Hi
Is it though? I don’t believe the max number of 512 KB records has increased beyond 2048. I think the other 3 GB of the recommended 5 GB is for logs.
The record size is 0.5 MB max and there are 4096 records max ==> 2 GB max.
And yes, the 3 GB is to allow space for logs to grow.
Until they test out larger node sizes, it seems this will be the size going forward, as it has been for quite a while.
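For what it’s worth, the arithmetic behind those numbers checks out — a quick sketch using the figures quoted above:

```python
# Quick check of the figures quoted above: 4096 records at 0.5 MB each,
# plus ~3 GB of headroom for logs, gives the 5 GB recommendation.
MAX_RECORDS = 4096
RECORD_SIZE_MB = 0.5           # 512 KB per record
LOG_ALLOWANCE_GB = 3.0

record_storage_gb = MAX_RECORDS * RECORD_SIZE_MB / 1024
total_gb = record_storage_gb + LOG_ALLOWANCE_GB
print(record_storage_gb)  # 2.0
print(total_gb)           # 5.0
```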
I don’t get it. Are you saying the network hasn’t declined from ~75k nodes to ~35k nodes? Are you saying the ~35k are the upgraded nodes?
That’s the network size as seen by my nodes. Old nodes can still see the network, as this was an upgrade; I have no idea how many of each version are on it.
There is no reason that all nodes wouldn’t still see each other. Otherwise we would be seeing huge problems with segmentation etc.
My current nodes claim to be seeing 100k+, if the metrics output is to be believed:
sn_networking_estimated_network_size 110592
I read the Rust code to clarify for myself where the peer ID is retained:
The peer ID is converted to a value that is stored in the secret-key file in each node dir on disk. There is a 1:1 relation between this value and the peer ID string. If there is no secret key yet, a new one is randomly generated. That’s all libp2p stuff.
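A minimal sketch of that load-or-generate behaviour — the filename "secret-key" and the 32-byte key here are assumptions for illustration, not the actual on-disk format:

```python
import os
import secrets

def load_or_create_secret_key(node_dir: str) -> bytes:
    """Reuse an existing secret key (so the peer ID is retained across
    restarts); otherwise generate a fresh random one."""
    path = os.path.join(node_dir, "secret-key")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()           # restart: same key, same peer ID
    key = secrets.token_bytes(32)     # first start: random key
    with open(path, "wb") as f:
        f.write(key)
    return key
```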
Now I need to know more about the XOR address and whether it really changes or not.
This kind of extremely fundamental and important stuff needs to be documented, in addition to the shunning criteria and so on. It’s a lot of reverse-engineering work for someone not familiar with Rust, and the asynchronous nature makes it even harder. A state machine diagram would be great!
Yes, this agrees with what @chriso says. As long as you start your node pointing to the data directory, it uses that peer ID.
This is the more important issue, much more so than the peer ID, as that is mainly a naming system. I may need to get into the code myself. I’d love for the XOR address to be written to the logs during node startup; there isn’t a reason not to.
Apart from any attack vector, one big issue is that if it keeps the same XOR address, it will have been getting shunned while it was offline. And if there is some mitigation against that, then the attack vector is guaranteed.
It’ll come I am sure
Something seems to have happened at 1300 today:-
Is this reflected in the estimate of the number of nodes?
Hi @neo
I wanted to get back to you regarding the question of the XOR address. I asked David, and it led to quite a bit of discussion.
The answer is, yes, the XOR address is retained between restarts, because it is related to the peer ID, which is retained. I don’t know exactly how they are related, so I can’t tell you any more technical detail on that, I’m sorry. But I was told they are.
David was not happy with the way this has been implemented, and we are going to redesign it such that the secret key, and hence the address and peer ID, will not be retained between restarts. That’s where we’re at now, so I can’t really provide any more detail.
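For anyone following along: the relation is Kademlia-style addressing, where the "distance" between two addresses is the integer value of their bitwise XOR. A conceptual sketch (not the project’s actual code):

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style distance: bitwise XOR interpreted as an integer.
    A node that keeps its peer ID keeps its XOR address, so after a
    restart it is 'close' to exactly the same set of records."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

assert xor_distance(b"\xab", b"\xab") == 0      # same address => distance 0
assert xor_distance(b"\x0f", b"\xf0") == 0xff   # fully differing bits
```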
Definitely needs to change. That of course means the chunks a node holds become inactive, and they should be offered up again if they somehow no longer exist on the network. That could happen if people upgraded their nodes too close to each other (a stroke-of-midnight style upgrade), or in a major outage affecting huge regions, or in the rare but certain case where one region happens to host all 5 of the nodes holding a chunk and suffers an outage.
Problems with xor being the same range from