Using node manager to run a local test network

I’m trying to run a local test network to try out the examples in the safe_network README. First question: should this work or am I jumping the gun?!

I’m using the latest Rust and the latest safe_network code (just pulled), but I’m having some issues (following these instructions).

Starting the network takes a while but is reliable. Other commands can be flaky though, typically failing with “permission denied”: both the status command (see below) and getting tokens from the faucet. After carefully starting again from scratch, getting from the faucet worked and I went on to the register example, but the status command has never worked. This happens every time:

$ cargo run --bin safenode-manager --features local-discovery -- status
    Finished dev [unoptimized + debuginfo] target(s) in 0.29s
     Running `target/debug/safenode-manager status`
Error: 
   0: Permission denied (os error 13)

Location:
   sn_node_manager/src/config.rs:21

Register example

After getting the faucet command to work I have been able to create the register and write to it as alice, but the time it took to create the register and reach the text input was long (minutes).
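For clarity, the faucet step I mean is the wallet get-faucet command from the README; going from memory it’s roughly this, so treat the exact form as my recollection rather than gospel:

cargo run --bin safe --features local-discovery -- wallet get-faucet 127.0.0.1:8000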

Once created, alice seemed able to write and update the text quite quickly, but in the second terminal bob sees a blank register with no text from alice. Maybe this is expected behaviour: both appear able to write, but neither sees the other’s text, only their own?
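For reference, this is roughly how I’m launching the example in each terminal (flags as I remember them from the README, so they may not match the current code exactly):

cargo run --example registers --features local-discovery -- --user alice --reg-nickname myregister
cargo run --example registers --features local-discovery -- --user bob --reg-nickname myregister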

Here’s the output of alice’s terminal:

Starting SAFE client...
SAFE client signer public key: PublicKey(019e..2214)
Retrieving Register 'myregister' from SAFE, as user 'alice'
Register 'myregister' not found, creating it at 50f4c9(a31104(10100011)..)
Successfully made payment of 0.000000011 for a Register (At a cost per record of NanoTokens(11).)
Successfully stored wallet with cached payment proofs, and new balance 99.999999989.
Register address: "50f4c9d55aa1f4fc19149a86e023cd189e509519788b4ad8625a1ce62932d193a19ef12a469a58eb81ab145dfa8cd55b6776746b7e56ac8b516b1421edca34ff621033b7eae13ce8040d14dd0935af95"
Register owned by: PublicKey(019e..2214)
Register permissions: Permissions { anyone_can_write: true, writers: {PublicKey(019e..2214)} }

Current total number of items in Register: 0
Latest value (more than one if concurrent writes were made):
--------------
--------------

Enter new text to write onto the Register:
hi, alice here!
Writing msg (offline) to Register: 'hi, alice here!'
Syncing with SAFE in 2s...
synced!

Current total number of items in Register: 1
Latest value (more than one if concurrent writes were made):
--------------
[alice]: hi, alice here!
--------------

Enter new text to write onto the Register:
are you there?
Writing msg (offline) to Register: 'are you there?'
Syncing with SAFE in 2s...
synced!

Current total number of items in Register: 2
Latest value (more than one if concurrent writes were made):
--------------
[alice]: are you there?
--------------

Enter new text to write onto the Register:
hey bob are you therer?
Writing msg (offline) to Register: 'hey bob are you therer?'
Syncing with SAFE in 2s...
synced!

Current total number of items in Register: 3
Latest value (more than one if concurrent writes were made):
--------------
[alice]: hey bob are you therer?
--------------

Enter new text to write onto the Register:

Writing msg (offline) to Register: ''
Syncing with SAFE in 2s...
synced!

Current total number of items in Register: 4
Latest value (more than one if concurrent writes were made):
--------------
[alice]: 
--------------

Enter new text to write onto the Register:

And here’s bob’s, although he typed ‘mark’ as the user name by mistake:

Starting SAFE client...
SAFE client signer public key: PublicKey(1198..8741)
Retrieving Register 'myregister' from SAFE, as user 'mark'
Register 'myregister' not found, creating it at 50f4c9(933012(10010011)..)
Successfully made payment of 0.000000011 for a Register (At a cost per record of NanoTokens(11).)
Successfully stored wallet with cached payment proofs, and new balance 99.999999978.
Register address: "50f4c9d55aa1f4fc19149a86e023cd189e509519788b4ad8625a1ce62932d193b198cd804b1177db21bcf81d9ca8a8641d75db2c6da821aef8f5193f861859ec38438a833c20e36bf59a83be4d73ae12"
Register owned by: PublicKey(1198..8741)
Register permissions: Permissions { anyone_can_write: true, writers: {PublicKey(1198..8741)} }

Current total number of items in Register: 0
Latest value (more than one if concurrent writes were made):
--------------
--------------

Enter new text to write onto the Register:
not sure
Writing msg (offline) to Register: 'not sure'
Syncing with SAFE in 2s...
synced!

Current total number of items in Register: 1
Latest value (more than one if concurrent writes were made):
--------------
[mark]: not sure
--------------

Enter new text to write onto the Register:

Writing msg (offline) to Register: ''
Syncing with SAFE in 2s...
synced!

Current total number of items in Register: 2
Latest value (more than one if concurrent writes were made):
--------------
[mark]: 
--------------

Enter new text to write onto the Register:

11 Likes

Thanks for trying the node manager!

With respect to the issue about the status command, that is a bug and I know the cause; I’ll get a fix in today. I’ll report back in this thread when it’s in.

Regarding the registers, you may have picked up an old version of the documentation, because we’re no longer advising the use of two terminals for that example. I think the ability to edit the same register no longer applies. However, I can’t account for why it was slow.

2 Likes

Thanks Chris, no hurry. I’m using the latest safe_network docs on GitHub, and in the latest testnet we have been editing the register as different users, so long as it is created as writeable by anyone.

I’ll have a look at the example code this afternoon and update here if the behaviour is any different (speed, for example).

EDIT:

Having pulled safe_network again I see the doc for the register example has changed. Would a PR be accepted if I were to fix the example to allow the second user to pass the register’s xor address? I believe that would enable it to work as originally intended. Or is there a reason why you don’t want to show two users editing the same register?

EDIT:
I have a slightly improved example where each user can see the other’s updates, and can check for any new input from the other by entering a blank line.

Attempting to retrieve and then create the register as alice is still slow, but maybe that’s a feature of the network?

3 Likes

PR raised for the status bug.

With respect to the register example, I removed it because it wasn’t working as intended. Two users were supposed to be able to edit the same register, referring to it by name. However, the code was generating a different address for the register each time the example ran, even when using the same name. To be honest, I don’t know why that was the case.

When I queried about it, David said this:

To be honest, I don’t have enough knowledge on registers to comment further. I’m hoping to change that soon though, by writing some documentation for the API.

5 Likes

Sorry, I just realised I missed this question. Yeah, I think that would be ok, especially if it’s something we’ve been doing in the testnets, as you said.

1 Like

I think the reason the second user can’t get the register by name is that they each have a different pub key (because the xor address is created using the pub key and the nickname).

This also explains why the first user doesn’t retrieve a register they created earlier - because their key is regenerated on each run.

To make it work I added a parameter so a user can pass the xor address of the register (once somebody has created it). That works.

So I then modified the example slightly to enable each user to poll for updates without writing text to the register. So it’s not quite a chat room, but close enough.
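In rough terms, usage of the modified example looks like this; the --reg-address flag is just the name I’ve used in my local change, so treat it as illustrative:

# first user creates the register by name; the example prints its xor address (as in the output above)
cargo run --example registers --features local-discovery -- --user alice --reg-nickname myregister

# second user joins the same register by its address instead of by name
cargo run --example registers --features local-discovery -- --user bob --reg-address <register-xor-address>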

If you accept a PR I’ll update the README and submit.

7 Likes

Nice work! Yeah that would definitely be useful if you submitted the PR.

3 Likes

FWIW I’m getting the same error as @happybeing with
cargo run --bin safenode-manager -- status --details

Also, right now I can’t get the faucet to start:

Failed to claim genesis: Transfer Error Failed to send tokens due to The transfer was not successfully registered in the network: CouldNotSendMoney("Network Error GetRecord Query Error NotEnoughCopies { record_key: 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94), expected: 3, got: 1 }.").
Trying to claiming genesis... attempt 3
Loading faucet...
Loading faucet wallet... "/home/willie/.local/share/safe/test_faucet"
Loading genesis...
Sending 1288490188.500000000 from genesis to faucet wallet..

faucet.log has lots of entries like

[2024-02-04T00:47:42.920448Z DEBUG sn_networking::get_record_handler] Getting record 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94) completed with only 1 copies received, and 1 versions., and {PeerId("12D3KooWFcLcrN2zoGiLoJ1hL4Su3y5K3E2Z81SwUuHJxEiUrLv7"), PeerId("12D3KooWNzA72FWbPrcLeTmnWVpsWLyWfRYwZjACjCvZCNkdSzEm"), PeerId("12D3KooWByGbur9HZ9TgmjSDGpn2jAnowSKRag3oFc2U6iH4yy3W"), PeerId("12D3KooWR9Ptt5Ah3v9Ks6T9W3UefqDtdTiYgjm2MZjfBDAUJ6Dx")} expected holders not responded
[2024-02-04T00:47:42.920498Z WARN sn_networking] Not enough copies (1/3) found yet for 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94).
[2024-02-04T00:47:43.440819Z INFO sn_networking] Getting record from network of 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94). with cfg GetRecordCfg { get_quorum: Majority, re_attempt: true, target_record: 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94), expected_holders: {PeerId("12D3KooWFcLcrN2zoGiLoJ1hL4Su3y5K3E2Z81SwUuHJxEiUrLv7"), PeerId("12D3KooWNzA72FWbPrcLeTmnWVpsWLyWfRYwZjACjCvZCNkdSzEm"), PeerId("12D3KooWByGbur9HZ9TgmjSDGpn2jAnowSKRag3oFc2U6iH4yy3W"), PeerId("12D3KooWR9Ptt5Ah3v9Ks6T9W3UefqDtdTiYgjm2MZjfBDAUJ6Dx"), PeerId("12D3KooWCPFGczTuCUV1iztXLBXKmmD5cE2PGC414ce8HaU1hBxM")} }
[2024-02-04T00:47:43.441044Z DEBUG sn_networking::cmd] Record 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94) with task QueryId(181) expected to be held by {PeerId("12D3KooWFcLcrN2zoGiLoJ1hL4Su3y5K3E2Z81SwUuHJxEiUrLv7"), PeerId("12D3KooWNzA72FWbPrcLeTmnWVpsWLyWfRYwZjACjCvZCNkdSzEm"), PeerId("12D3KooWByGbur9HZ9TgmjSDGpn2jAnowSKRag3oFc2U6iH4yy3W"), PeerId("12D3KooWR9Ptt5Ah3v9Ks6T9W3UefqDtdTiYgjm2MZjfBDAUJ6Dx"), PeerId("12D3KooWCPFGczTuCUV1iztXLBXKmmD5cE2PGC414ce8HaU1hBxM")}
[2024-02-04T00:47:43.442164Z INFO sn_networking::event] Current libp2p peers pool stats is NetworkInfo { num_peers: 11, connection_counters: ConnectionCounters { pending_incoming: 0, pending_outgoing: 0, established_incoming: 0, established_outgoing: 11 } }
[2024-02-04T00:47:43.442173Z INFO sn_networking::event] Removing 7 outdated live connections, still have 4 left.
[2024-02-04T00:47:43.444692Z DEBUG sn_networking::get_record_handler] For record 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94) task QueryId(181), received a copy from an unexpected holder PeerId("12D3KooWS2RfDEPe41b1FkLgSoCuDfMbmoUfBJT7LuJh2hPNisbh")
[2024-02-04T00:47:43.444928Z DEBUG sn_networking::get_record_handler] For record 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94) task QueryId(181), received a copy from an unexpected holder PeerId("12D3KooWJgD37nFNDU52r1nP9rqFNRFsTyUG3DUjJbGeoC8WYN7w")
[2024-02-04T00:47:43.450301Z DEBUG sn_networking::get_record_handler] For record 153cdd(28542e01c860f392444a2e424fa829cb5b83959cd1c4a0bd0f0b7d5f92123c94) task QueryId(181), received a copy from an unexpected holder PeerId("12D3KooWAooHxuKq9C9PHqoHo5qbQzzQUS7eTYYWhhCDaJKECKSm")

I’ll save the logs and keep poking.

2 Likes

Yeah, until it’s merged, you would need to use my branch to get the resolution for the status bug.

With the genesis claim, I think right now the only real advice is to try with a new network. So run cargo run --bin safenode-manager -- kill, then run again.
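That is, something like:

cargo run --bin safenode-manager -- kill
cargo run --bin safenode-manager -- run --build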

I cloned https://github.com/jacderida/safe_network.git and built in a clean dir, deleted all ~/.local/share/safe and tried again.

Got the same errors though… so just for shits n giggles as you young folk say, I tried with sudo…


willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ sudo cargo run --bin safenode-manager --features local-discovery -- status
warning: /home/willie/projects/maidsafe/jacderida/safe_network/sn_client/Cargo.toml: unused manifest key `lints` (may be supported in a future version)

consider passing `-Zlints` to enable this feature.
warning: /home/willie/projects/maidsafe/jacderida/safe_network/sn_build_info/Cargo.toml: unused manifest key `lints` (may be supported in a future version)
--- lots of similar warnings


warning: /home/willie/projects/maidsafe/jacderida/safe_network/sn_registers/Cargo.toml: unused manifest key `lints` (may be supported in a future version)

consider passing `-Zlints` to enable this feature.
error: package `libp2p-swarm v0.44.1` cannot be built because it requires rustc 1.73.0 or newer, while the currently active rustc version is 1.71.1
Either upgrade to rustc 1.73.0 or newer, or use
cargo update -p libp2p-swarm@0.44.1 --precise ver
where `ver` is the latest version of `libp2p-swarm` supporting rustc 1.71.1

buuuut…

willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ rustup show
Default host: x86_64-unknown-linux-gnu
rustup home:  /home/willie/.rustup

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu (default)
beta-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu
1.37.0-x86_64-unknown-linux-gnu

installed targets for active toolchain
--------------------------------------

aarch64-unknown-linux-gnu
aarch64-unknown-linux-musl
armv7-unknown-linux-gnueabihf
wasm32-unknown-unknown
wasm32-wasi
x86_64-unknown-linux-gnu
x86_64-unknown-linux-musl

active toolchain
----------------

stable-x86_64-unknown-linux-gnu (default)
rustc 1.75.0 (82e1608df 2023-12-21)

However yesterday was a very long day and I may have screwed up elsewhere.

Are you using my branch? You need to run git checkout node-manager-status-fix, then try it.

1 Like

Yep, I put all your work in a separate dir and pulled and built there last night.

Checking now, I get

willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ git checkout node-manager-status-fix
Branch 'node-manager-status-fix' set up to track remote branch 'node-manager-status-fix' from 'origin'.
Switched to a new branch 'node-manager-status-fix'
willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ git pull
Already up-to-date.

Also, we have an error in the ‘help’ output for faucet:

willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ cargo run --bin safenode-manager --features local-discovery -- help
    Finished dev [unoptimized + debuginfo] target(s) in 0.25s
     Running `target/debug/safenode-manager help`
A command-line application for installing, managing and operating `safenode` as a service.

Usage: safenode-manager [OPTIONS] <COMMAND>

Commands:
  add      Add one or more new safenode services
  faucet   Add one or more new safenode services
  kill     Kill the running local network
  join     Join an existing local network
  remove   Remove a safenode service
  run      Run a local network
  start    Start a safenode service
  status   Get the status of services
  stop     Stop a safenode service
  upgrade  Upgrade a safenode service
  help     Print this message or the help of the given subcommand(s)

Options:
  -v, --verbose...  
  -h, --help        Print help
  -V, --version     Print version

and then

willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ cargo run --bin safenode-manager --features local-discovery -- start
    Finished dev [unoptimized + debuginfo] target(s) in 0.26s
     Running `target/debug/safenode-manager start`
Error: 
   0: The start command must run as the root user

Location:
   sn_node_manager/src/main.rs:622

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

Please run this from my branch:

# Make sure any existing networks are definitely dead
pgrep safenode | xargs kill -9
pgrep faucet | xargs kill -9
rm -rf ~/.local/share/safe

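# Rebuild and start a fresh local network, then check its status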
cargo clean
cargo run --bin safenode-manager -- run --build
cargo run --bin safenode-manager -- status

Please show me some of the output from run (don’t need all 25 nodes) and all of the output of status.

1 Like

The start command is expected to run as the root user. It doesn’t operate on a local network; it operates on services that were created with the add command, which also requires root access. I’m not seeing an error in the faucet help; what do you mean?
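As a rough sketch of that service workflow (building as your own user and only elevating for the manager binary itself; any arguments to add are omitted here, so adjust as needed):

cargo build --bin safenode-manager
sudo ./target/debug/safenode-manager add
sudo ./target/debug/safenode-manager start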

OK, more coffee, then I will do as above and report back. Thank you, excellent support on a Sunday morning :slight_smile:

1 Like

No worries, thanks for the testing. Definitely keen to see whether the status bug is solved for you; if it isn’t, it will need further investigation.

2 Likes

from run

.....
Launching node 25...
Logging to directory: "/home/willie/.local/share/safe/node/12D3KooWKj2Jit2tgH1FpbWXUEACxYPbWUSZtZ6RKV1FDMRRfgnd/logs"

Node started

PeerId is 12D3KooWKj2Jit2tgH1FpbWXUEACxYPbWUSZtZ6RKV1FDMRRfgnd
You can check your reward balance by running:
`safe wallet balance --peer-id=12D3KooWKj2Jit2tgH1FpbWXUEACxYPbWUSZtZ6RKV1FDMRRfgnd`
    
RPC Server listening on 127.0.0.1:45021
Launching the faucet server...
Logging to directory: "/home/willie/.local/share/safe/test_faucet/logs"
⠂ 1/5 initial peers found.
🔗 Connected to the Network
Loading faucet...
Loading faucet wallet... "/home/willie/.local/share/safe/test_faucet"
Loading genesis...
Sending 1288490188.500000000 from genesis to faucet wallet..
Faucet wallet balance: 1288490188.500000000
Verifying the transfer from genesis...
Successfully verified the transfer from genesis on the second try.
Genesis claimed!
Starting http server listening on port 8000...

and from status

willie@gagarin:~/projects/maidsafe/jacderida/safe_network$ cargo run --bin safenode-manager -- status
    Finished dev [unoptimized + debuginfo] target(s) in 0.24s
     Running `target/debug/safenode-manager status`
=================================================
                Local Network                    
=================================================
Service Name       Peer ID                                              Status  Connected Peers
safenode-local1    12D3KooW9vhuU7f9Z54npegmA2cLA3c7rofFdXJZ8uctHVe26qiu RUNNING              24
safenode-local2    12D3KooWBRrTjGzzu3wuAf5HZh6NZ74999j5NtX2djUH3gqKhrNm RUNNING              24
safenode-local3    12D3KooWLkWG8CgXZgYNWGP7gAvRBuzcVCKX62Nu2DwAVjZwS4ZF RUNNING               9
safenode-local4    12D3KooWCkHXmJ2aSm1MQTZ4nMs3BbMY3gy1zH5kEjY6YSku9zmq RUNNING               9
safenode-local5    12D3KooWSUjFoXtVFZDaZRSs1vduWwimQSH59t92RZM4NmuRws9s RUNNING               9
safenode-local6    12D3KooWE2CDCHG1E3Di72ZCvPKX1w9PMbbwfBDiWdcuueDKBLXf RUNNING              24
safenode-local7    12D3KooWQrye1F8f5KTu7gwANvZpfqTjvtfBx9zBHSy7rMqfZ5DP RUNNING              23
safenode-local8    12D3KooWHFaZoMCUkMMwPwZHeGYNXZsjCpwfBoJ4sSDNTiuGWX9T RUNNING               9
safenode-local9    12D3KooWHXvNNexawYWZFbYjLUv9VdyVsJrAqFLJAfexGqokwAvJ RUNNING              24
safenode-local10   12D3KooWM66wMW9gg3LKANgEeRugfdf6KYWAiNnDX8v4jzE8kPre RUNNING               9
safenode-local11   12D3KooWMTe9aUj2hL6FxeHhtSA9SnkuL1MRpkPuNUFBqs69xb6E RUNNING              23
safenode-local12   12D3KooWCiJ4jjCF3jsk7zQ4LWFQSnSDypTEEYK7XeeqDyid7Mmj RUNNING              12
safenode-local13   12D3KooWAQzdR77xwoVJcELLLSvTkrJSDeegAZ9aL1AXALJzUDfH RUNNING              22
safenode-local14   12D3KooWJEUV1K9Ltt8JSS3AVdB5WUiDUJmM1DNJDovkhzkmJUMK RUNNING              24
safenode-local15   12D3KooWEeTzm1zNeVewEbtFo3Yj6H6XAogwNvYBaZENTwMNLbTG RUNNING              14
safenode-local16   12D3KooWFRDsoLqguAPPkyoKkNy8Dm1tT3WXMGTazgMibMKKuey3 RUNNING              11
safenode-local17   12D3KooWMimmLbSVNm7BadqdHZJCu4EQj5KnRMqcusVFtXp7BAfx RUNNING               8
safenode-local18   12D3KooWMnLdPpLSwyiFcrzmsVmqCySkGwdpBGf2BLCwX9bGJ4UT RUNNING               9
safenode-local19   12D3KooWMYCkxgT3iFeQwENkrLU634J3E3GRbVx4z64LC3jTx382 RUNNING              13
safenode-local20   12D3KooWJmcKmfhMqQoQ5oFBZu5WEp9EE9wpxuVn3hNjDTjPTuXK RUNNING              24
safenode-local21   12D3KooWHk7icYGnSoxH6mxhWKq6mtpttiw4XSaTMwFoSFxUVu87 RUNNING               9
safenode-local22   12D3KooWS1FhaJCA4tz9kPdeaFhFjjP3cqrrXyYKf9zmHARXKrK1 RUNNING              12
safenode-local23   12D3KooWHSdwD7ghoWzkbmH65N4MFyZHG8Bghj8D9ARs1JV1nakM RUNNING               8
safenode-local24   12D3KooWBsDgU1rvo1P7uWkijqFxT6xhmd3uGkuxMAN1xfqovte4 RUNNING               9
safenode-local25   12D3KooWKj2Jit2tgH1FpbWXUEACxYPbWUSZtZ6RKV1FDMRRfgnd RUNNING              12

So that’s all looking good. I had already killed all safenode and faucet processes, so I’m thinking it was the complete removal of ~/.local/share/safe and the cargo clean that solved it for me.

So your fix works as expected - thank you. I expect it will get merged into maidsafe/safe_network:main very soon and everyone can play.

Thanks again - off to find something else to break :slight_smile:

1 Like

Cool, glad to see it working. Btw, if you use the kill command on the node manager, it will kill both the node and faucet processes.

2 Likes

I’m guessing the add and faucet commands should have different descriptions. It’s a minor point…

Ah ok, thanks! Yeah I’ll get that fixed.