I had some free time this weekend, so I finally put some effort into automating, in an Alpine container, the setup of a Rust environment, compiling the safe_network binaries, and then attempting to launch a local network, so I could start peeking at the current JSON log format ahead of the next official testnet.
In the process, I get the following CLI message, although it does successfully put a ‘test.txt’ file containing ‘TEST’ as the content on the local testnet:
safe-node:/mnt/remoteceph/safe_uploads# RUST_LOG=safe=info safe files put ./test.txt
2023-03-04T17:33:06.483126Z INFO main safe::operations::auth_and_connect: No credentials found for CLI, connecting with read-only access...
FilesContainer created at: "safe://hyryyryzz4rghiq3ydapm5nt7ox7cfyk8x4jg7cnb6ep4xsjrrsiogz8b1enra?v=h1cpco46zsc83reio7oserm9oaaudqohan3m6zyrmbcteysho8x3y"
+---+------------+--------------------------------------------------------------------+
| + | ./test.txt | safe://hy8oycyxqtrfd1fqitfnfkm1jbyyf1wzdjxz534ejxhi7je7iahud8qzmuo |
+---+------------+--------------------------------------------------------------------+
safe-node:/mnt/remoteceph/safe_uploads# RUST_LOG=safe=info safe cat safe://hy8oycyxqtrfd1fqitfnfkm1jbyyf1wzdjxz534ejxhi7je7iahud8qzmuo
2023-03-04T17:33:20.644856Z INFO main safe::operations::auth_and_connect: No credentials found for CLI, connecting with read-only access...
TEST
What is required for the safe CLI to not show the above message?
Are there certain files that need to be placed in, say, $HOME/.safe/ that I might have overlooked?
If it's connecting with read-only access, is it expected to be able to perform a write operation against the network?
Side Notes:
I tried not to be dependent on the safe node run-baby-fleming command here, as it might possibly be replaced by safeup
I wanted to be able to pass more advanced options to the sn_node when spinning up a local test network
I picked PowerShell (.NET Core) as it's cross-platform, even though I don't run Windows at home
Version: sn_cli 0.72.2 & sn_node 0.77.10
Sample code (work in progress) based on existing community testnet scripts: safe_node_startup (.ps1)
I would appreciate any further clarification or suggestions on the CLI error message above, thank you!
I would just wait for the refactor, as any script is going to be out of date in a few days or weeks, when sn_node will have been renamed to safenode, the install script will be safeup, and of course there will be a new release of safe that works with those.
Off the top of my head, I think you need to create keys once: safe keys create --for-cli
Dug a little deeper; it seems this is covered in the sn_cli README.md here:
It's possible to use safe without a keypair, but in this case, it will generate a new one for each command. So if you uploaded some files, the owner of them would be assigned the one-time keypair, and effectively they would be read-only. If you want to subsequently write to them, you need to use the same keypair.
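Putting the reply and the README excerpt together, a minimal session that avoids the read-only message might look like the following. This is a hedged sketch: the only command taken directly from this thread is safe keys create --for-cli, and exact behavior may differ between sn_cli releases.

```shell
# Create a persistent keypair once, so the CLI stops generating a
# one-time keypair per command and connecting with read-only access:
safe keys create --for-cli

# Uploads are then owned by that persistent keypair, so the same
# identity can write to the FilesContainer later:
safe files put ./test.txt
```

The key point from the README is ownership: without a stored keypair, each upload is owned by a throwaway identity, so the data is effectively read-only afterwards.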
My initial thought was just to have run-baby-fleming as a command for safe, which would be safe run-baby-fleming, but it may actually be a good idea to make it a safeup command instead. I’ll consider it. Thanks for the idea.
I decided to go down the path of writing custom scripts to bootstrap the network, rather than relying on a single simple run-baby-fleming or safeup command, for a few reasons:
Flexibility to decide whether each sn_node runs in the same container or VM, or in different containers or VMs
Performance monitoring of overall memory, I/O, and CPU resources is already built in at a high level for any LXC or VM spun up in my current setup
Being able to spin up 15 sn_node containers on, say, 15 physical hosts and conduct performance-based stress tests while giving each container equal physical resources could be useful down the road
Ultimately, at home, I would like to see this type of workflow:
Master configuration file would include:
GitHub URL with branch or tag specified
Number of sn_nodes required
Desired ratio of physical hosts to sn_node containers (1:1, 1:2, etc.)
Use Terraform to spin up a container, with post-hook shell scripts to build the safe and sn_node binaries
Place the safe and sn_node binaries, along with other custom dependencies (telegraf, OTLP, Rust toolchains) and their proper configuration, on a fresh LXC, and re-package it as a new safe LXC golden image
Use Terraform to spin up containers against the existing physical hosts using the new LXC image
Start up the local sn_node processes, passing the appropriate args to the genesis node vs the remaining nodes
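As a sketch, the master configuration described above could be as simple as a shell-style env file. Everything here is my own invention to illustrate the idea, not an official format; the repository URL is the maidsafe/safe_network repo mentioned in this thread.

```shell
# testnet.env — hypothetical master configuration for the workflow above.
# Variable names are illustrative only.
REPO_URL="https://github.com/maidsafe/safe_network"  # GitHub URL to build from
REPO_REF="main"                                      # branch or tag
NODE_COUNT=15                                        # sn_node instances to run
HOST_TO_NODE_RATIO="1:1"                             # physical hosts : containers
```

The Terraform post-hook scripts could then source this file to decide how many containers to create and which ref to build.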
The PowerShell (.NET Core) script above was written quickly just to confirm whether the bare-minimum steps, done in the right sequence, would work or not, without being dependent on safe node run-baby-fleming or safeup.
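For reference, that bare-minimum sequence could be sketched in shell roughly as follows. This is a hedged outline, not the actual .ps1 script: the cargo invocation assumes the maidsafe/safe_network workspace layout, and node startup flags are omitted because they vary by release.

```shell
# Hedged sketch of the bare-minimum sequence (not the actual script).
git clone https://github.com/maidsafe/safe_network
cd safe_network
cargo build --release                 # builds the safe and sn_node binaries

# Start the genesis node first, then the remaining nodes
# (version-specific flags omitted; see sn_node --help for your release):
./target/release/sn_node &

# Finally, exercise the network:
safe files put ./test.txt
```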
When the real network launches, I would simply use the latest golden LXC image, spin it up, and have it target the real network as opposed to a testnet; it would then be easy to scale up the number of nodes at home, if desired.
Overall, during the development phase, I think using run-baby-fleming and running all the sn_node processes on one local machine is likely enough for most folks, but I worry that if the requirement changes from 15 → 33 sn_node processes down the line, with split sections and more stress tests being performed, one may need a much beefier single machine (storage / RAM / CPU).
You may be interested in looking at the sn_testnet_tool repository, particularly the aws directory, which is work I did just recently:
There is a directory alongside the AWS one for deploying testnets on Digital Ocean, but it's older and a bit more cobbled together over time.
Containers might be quite useful in the scenarios you suggest for packaging safe or safenode alongside other tools, but otherwise I'm not sure there's a great deal of benefit in using them for a node deployment. Since the binary is written in a compiled language like Rust, it already comes with almost no runtime dependencies. It's an interesting scenario nonetheless.
Thanks for the link above; I appreciate it and will check it out. I am actively using Ansible + Terraform fairly routinely against my local infrastructure, since everything runs on existing hypervisors at home.
I am really trying hard to avoid using cloud providers such as AWS, Azure, etc. for home use and home projects. I find it more rewarding to set up the infrastructure or services on my own time using home resources, and I learn a lot more in that process about that particular technology or product.
I agree with you that if folks don't have any other tools or external services to set up alongside sn_node, then the above workflow is overkill, but I am pretty sure I will want to have other tools or services combined with sn_node to form a robust alerting, monitoring, and recovery solution as a future node operator.