Hey guys, I have been playing around with Docker, and it’s possible to create a lightweight Safe node using Alpine (a light Linux distro), a few dependencies, and the installation script from the MaidSafe GitHub.
Docker runs images inside containers, which makes running them on any OS very simple. You only need to install Docker, which is widely available, and then either build the image from source or download a pre-built Docker image and run it directly. Some NAS devices, like Synology, also support running Docker images, so you can spin up a Safe node directly on your NAS if it supports Docker!
Steps:
Install Docker on your operating system
Create a folder with your project name
Copy the following code into a file called ‘dockerfile’
# Build SafeNetwork Docker container
FROM alpine:latest
LABEL version="0.1"
LABEL maintainer="DeusNexus"
LABEL release-date="2021-01-31"
# Update and install dependencies
RUN apk update
RUN apk add bash #unix shell to run install script
RUN apk add curl #cUrl to transfer data
#Make profile file with exported PATH and refresh the shell (while building)
SHELL ["/bin/bash", "--login", "-c"]
RUN echo 'export PATH=$PATH:/root/.safe/cli' > ~/.profile && source ~/.profile
#Set ENV PATH (after build will be used to find 'safe')
ENV PATH=$PATH:/root/.safe/cli
#Installation Script - MaidSafe installation script
RUN curl -so- https://sn-api.s3.amazonaws.com/install.sh | bash
#Install Safe - During Build
RUN safe node install
RUN safe auth install
#Expose PORT of the node
EXPOSE 12000
#Run command on Docker launch
CMD ["safe"]
Build the Docker image with docker build -t safe_node . while in the project folder containing the dockerfile (my image was built with Docker version 20.10.2, build 2291f61).
This will download all the dependencies and use the installation script to install safe.
After the image is built you can spin it up any time using docker run -it safe_node.
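To keep the whole sequence together in one copy-pasteable block, this is roughly what it looks like (the last line is just an example of overriding the command; since the dockerfile only sets CMD, any arguments after the image name replace the default `safe`):

```bash
# From the project folder containing the dockerfile:
docker build -t safe_node .

# Start an interactive container; with no arguments the CMD runs `safe`
docker run -it safe_node

# Only CMD (not ENTRYPOINT) is set, so the command can also be overridden, e.g.:
docker run -it safe_node safe --help
```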
It will open up into the safe CLI interface; you can use --help to see the commands. Just note that I’m still fixing some bugs, but in general it seems to be working decently.
Something still to add is a VOLUME, basically an external mount point that the container will use to store data and keep it persistent!
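A minimal sketch of how that could look, assuming the node keeps its data under /root/.safe (an assumption based on where the install script puts things; adjust the path to wherever the node actually stores its data):

```dockerfile
# In the dockerfile: declare the (assumed) data directory as a volume
VOLUME /root/.safe
```

```bash
# At run time: bind-mount a host folder so the data persists across containers
docker run -it -v "$PWD/safe_data:/root/.safe" safe_node
```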
Other cool things that could be added: a simple HTTP GUI that could restart the node with buttons, display its state, and various other things, using an Express server or an exposed Apache server.
Yes, I have seen that error before when cutting and pasting from the screen - as ever, I forget about it until reminded - thank you.
I’m still getting the JSON error though
➜ docker docker build -t safe_node .
Sending build context to Docker daemon 2.56kB
Error response from daemon: Dockerfile parse error line 13: SHELL requires the arguments to be in JSON form
➜ docker
@DeusNexus you should put the stuff from “Copy the following code to the file that will be called ‘dockerfile’” inside of a code block in your post, like this:
```text
All my
Nice text
```
Becomes
All my
Nice text
This prevents Discourse from applying formatting to the text; in this case, for instance, “--” became “–”, which broke the script when copied and pasted.
Check out the Hermitux/HermitCore unikernel from Virginia Tech; it supports Kubernetes or Docker container provisioning. You will need to re-package it to include all the libs required by the Safe node, but you can run the node binary as is; it uses a ‘trampoline’ jmp to redirect all relevant syscalls to the libs in question, meaning you can load any app binary in the container and it just runs, provided you re-package with the requisite libs for the app you’re loading. It’s very lean and supports Intel AMD64 or ARM64, which in the latter case means you can get it to run on a Raspberry Pi, Beagle, or similar.
What kind of error are you getting? Is it still saying you need to provide a JSON array of arguments?
Note that I also still get an error when trying to run authd, but the safe CLI interface and its commands are working.
I totally missed this topic while trying to do the same thing over the last few days.
I had a different goal in mind (for development). I wanted to run a full test network within the container plus a pre-authorized CLI.
One challenge I struggled with is the networking part. I resorted to --network=host, which gives the container full access to the host’s network interface and thus allows the nodes/network to function and communicate with local clients. I could not get it to work without it. But I just found out Windows does not support --network=host (it’s actually Linux-only).
Perhaps someone more enlightened than me can chime in with possible solutions. Instead of --network=host I tried --publish 127.0.0.1:12000-12011:12000-12011/udp. I run all the nodes on ports 12000 to 12011, so I assumed the network was then accessible on the host, but clients can’t connect. I don’t know enough about bootstrapping/networking to see what is wrong or how to fix it.
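For reference, these are the two run variants I’m comparing (just a sketch of the commands, not a solution):

```bash
# Linux only: share the host's network stack, so the nodes bind directly to the
# host interface and local clients can reach them.
docker run -it --network=host safe_node

# Portable alternative: publish the node ports (UDP 12000-12011) to localhost.
# Under the default bridge networking the nodes still see the container's own
# IP rather than the host's, which I suspect is what breaks bootstrapping for
# local clients.
docker run -it --publish 127.0.0.1:12000-12011:12000-12011/udp safe_node
```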