safe-cli is a command-line interface (CLI) for interacting with the SAFE Network, which is a decentralized platform for data storage and communication. Here are some basic steps for using safe-cli:
Install safe-cli on your system.
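One way, assuming the CLI is published on crates.io as the sn_cli crate (check the SAFE Network docs for the currently recommended install method), would be:
cargo install sn_cli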
Create an auth key. An auth key is a digital identity that you can use to access the SAFE Network. To create an auth key, run the following command:
safe auth new
Follow the prompts to create a new auth key. You will be asked to enter a name for the key and a passphrase to protect it. Make sure to choose a strong passphrase and remember it, as you will need it to use the key later.
Use the safe command to interact with the SAFE Network. Here are some examples of things you can do with safe-cli:
List the files in your root directory on the SAFE Network: safe files ls
Upload a file to the SAFE Network: safe files put /path/to/local/file
Download a file from the SAFE Network: safe files get /path/to/remote/file /path/to/local/destination
For a complete list of commands and options available with safe-cli, you can run safe --help or consult the SAFE Network documentation at https://docs.safedev.org/.
Here are some additional resources that you might find helpful:
Documentation will become much better for us soon, too.
what do I need to do to build a node for the safe network and run a testnet?
To build and run a node for the SAFE Network, you will need to follow these steps:
Install Rust: The SAFE Network is built using the Rust programming language, so you will need to install it on your machine first. You can do this by following the instructions on the Rust website (https://www.rust-lang.org/tools/install).
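On Linux or macOS this is usually done with rustup, for example:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh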
Clone the SAFE Network repository: Next, you will need to clone the SAFE Network repository from GitHub.
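Assuming the code lives in the maidsafe/safe_network repository on GitHub (the repository path here is an assumption), that would look something like:
git clone https://github.com/maidsafe/safe_network.git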
Build the SAFE Network: Once you have cloned the repository, navigate to the root of the repository and run the following command to build the SAFE Network:
cargo build --release
Run a testnet: To run a testnet, you will need to start the SAFE Network node by running the following command:
cargo run --release -- --test-network
This will start a testnet instance of the SAFE Network, which you can use to test your applications and integrations.
Note that these are just the basic steps to get started with the SAFE Network. There are many more advanced options and configurations that you can use to customize your node and testnet setup. You can find more information in the SAFE Network documentation (https://docs.safedev.org/).
You think this model will be free? You’re putting in work as testers, helping hone a product without remuneration. There will be a bait and switch at some point, because that’s how this works.
There are open source models not far behind. Maybe help those instead.
The fact that this is the level of tech these days is fascinating. It’s a big issue for so many reasons. I don’t know how free it will stay. They do have pricing, but I would pay for this, actually. If others are more open and free, then … even better.
To me the realisation of this level of output is astonishing. It will soon remove many barriers. For instance, in the questions above you could just ask it to write Rust/Python etc. code to create whatever app you can dream of and get pretty close to an implementation.
In the meantime, it’s looking like a fair swap for now, based on less than an hour working with it.
And if we can teach a bot how to get folk to interact with SAFE, then it’s a win-win.
ChatGPT is only a demo for the moment, and there is no information yet about future pricing or an SLA… It is interesting to compare ChatGPT to open-source alternatives: GPT-J, GPT-NeoX, OPT, and BLOOM. And no doubt new open-source AI models are going to be released soon, with even better accuracy.
I will ask it to write itself, though I have already asked it to write a command-prompt client in Python that saves conversations automatically, letting me bypass the web interface. It worked perfectly. The prompt was:
Write python code to access openai and use threaded conversations. Save conversations in a text file named using the initial prompt in a directory called gtp
If it stalls, tell it to continue (the servers are overloaded).
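For reference, here is a minimal sketch of what such a script might look like. It assumes the openai Python package (v1+ client interface), an OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model; the slug-based file naming and the gtp directory follow the prompt above, but all details are illustrative rather than a reconstruction of the actual generated code.

```python
# Minimal terminal chat loop against the OpenAI API, keeping the whole
# conversation ("threaded") and logging it to a file named after the
# initial prompt, inside a local "gtp" directory.
import os
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = []      # running conversation history sent with every request

os.makedirs("gtp", exist_ok=True)
log_path = None

while True:
    prompt = input("you> ").strip()
    if not prompt:
        break
    if log_path is None:
        # name the log file after the first prompt
        slug = re.sub(r"[^A-Za-z0-9]+", "_", prompt)[:40] or "conversation"
        log_path = os.path.join("gtp", slug + ".txt")
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; substitute as needed
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(f"you: {prompt}\n\nassistant: {reply}\n\n")
```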