RewardNet [04/09/23 Testnet] [Offline]

After a successful DialNet, we're looking to test our reworked payment process. Instead of badly guesstimating costs on the network, we now ask each and every node how much it would like to be paid, and pay exactly that. (This can be tweaked easily enough to guard against bad actors.) As such, every node will receive rewards for PUTs directly!

(Note: you won't get a reward for every chunk you're expected to hold; rewards are paid only at PUT time, not at replication, to keep things simple… but you can expect to be rewarded on an ongoing basis as you accept new chunks and the network grows.)

As well as this, we've added a bit more logging around putting data, together with a new --concurrency option on the CLI which allows users to decide how much work to do concurrently when they upload data. The default -c value is 5; higher values will upload more chunks and make more payments at the same time. The same can be set for downloads. High values here will require more client resources.

e.g. safe files -c 10 upload <dir> will set a concurrency value of 10 for the client.
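For downloads, a sketch of the equivalent (assuming -c applies to the files subcommand in the same way as for uploads):

safe files -c 10 download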

Here we’re aiming to:

  • Verify the stability of the pay-each-holder approach
  • Verify Node Rewards are being received and can be queried
  • Shake out any bugs in this process

Network Details

Node version: 0.89.16
Client version: 0.81.16
Faucet URL: 139.59.181.245:8000
SAFE_PEERS: /ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib
Alternatives:

"/ip4/159.65.7.57/tcp/35367/p2p/12D3KooWL7A4j2PoZyXaiwhPxZaHcPvjNfejNgQL1BBeqV3S6bZW"
"/ip4/164.92.213.254/tcp/37491/p2p/12D3KooWJWvSH6cmg1HfskNB4PPkdfpnpqn9Z1MTtZ9ZvEEfx9pf"
"/ip4/146.190.136.181/tcp/35307/p2p/12D3KooWBWHFijhdKD2BmCqm3bn5oChXGJM1Xm2n8qw9GK6tvsD7"
"/ip4/68.183.84.0/tcp/41255/p2p/12D3KooWA1KDD7cyVzcwuPkCG4PjH4y9FzUf1qbd2mFeFK2BGJFU"
"/ip4/68.183.95.91/tcp/39401/p2p/12D3KooWMTDY3NdCB6TeQT3ZE3MHTjU9i9gJz552r7UUawi1wa96"
"/ip4/178.128.209.237/tcp/33447/p2p/12D3KooWJvW634Lfd7b5N4vRievDEXaxtnSR5msLm9WiHAKsVa34"
"/ip4/146.190.136.181/tcp/36513/p2p/12D3KooWCtPdWgn82hSVTqW7kYGhVx6SED75uEcziiWtTZtS9CiJ"
"/ip4/24.199.126.17/tcp/34181/p2p/12D3KooWMzWbX6oZtNidygRYwDwD2ykaosQsDk6m7GebrjWhDac3"
"/ip4/164.92.213.254/tcp/39861/p2p/12D3KooWGpXcWh7mhcKHXiSZRQE1cAkqpHMEB82SvqsJKF8VGdqP"

We have 101 droplets running a total of 2001 nodes. Each droplet has 2 vCPUs and 4GB of memory.


If you are a regular user, see the ‘quickstart’ section for getting up and running.

If you are a first-time user, or would like more information, see the ‘further information’ section.


Quickstart

If you already have safeup, you can obtain the client and node binaries:

safeup client --version 0.81.16
safeup node --version 0.89.16

Run a Node

Linux/macOS:

export SAFE_PEERS="/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"
SN_LOG=all safenode

Windows:

$env:SAFE_PEERS="/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"
$env:SN_LOG = "all"; safenode

Check local node’s reward balance

Your local node's peer id will be printed to the terminal on startup, along with an example command. (You can also retrieve it from the node directory.)

safe wallet balance --peer-id="<local-node-peer-id>"
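If you didn't note the peer id at startup, one way to find it (a sketch, assuming the node is using the default data directory described under 'Running a Node' below) is to list the node directory, which is named after the peer id:

# Linux
ls ~/.local/share/safe/node/

# macOS
ls ~/Library/Application\ Support/safe/node/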

Connect to the Network as a Client

Linux/macOS:

export SAFE_PEERS="/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"
safe wallet get-faucet http://139.59.181.245:8000
safe files upload <directory-path>

Windows:

$env:SAFE_PEERS = "/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"
safe wallet get-faucet http://139.59.181.245:8000
safe files upload <directory-path>

Further Information

You can participate in the testnet either by connecting as a client or running your own node.

Connecting as a client requires the safe client binary; running a node requires the safenode binary.

Obtaining Binaries

We have a tool named safeup which is intended to make it easy to obtain the client, node, and other utility binaries.

Installing Safeup

On Linux/macOS, run the following command in your terminal:

curl -sSL https://raw.githubusercontent.com/maidsafe/safeup/main/install.sh | bash

On Windows, run the following command in a PowerShell session (be careful to use PowerShell, not cmd.exe):

iex (Invoke-RestMethod -Uri "https://raw.githubusercontent.com/maidsafe/safeup/main/install.ps1")

On either platform, you may need to restart your shell session for safeup to become available.

Installing Binaries

After obtaining safeup, you can install binaries like so:

safeup client # get the latest version of the client
safeup client --version 0.81.16 # get a specific version

safeup node # get the latest version of the node
safeup node --version 0.89.16 # get a specific version

safeup update # update all installed components to latest versions

When participating in our testnets, it is recommended to use a specific version. We release a new version of the binaries every time we merge new code, which happens frequently, so many new releases will probably occur over the lifetime of a testnet. For this particular testnet, then, you may not want the latest version.

The binaries are installed to ~/.local/bin on Linux and macOS, and on Windows they go to C:\Users\<username>\safe. Windows doesn’t really have a standard location for binaries that doesn’t require elevated privileges.

The safeup tool will modify the PATH variable on Linux/macOS, or the user Path variable on Windows. The effect of this is that the installed binaries will then become available in your shell without having to refer to them with their full paths. However, if you’re installing for the first time, you may need to start a new shell session for this change to be picked up.
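To confirm the binaries are reachable once you've started a new session, you can check that your shell can find them:

# Linux/macOS
which safe safenode

# Windows (PowerShell)
Get-Command safe, safenode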

Running a Node

You can participate in the testnet by running your own node. At the moment, you may not be successful if you’re running the node from your home machine. This is a situation we are working on. If you run from a cloud provider like Digital Ocean or AWS, you should be able to participate.

You can run the node process like so:

# Linux/macOS
SN_LOG=all safenode

# Windows
$env:SN_LOG = "all"; safenode

This will output all the logs in the terminal.

Sometimes it will be preferable to output the logs to file. You can do this by running the node like so:

# Linux/macOS
SN_LOG=all safenode --log-output-dest data-dir

# Windows
$env:SN_LOG = "all"; safenode --log-output-dest data-dir

The location of data-dir is platform specific:

# Linux
~/.local/share/safe/node/<peer id>/logs

# macOS
/Users/<username>/Library/Application Support/safe/node/<peer id>/logs

# Windows
C:\Users\<username>\AppData\Roaming\safe\node\<peer-id>\logs

If you wish, you can also provide your own path:

# Linux/macOS
SN_LOG=all safenode --log-output-dest <path>

# Windows
$env:SN_LOG = "all"; safenode --log-output-dest <path>

The advantage of using the predefined data-dir location is you can run multiple nodes on one machine without having to specify your own unique path for each node and manage that overhead yourself.
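For example, a rough sketch of launching a couple of nodes on one machine with the predefined location (this assumes each node picks its own listening port by default and that your machine has the resources to run more than one):

# Linux/macOS
SN_LOG=all safenode --log-output-dest data-dir &
SN_LOG=all safenode --log-output-dest data-dir &

Each process gets its own peer id, so the logs land in separate <peer id> directories.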

Connecting as a Client

You can use the safe client binary to connect as a client and upload or download files to/from the network.

To connect, you must provide another peer, in the form of a multi address. You can find one in the ‘Network Details’ section at the top.

It is recommended to set the peer using the environment variable SAFE_PEERS. You can set this variable once and it will apply for the duration of your shell session:

# Linux/macOS
export SAFE_PEERS="/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"

# Windows
$env:SAFE_PEERS = "/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"

NOTE: If you close and/or start a new shell session, you will be required to redefine this environment variable in the new session.
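If you'd rather not redefine it each session, one option (a sketch for bash on Linux/macOS; adjust for your own shell) is to append the export to your shell profile:

# Linux/macOS (bash)
echo 'export SAFE_PEERS="/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"' >> ~/.bashrc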

As an alternative to the environment variable, it’s also possible to use the --peer argument:

safe --peer="/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib" ...

However, this requires specifying the peer with each command.

Using the Client

You’ll first need to get some Safe Network Tokens:

safe wallet get-faucet http://139.59.181.245:8000
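To confirm the tokens arrived, you can check the wallet balance (a sketch; this assumes that safe wallet balance without the --peer-id flag reports your local client wallet):

safe wallet balance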

You can now proceed to use the client, by, e.g., uploading files:

safe files upload <directory-path>

To download that same content:

safe files download

This will download the files to the default location, which is platform specific:

# Linux
~/.local/share/safe/client/downloaded_files

# macOS
/Users/<username>/Library/Application Support/safe/client/downloaded_files

# Windows
C:\Users\<username>\AppData\Roaming\safe\client\downloaded_files

To download to a particular file or directory:

safe files download [directory/filename] [XORURL]

Troubleshooting

Cleanup

If you’ve used previous versions of the network before and you find problems when running commands, you may want to consider clearing out previous data (worthless DBCs from previous runs, old logs, old keys, etc.).

# Linux
rm -rf ~/.local/share/safe

# macOS
rm -rf ~/Library/Application\ Support/safe

# Windows
rmdir /s C:\Users\<username>\AppData\Roaming\safe
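If you're in a PowerShell session rather than cmd.exe, the equivalent would be along the lines of:

# Windows (PowerShell)
Remove-Item -Recurse -Force C:\Users\<username>\AppData\Roaming\safe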

If you encounter a problem running any of our binaries on Windows, it’s possible you need the Visual C++ Redistributable installed.

30 Likes

Foist and so I am!

12 Likes

Another testnet already – Awesome! Uploading now!

14 Likes

We are being spoiled so much, didn't even comment on how great DialNet was for upload/download

Keep testing super ants :rofl:

18 Likes

Fantastic I’m on night shift so will come join in tonight :slight_smile:

13 Likes

Lots of juicy info now

root@localhost:~# safe files upload atom.mp3
Built with git version: 8faf662 / main / 8faf662
Instantiating a SAFE client...
🔗 Connected to the Network
Preparing (chunking) files at 'atom.mp3'...
Making payment for 10 Chunks that belong to 1 file/s.
Transfers applied locally
After 18.828787344s, All transfers made for total payment of Token(160) nano tokens.
Successfully made payment of 0.000000160 for 1 records. (At a cost per record of Token(160).)
Successfully stored wallet with cached payment proofs, and new balance 99.999999776.
Successfully paid for storage and generated the proofs. They can now be sent to the storage nodes when uploading paid chunks.
Preparing to store file 'atom.mp3' of 4607999 bytes (10 chunk/s)..
Starting to upload chunk #9 from "atom.mp3". (after 0 seconds elapsed)
Starting to upload chunk #0 from "atom.mp3". (after 0 seconds elapsed)
Starting to upload chunk #1 from "atom.mp3". (after 0 seconds elapsed)
Starting to upload chunk #2 from "atom.mp3". (after 0 seconds elapsed)
Starting to upload chunk #3 from "atom.mp3". (after 0 seconds elapsed)
Uploaded chunk #9 from "atom.mp3" in 10 seconds)
Starting to upload chunk #4 from "atom.mp3". (after 10 seconds elapsed)
Uploaded chunk #2 from "atom.mp3" in 11 seconds)
Starting to upload chunk #5 from "atom.mp3". (after 11 seconds elapsed)
Uploaded chunk #0 from "atom.mp3" in 11 seconds)
Starting to upload chunk #6 from "atom.mp3". (after 12 seconds elapsed)
Uploaded chunk #3 from "atom.mp3" in 11 seconds)
Starting to upload chunk #7 from "atom.mp3". (after 12 seconds elapsed)
Uploaded chunk #1 from "atom.mp3" in 12 seconds)
Starting to upload chunk #8 from "atom.mp3". (after 12 seconds elapsed)
Uploaded chunk #5 from "atom.mp3" in 6 seconds)
Uploaded chunk #4 from "atom.mp3" in 11 seconds)
Uploaded chunk #7 from "atom.mp3" in 11 seconds)
Uploaded chunk #8 from "atom.mp3" in 11 seconds)
Uploaded chunk #6 from "atom.mp3" in 12 seconds)
Uploaded "atom.mp3" in 24 seconds
Successfully stored 'atom.mp3' to b5a8285f1b16013c4f44b8dc901031e2c93b75bcfd0ec283c4092114a529c972
Writing 56 bytes to "/root/.local/share/safe/client/uploaded_files/file_names_2023-09-04_17-50-04"
15 Likes

When uploading, does the client upload the 5? copies (so 5 separate uploads for each chunk) or is the first copy just replicated by the nodes?

I notice that, at least when I'm not using no-verify, I'm uploading a huge amount of data, but my 11 test files are all just small files, so it seems very odd. I can imagine there is a fair bit of downloading to verify, but I don't understand why there is so much uploading happening.

7 Likes

This is insane progress! It feels like we are on a quick ascent, and already the Safe Network summit is peeking out of the mist before us. Cheers

13 Likes

I am getting rewards!!! Rewards balance: 0.000000004 :partying_face: :partying_face: :boom:
[insert expletive] FANTASTIC

:boom: :boom: :boom: :boom: :boom: :boom: :boom: :boom: :boom: :boom: :boom: :boom:

17 Likes

It’s just amazing.

I’m not able to test anything at the moment.

Not used to seeing sooo many test nets. Just one after another. Brilliant work.

18 Likes

My node seems to stall right at the beginning; the log file shows only these few lines:

[2023-09-04T19:36:26.580481Z INFO safenode] 
Running safenode v0.89.16
=========================
[2023-09-04T19:36:26.580494Z DEBUG safenode] Built with git version: 8faf662 / main / 8faf662
[2023-09-04T19:36:26.580497Z INFO safenode] Node started with initial_peers ["/ip4/139.59.181.245/tcp/37569/p2p/12D3KooWPHE8qcKL4CB2n8QvPpE25TRsP9nmkfeM6Qa61aAsokib"]
[2023-09-04T19:36:26.581221Z INFO safenode] Starting node ...
[2023-09-04T19:36:26.582038Z INFO sn_networking] Node (PID: 4119) with PeerId: 12D3KooWQRRdsFPFD2T1kavUvYk4CtRQ4yBWw9Cw444KgiaSem1p
[2023-09-04T19:36:26.582051Z INFO sn_networking] PeerId: 12D3KooWQRRdsFPFD2T1kavUvYk4CtRQ4yBWw9Cw444KgiaSem1p has replication interval of 398.69626607s
[2023-09-04T19:36:26.582084Z DEBUG sn_networking] Using Kademlia with NodeRecordStore!
[2023-09-04T19:36:26.582591Z DEBUG sn_networking] Preventing non-global dials
[2023-09-04T19:36:27.314804Z DEBUG sn_logging::metrics] {"physical_cpu_threads":8,"system_cpu_usage_percent":1.5873017,"system_total_memory_mb":8251.675,"system_memory_used_mb":2101.2603,"system_memory_usage_percent":25.464653,"process":{"cpu_usage_percent":1.4109347,"memory_used_mb":15.028224,"bytes_read":0,"bytes_written":0,"total_mb_read":0.0,"total_mb_written":0.077824}}
1 Like

Tinkering with the -c value. Uploading 5MB files of random data, a different file each time.

with c == 5 it takes 29 seconds (after payment).

with c == 10 it takes 17 seconds

with c == 12 it takes 16 seconds

Thereafter with c == 20, 30, 40, 50 … there's no improvement.

But if you upload the same file twice, the stated upload times and payment times are quicker the second time.

root@localhost:/home/user/safe-network# safe files upload -c 10 5mb9
After 18.224551441s, All transfers made for total payment of Token(176) nano tokens.
Uploaded "5mb9" in 17 seconds

Second time

After 12.793562192s, All transfers made for total payment of Token(176) nano tokens.
Uploaded "5mb9" in 8 seconds

All manual tinkering so far and nothing at scale, but uploading the same file always seems to be quicker the second time. Third time is usually same as second time. Transaction times seem more variable.

This variation in upload times could be a way to find out if a file has already been uploaded - not sure that matters, but I seem to remember it was discussed before.
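For anyone wanting to repeat this less manually, a rough bash sketch (assuming Linux dd for the random data and that your wallet holds enough tokens for the repeated payments):

# generate a fresh 5MB file of random data and time the upload at each -c value
for c in 5 10 12 20; do
  dd if=/dev/urandom of="5mb-c$c" bs=1M count=5 2>/dev/null
  echo "concurrency $c:"
  time safe files upload -c "$c" "5mb-c$c"
done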

9 Likes

I do have one node (out of 16) that belongs in DialNet; it is not getting any records! @joshuef

2 Likes

It would be more like: you upload, but as soon as your msg is away we start verifying that it exists… but as it already does exist, well that’s very fast.


DialNet or RewardNet? They are not compatible.

4 Likes

Not even more metrics logs? Is the process still alive?

4 Likes

8 copies atm. The CLOSE_GROUP_SIZE is still 8 (we’ll be testing a reduction there assuming there’s nothing explosive in this testnet)

no-verify or not, the data uploaded should be the same… Although, saying that, verify does retry PUTs if things can't be verified after a time. So there would be more uploading there.

7 Likes

And that’s included in the quoted upload time?

3 Likes

Aye, I’m not sure how much of that info is useful atm.

I figured we’d put in what we could and see what folk find useful.

Definitely feels like overkill, but I think it also illustrates the concurrency well? :person_shrugging:

4 Likes

With verify on, yeh, we don't proceed to the next chunk until the previous one has been uploaded and verified.

4 Likes