SAFE Network Dev Update - March 26, 2020

I think time is saying here that there were page faults, so it can’t (reliably) calculate a timing - presumably it isn’t comparable with the same operation run without page faults.
I have been having the evening off up till now; I’ll read your results in detail and see what I can replicate.

If you kick off your vaults with https://github.com/willief/change-nappies/blob/master/run-new-network.sh, that script will timestamp your existing logs and save them to /tmp/baby-fleming-logs-for-devs/safe-vault${DATETIME}-$i.log for i in genesis 2 3 4 5 6 7.

Then when you finish your tests, run that script again to save the last logs and start with a clean set of vaults.
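
For anyone curious, the log-saving step is roughly this - a rough sketch only, the linked run-new-network.sh is the real thing, and the paths assume the default baby-fleming layout:

# Sketch of the log-rotation idea (see run-new-network.sh for the actual script)
DATETIME=$(date +%Y%m%d%H%M%S)
mkdir -p /tmp/baby-fleming-logs-for-devs
for i in genesis 2 3 4 5 6 7; do
    cp ~/.safe/vault/baby-fleming-vaults/safe-vault-$i/safe_vault.log \
       /tmp/baby-fleming-logs-for-devs/safe-vault${DATETIME}-$i.log
done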

1 Like

First baby steps graphing data using DataVoyager as suggested by @happybeing

I’ll try again with some meatier test runs and explore the possibilities of this tool

11 Likes

So, I removed my post above, as it added no value…

On reflection, the order of errors spawned from duplicate files is clearly not sorted; it is just a symptom of the sometimes random order in which computers carry out actions.

All is well then and no non-reproducible errors were found.

There is a good result from the simple script below, which recursively uploads random files.
It might be useful later for the way it grabs logs once the upload is done… but I can’t find an error to make use of it just at the moment! :smiley:

#!/bin/bash 

# Simple script for recursive upload and capturing logs
# Spawns a new set of files for upload if an empty marker file doesn't exist at ./to-upload/empty

# Option to watch vaults in parallel with
# du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort

## Setup
# Expects safe baby-fleming to be set up and running
mkdir ./zzz_log 2>/dev/null
mkdir ./to-upload 2>/dev/null

## Start
timestamp=$(date +%Y%m%d%H%M%S)

if [ ! -e ./to-upload/empty ]; then
    echo "creating some files.."
    touch ./to-upload/empty
    maxA=10
    maxB=10
    blocksize="1k"
    count=1000
    A=0
    while [ $A -lt $maxA ]; do
        let A=A+1
        mkdir ./to-upload/$A
        B=0
        while [ $B -lt $maxB ]; do
            let B=B+1
            dd if=/dev/urandom of=./to-upload/$A/$B.dat bs=$blocksize count=$count 2>/dev/null
        done #B
    done #A
fi

echo "uploading.."
echo -n > "./zzz_log/report_$timestamp"
(/usr/bin/time -v safe files put ./to-upload/ --recursive ) &>> "./zzz_log/report_$timestamp"

echo "capturing logs.."
mkdir ./zzz_log/$timestamp
# Copy each vault's log, naming genesis as vault 1 to keep the sort order
for i in 2 3 4 5 6 7 8; do
    cp ~/.safe/vault/baby-fleming-vaults/safe-vault-$i/safe_vault.log ./zzz_log/$timestamp/${i}_safe_vault.log
done
cp ~/.safe/vault/baby-fleming-vaults/safe-vault-genesis/safe_vault.log ./zzz_log/$timestamp/1_safe_vault.log

echo "done."
exit
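
If you want several passes back-to-back, something like this works (assuming you’ve saved the script above as, say, ./upload_test.sh - the filename is just a placeholder):

chmod +x ./upload_test.sh
for run in 1 2 3; do ./upload_test.sh; done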
9 Likes

The artificial limit on the size of a single file suggested above tempted me into uploading a run of 1GB files… but that seems to stall at ~2.1GB in the vaults.

They are relatively quick up to 2.1GB, but then the vault sizes stop increasing, so I wondered if it had stalled.
Looking at the system monitor I caught the RAM 100% full and swap heading to overload, but then both changed to ~95%, with both heading to 100% as below. Since it was so close to the limit, I stopped the upload, as nothing obviously useful was occurring.

[screenshots: system monitor showing RAM and swap both near 100%]
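
For anyone wanting a record of the same thing without sitting over the system monitor, here is a rough sketch that snapshots memory and vault sizes while the upload runs (Ctrl-C to stop; paths assume the default baby-fleming layout):

# Log memory and vault sizes every 10 seconds alongside the upload
while true; do
    date
    free -h | grep -E 'Mem|Swap'
    du -sh ~/.safe/vault/baby-fleming-vaults/* 2>/dev/null
    echo "---"
    sleep 10
done >> ./zzz_log/memory_watch.log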

This second attempt spread nicely across 7 of 8 vaults… but that seems to be the more unusual case.

du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort
safe-vault-1	2.1G
safe-vault-2	2.1G
safe-vault-3	2.1G
safe-vault-4	2.1G
safe-vault-5	2.1G
safe-vault-6	2.1G
safe-vault-7	2.1G
safe-vault-8	68K

The first attempt saw the more usual 3 of 8 vaults fill, so the first pass at this ended as below.

$ du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort
safe-vault-1	2.1G
safe-vault-2	2.1G
safe-vault-3	100K
safe-vault-4	2.1G
safe-vault-5	100K
safe-vault-6	100K
safe-vault-7	100K
safe-vault-8	68K

I’ve got a copy of the logs from the vaults, but the detail in those ends near the start of the test, rather than covering anything useful about the work done or the end. The vaults are still active after the upload is stopped and it’s evident that they are still alive, with files jumping around as if they are being updated in those folders… but the size of them doesn’t change, except for one [./immutable_data.db] growing very slowly.
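
If it helps anyone reproduce this, a quick sketch for keeping an eye on which of those files are still changing (assumes the default vault location and that watch is installed):

# Refresh a listing of the immutable data stores every 30 seconds (Ctrl-C to stop)
watch -n 30 'find ~/.safe/vault/baby-fleming-vaults -name immutable_data.db -exec ls -lh {} +'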

The end of vault logs all look like a run of the same messaging:

INFO 2020-03-28T18:34:56.066546552+00:00 [src/vault.rs:335] Node(e91f24..): Not sent message to: 127.0.0.1:60488
INFO 2020-03-28T18:34:56.066558067+00:00 [src/vault.rs:335] Node(e91f24..): Not sent message to: 127.0.0.1:60488
INFO 2020-03-28T18:34:56.066569400+00:00 [src/vault.rs:335] Node(e91f24..): Not sent message to: 127.0.0.1:60488
INFO 2020-03-28T18:34:56.066580778+00:00 [src/vault.rs:335] Node(e91f24..): Not sent message to: 127.0.0.1:60488
INFO 2020-03-28T18:34:56.066591683+00:00 [src/vault.rs:335] Node(e91f24..): Not sent message to: 127.0.0.1:60488
INFO 2020-03-28T18:34:56.066603073+00:00 [src/vault.rs:335] Node(e91f24..): Not sent message to: 127.0.0.1:60488
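
A rough way to see how widespread that message is across the vaults (assuming the default log locations):

# Count the repeated "Not sent message" lines in each vault's log
for d in ~/.safe/vault/baby-fleming-vaults/safe-vault-*; do
    echo -n "$(basename "$d"): "
    grep -c 'Not sent message' "$d/safe_vault.log"
done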

Is this just an artificial limit near 2.1GB?.. :thinking:

8 Likes

This is all chiming with what I was seeing last week.
I also had to abort a run with large files earlier today.

2 Likes

Been thinking about this… I think I could maybe pull it through with a little help from you guys here.

The challenge is that I’m locked in with my spouse and our 8 month old baby, without a possibility to isolate into any specific room during the daytime. I’ve been trying to set up a hobby of folding some paper airplanes, but the fact is that I haven’t been able to follow the guides and fold a single plane without an interruption. And it is damn frustrating to be interrupted during something that you hardly understand and that’s potentially going to jam your computer anyway. So, trying to learn some computer stuff might still be too much, but on the other hand, I might want to have another baby, so I could possibly maybe start inching in that direction… Maybe they’ll get up and running about the same time?

So… I have a laptop with Linux Ubuntu 16.something. Should I update it to the newest Ubuntu? Or would it be good to keep the older Ubuntu running - adding some variance to the test pool? If I’m going to run some tests, how do I make it so that it gives maximum benefit to the project?

(And remember, it may just prove to be too frustrating to me, and I may quit any moment.)

7 Likes

How much memory and swap do you have?

Run free -h to quickly find out if you don’t already know.

Ubuntu 16 - most likely you have 16.04 - is OK. The upgrade to 18.04 is easy; just make sure you are running off the mains and not on battery, then in a few days you can go on to the latest and greatest. I am not sure there is much to be gained, from a testing PoV, by asking you not to upgrade if you wished. And 20.04 will be out soon too…
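
If you do go for it, the usual route is something like this (run it on mains power with everything else closed; it steps an LTS release to the next LTS, so 16.04 goes to 18.04 first):

sudo apt update && sudo apt upgrade   # get 16.04 fully up to date first
sudo do-release-upgrade               # upgrade to the next LTS release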

Disk space is another consideration: how much free space do you have?
df is your friend here, just ignore anything that says loop.

As for jamming your computer, it’s Linux, so it’s no real big deal - annoying, but extremely unlikely to cause any data loss, assuming you do not have important documents unsaved when you are running tests.

You can start with some simple scripts that set up your vaults and logging for you.

See change-nappies/run-new-network.sh at master · willief/change-nappies · GitHub, then do some actual testing with change-nappies/test_and_report.sh at master · willief/change-nappies · GitHub.

Let us know how you get on, more testers are always welcome :slight_smile:

2 Likes

Unless you’re limited by its being an old laptop, I would upgrade just for the avoidance of complications.

One useful option might be to cycle through the CLI options and test that those work as expected.
So, upload a file and check that dog lists its details and that cat can retrieve it.
safe help in the CLI lists a lot of options, and it’s not obvious which are complete and which are expected to work. For example, I just tried dog and cat… dog worked but cat couldn’t see the content.
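
Something like this is what I mean - a minimal sketch, where the XOR-URL is just a placeholder for whatever safe files put actually prints:

echo "hello safe" > ./hello.txt
safe files put ./hello.txt             # note the XOR-URL(s) in the output
safe dog safe://hypothetical-xorurl    # should list the details of the upload
safe cat safe://hypothetical-xorurl    # should print the file contents back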

2 Likes

@davidpbrown gives good advice above.
As for being interrupted, if you use the test scripts and choose a largeish number of runs, it’s really a case of kicking off the tests and letting them get on with it for anywhere between 5 mins and a couple of hours.

@Toivo there is no need to upgrade - in fact testing on different versions is a bonus - but if you do, back up if you can and do it with all apps closed (e.g. after a clean reboot).

I didn’t shut things down when I upgraded from 19.04 to 19.10 (I think) a few weeks ago and it trashed my hard drive. Luckily I keep backups.

2 Likes

Yes of course, have everything else shut down when doing a distribution upgrade. It’s not difficult, merely time-consuming, but it’s certainly not a multi-tasking job to run as you put the final touches to your PhD thesis…

Yes, self-encryption has a limit in there. It’s easy to move and was added as a control-the-world thing; in any case it needs fixing. Some iteration soon :wink:

11 Likes

Agreed. 16.04 is still supported for another year as it was an LTS (long term support) release.
https://wiki.ubuntu.com/Releases

Thanks for the update Maidsafe devs

UX/UI :+1: @JimCollinson

I’m trying but unfortunately here I get stuck at Build.
cargo build returns:

ubuntu@ubuntu:~/safe-api$ cargo build

Command 'cargo' not found, but can be installed with:

sudo apt install cargo

ubuntu@ubuntu:~/safe-api$ sudo apt install cargo
Reading package lists… Done
Building dependency tree
Reading state information… Done
E: Unable to locate package cargo
ubuntu@ubuntu:~/safe-api$ cargo build

Command 'cargo' not found, but can be installed with:

sudo apt install cargo

ubuntu@ubuntu:~/safe-api$

I honestly don’t know what I’m doing wrong here. It’s just frustrating to not be able to help with feedback through these updates… :sweat_smile:

5 Likes

OK, let me get breakfast and then I’ll try to work through this with you.

Meantime do

free -h
df -h
lscpu | grep -P 'Model name|^CPU\(s\)'
cat /etc/*-release | grep -P DISTRIB_DESCRIPTION
uname -mrs

so we know what we are dealing with

Oh and rustc -V

Do you have rust installed?

If not…

sudo apt update

sudo apt install build-essential ← if you haven’t already got this
sudo apt upgrade

curl https://sh.rustup.rs -sSf | sh

then choose the stable option and let it do its stuff

rustc -V once again should come back with version 1.41.1
(valid as of end March 2020, for anyone who is reading this in months to come)
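
One gotcha worth noting: cargo lives in ~/.cargo/bin, so after rustup finishes you may need to reload your shell environment before the commands are found:

source $HOME/.cargo/env    # or just open a new terminal
rustc -V
cargo -V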

8 Likes

You must install rust instead (which includes cargo). Use this command to do it:

  • curl https://sh.rustup.rs -sSf | sh

You also need to install some development tools (I am not sure of current prerequisites list, but they were needed in the past):

  • sudo apt update
  • sudo apt install build-essential
  • sudo apt install pkg-config
  • sudo apt install libssl-dev
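
If it’s easier, those can be combined into one line:

sudo apt update && sudo apt install build-essential pkg-config libssl-dev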
9 Likes

My result is more recent: rustc 1.42.0 (b8cedc004 2020-03-09)

Use rustup update to automatically upgrade rust to its latest version.

3 Likes

Hmmmmm I thought that I should install Rust like this:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

I’m not really familiar with the CLI :sweat_smile:

4 Likes

That’s what you get for guessing…
I’m actually on nightly, which gives 1.43-ish, but I thought that perhaps @19eddyjohn75 would be safer with stable. I should really be on stable myself; it’s not like I’m doing cutting-edge Rust stuff…

1 Like

It’s OK, we’ll soon get you there.