Long post follows… from a fun couple of hours playing.
So, I’ve made some progress on the back of this update, understanding which errors follow from different kinds of upload.
Below is the detail of three flavours of trigger:
A. ERRORs from uploading the same file more than once
B. ERRORs from LARGE files (1GB)
C. ERRORs from HUGE files (3GB+)
A. ERRORs from uploading the same file more than once
#AccessDenied is caused by uploading the same file more than once.
The FilesContainer is created anew each time, but the put of the file data itself barfs the AccessDenied error below.
# upload
FilesContainer created at: "safe://hnyynywzeag1rj869d4xpom8jpnamoejosdy9w85rx1e7nrr1rqhwt6paobnc"
E ./to-upload/file.dat <[Error] NetDataError - Failed to PUT Published ImmutableData: CoreError(Self-encryption error: Storage error: Data error -> Access denied - CoreError::SelfEncryption -> Storage(SEStorageError(Data error -> Access denied - CoreError::DataError -> AccessDenied)))>
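Since identical bytes map to the same immutable data, one way to sidestep this in a test script is to skip a put whose exact content has already gone up. A rough sketch using the script’s ./to-upload and ./zzz_log layout; the uploaded_hashes log and the skip policy are my own invention here, not safe-cli behaviour:

```shell
# Workaround sketch: skip the put when this exact content was uploaded before.
# The uploaded_hashes log is invented for this sketch, not a safe-cli feature.
mkdir -p ./zzz_log ./to-upload
[ -f ./to-upload/file.dat ] || dd if=/dev/urandom of=./to-upload/file.dat bs=1k count=6000 2>/dev/null
UPLOADED_LOG=./zzz_log/uploaded_hashes
touch "$UPLOADED_LOG"
HASH=$(sha256sum ./to-upload/file.dat | cut -d' ' -f1)
if grep -q "^$HASH$" "$UPLOADED_LOG"; then
    echo "skip: identical content already uploaded"
else
    echo "$HASH" >> "$UPLOADED_LOG"
    safe files put ./to-upload/file.dat || echo "put failed (or safe not on PATH)"
fi
```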
B. ERRORs from LARGE files (1GB)
#Three different errors tonight on notionally the same upload
[2020-03-26T22:06:09Z ERROR safe] safe-cli error: Failed to connect: [Error] InvalidInput - Failed to decode the credentials: InvalidMsg
[2020-03-26T22:18:59Z ERROR safe] safe-cli error: [Error] NetDataError - Failed to PUT Sequenced Append Only Data: CoreError(RequestTimeout - CoreError::RequestTimeout)
Third attempt worked fine
- ./to-upload/file.dat safe://hbkyyodicekb4febtk3j7c7r9rky4wxprwbz898m71c7imh9qiozrpu7k7
I haven’t time to do this over again tonight but expect to see the same from the same steps… I’m tired, and I didn’t see anything that should have prompted the InvalidInput instance.
Note that the attempt which succeeded likely did so only because it finished before the RequestTimeout fired.
So, for the RequestTimeout attempt, I saw times of
real 9m31.551s
user 4m59.597s
sys 0m38.782s
whereas the successful upload was just quicker, at
real 8m15.798s
user 5m11.232s
sys 0m37.689s
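If the ceiling really is a client-side RequestTimeout, one way to watch how close each run gets is to record wall time per attempt. A sketch using bash’s SECONDS counter; the ~570s threshold is my guess from the 9m31s failure, not a documented value:

```shell
# Flag uploads that approach the apparent request-timeout ceiling.
# TIMEOUT_GUESS is inferred from the 9m31s failure above, not documented.
TIMEOUT_GUESS=570
START=$SECONDS
safe files put ./to-upload/file.dat || echo "put failed (or safe not on PATH)"
ELAPSED=$((SECONDS - START))
echo "attempt took ${ELAPSED}s"
if [ "$ELAPSED" -ge "$TIMEOUT_GUESS" ]; then
    echo "warning: within range of the observed timeout"
fi
```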
1of3 attempt at 1GB =>
############
file: 1
size: 977M
# upload
[2020-03-26T22:06:09Z ERROR safe] safe-cli error: Failed to connect: [Error] InvalidInput - Failed to decode the credentials: InvalidMsg
real 0m0.009s
user 0m0.000s
sys 0m0.005s
# vault size
safe-vault-1 188K
safe-vault-2 188K
safe-vault-3 180K
safe-vault-4 124K
safe-vault-5 124K
safe-vault-6 68K
safe-vault-7 100K
safe-vault-8 100K
2of3 attempt at 1GB =>
############
file: 1
size: 977M
# upload
[2020-03-26T22:18:59Z ERROR safe] safe-cli error: [Error] NetDataError - Failed to PUT Sequenced Append Only Data: CoreError(RequestTimeout - CoreError::RequestTimeout)
real 9m31.551s
user 4m59.597s
sys 0m38.782s
# vault size
safe-vault-1 882M
safe-vault-2 871M
safe-vault-3 863M
safe-vault-4 879M
safe-vault-5 880M
safe-vault-6 875M
safe-vault-7 881M
safe-vault-8 68K
3of3 attempt at 1GB =>
############
file: 1
size: 977M
# upload
FilesContainer created at: "safe://hnyynywmnewjdqyqq1oetc5o577gok6irqs95zi9ryqceiprmm1xpkgemobnc"
+ ./to-upload/file.dat safe://hbkyyodicekb4febtk3j7c7r9rky4wxprwbz898m71c7imh9qiozrpu7k7
real 8m15.798s
user 5m11.232s
sys 0m37.689s
# vault size
safe-vault-1 981M
safe-vault-2 981M
safe-vault-3 100K
safe-vault-4 100K
safe-vault-5 981M
safe-vault-6 100K
safe-vault-7 100K
safe-vault-8 68K
############
C. ERRORs from HUGE files (3GB+)
#Repeatable error
The script I use creates a delay at the start, which is simply the creation of the random data file. Beyond that, this huge-file attempt fails quickly, as below, with the vaults hardly starting to fill. Watching the System Monitor, it is obvious that RAM and swap are chewed through at a rate of knots before it fails.
attempt at 3GB =>
############
file: 1
size: 2.9G
# upload
Well, this is embarrassing.
safe-cli had a problem and crashed. To help us diagnose the problem you can send us a crash report.
We have generated a report file at "/tmp/report-325577f4-6d7e-44fe-ac47-e9ceb4e77f7d.toml". Submit an issue or email with the subject of "safe-cli Crash Report" and include the report as an attachment.
- Authors: bochaco <gabrielviganotti@gmail.com>, Josh Wilson <joshuef@gmail.com>, Calum Craig <calum.craig@maidsafe.net>, Chris O'Neil <chris.oneil@gmail.com>
We take privacy seriously, and do not perform any automated error collection. In order to improve the software, we rely on people to submit reports.
Thank you kindly!
[2020-03-26T22:36:01Z ERROR safe] safe-cli error: [Error] Unexpected - Failed to retrieve account's public BLS key: Unexpected (probably a logic error): send failed because receiver is gone
real 0m29.455s
user 0m13.479s
sys 0m5.309s
# vault size
safe-vault-1 168K
safe-vault-2 164K
safe-vault-3 100K
safe-vault-4 160K
safe-vault-5 100K
safe-vault-6 100K
safe-vault-7 100K
safe-vault-8 68K
Potentially the errors are relative to file size versus RAM (it seems to me the swap is simply swamped once the RAM fills, and that oversized file triggers exactly that).
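If that reading is right, a cheap pre-flight check could warn before attempting a put bigger than the memory currently available. A sketch reading /proc/meminfo (Linux only; the comparison and threshold choice are mine):

```shell
# Pre-flight sketch: warn when the file is larger than available memory,
# since the 3GB attempt appeared to chew through RAM and swap before failing.
mkdir -p ./to-upload
FILE=./to-upload/file.dat
[ -f "$FILE" ] || dd if=/dev/urandom of="$FILE" bs=1k count=6000 2>/dev/null
FILE_KB=$(du -k "$FILE" | cut -f1)
AVAIL_KB=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
if [ "$FILE_KB" -gt "$AVAIL_KB" ]; then
    echo "warning: ${FILE_KB}K file exceeds ${AVAIL_KB}K available memory"
fi
```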
So, above was on
CPU(s): 4
Model name: Intel(R) Core™ i3-7100U CPU @ 2.40GHz
16254420 K total memory
10239996 K total swap
===================================
“Method”
My method is roughly noted below, and the script I use is below that.
# https://github.com/maidsafe/safe-api/blob/master/safe-cli/README.md#description
#rough:
#START HERE FRESH
./install.sh
safe auth install
safe vault install
#OR START HERE UPGRADE
safe update
safe auth update
safe vault install
#START-VAULTS
safe vault run-baby-fleming
safe auth start
safe auth create-acc --test-coins
#phrase/word
safe auth login
#phrase/word
safe
#in the interactive shell:
auth subscribe
#another terminal
safe auth
#back to subscribed terminal
#enter auth allow with the request number shown, e.g.
auth allow 9999999999
## stop with
safe vault killall
safe auth stop
# remove all vaults = delete the folders in ~/.safe/vault/baby-fleming-vaults/
GOTO START-VAULTS
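The stop/remove/restart steps above can be sketched as one reset snippet; VAULT_DIR matches the folder named in the note, and the safe calls are guarded so the snippet degrades gracefully when the CLI is not on PATH:

```shell
# Reset sketch: stop the network and clear local vault storage, then the
# method above goes back to START-VAULTS. Guarded for when safe is absent.
VAULT_DIR="$HOME/.safe/vault/baby-fleming-vaults"
if command -v safe >/dev/null; then
    safe vault killall
    safe auth stop
fi
mkdir -p "$VAULT_DIR"    # ensure the path exists before clearing it
rm -rf "$VAULT_DIR"/*    # delete every vault folder, as in the note above
```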
Simple bash script for uploads
#change count=6000 for a different size of file
#change while [ $COUNTER -lt 10 ]; do for a different number of attempts
#!/bin/bash
#Simple script to upload COUNTER x SIZE files and log result
## Setup
#Expects safe baby-fleming to be setup and running
mkdir ./zzz_log 2>/dev/null
mkdir ./to-upload 2>/dev/null
## Base state
#log base state
echo "### START" > ./zzz_log/report
date >> ./zzz_log/report
lscpu | grep -P 'Model name|^CPU\(s\)' >> ./zzz_log/report
vmstat -s | grep -P 'total memory|total swap' >> ./zzz_log/report
echo "# initial vault size" >> ./zzz_log/report
du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort >> ./zzz_log/report
## Standard file creation here
#dd if=/dev/urandom of=./to-upload/file.dat bs=1k count=6000 2>/dev/null
### CAUSES ERROR: using that line above atm causes this error:
### E ./to-upload/file.dat <[Error] NetDataError - Failed to PUT Published ImmutableData: CoreError(Self-encryption error: Storage error: Data error -> Access denied - CoreError::SelfEncryption -> Storage(SEStorageError(Data error -> Access denied - CoreError::DataError -> AccessDenied)))>
## Start
COUNTER=0
while [ $COUNTER -lt 10 ]; do
let COUNTER=COUNTER+1
## non standard file creation here
dd if=/dev/urandom of=./to-upload/file.dat bs=1k count=6000 2>/dev/null
echo "file: "$COUNTER
echo "############" >> ./zzz_log/report
echo "file: "$COUNTER >> ./zzz_log/report
echo "size: "$(ls -hs ./to-upload/file.dat | sed 's/^\([^ ]*\).*/\1/') >> ./zzz_log/report
echo "# upload" >> ./zzz_log/report
(time safe files put ./to-upload/file.dat ) &>> ./zzz_log/report
echo >> ./zzz_log/report
echo "# vault size" >> ./zzz_log/report
du -sh ~/.safe/vault/baby-fleming-vaults/* | sed 's#^\([^\t]*\).*/\([^/]*\)#\2\t\1#' | sed 's/genesis/1/' | sort >> ./zzz_log/report
echo "upload: "$COUNTER" complete"
done
date >> ./zzz_log/report
echo "### END" >> ./zzz_log/report
## Summary pivot
echo -ne "\tfile:\t0\tsize: 0\t#\t\t\t\treal\t0\tuser\t0\tsys\t0\t\t" > ./zzz_log/summary_table_report; tail -n +7 ./zzz_log/report | tr '\n' '@' | sed 's/############/\n/g' | sed 's/@/\t/g' | sed 's/file: /file:\t/' >> ./zzz_log/summary_table_report
exit
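The tr/sed pivot at the end is dense; the same file/size/real extraction can be expressed as a short awk program. A sketch producing one tab-separated row per attempt (the column choice is mine, so it is not a drop-in replacement for the full summary table):

```shell
# Alternative pivot sketch: one TSV row (file, size, wall time) per attempt.
# Reads the same ./zzz_log/report the script writes; columns are my choice.
mkdir -p ./zzz_log && touch ./zzz_log/report   # no-op when the script ran
awk '
    /^file: / { file = $2 }
    /^size: / { size = $2 }
    /^real/   { print file "\t" size "\t" $2 }
' ./zzz_log/report > ./zzz_log/summary_tsv
```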