yup, that is correct
Needless to say, but let us know how it goes! I might try to join later today if you have any success.
I have not had any issues with port forwarding in any of the recent tests; I ask because I have not been running many nodes per machine.
10 nodes up, all getting chunks!
The year of testnets continues. YAY
Could you be so kind and post the exact commands how you do it, please? I’d melt away and join the oceans soon after, if I had a node running from home.
(Just the commands for Safe, I can handle the setup of my router myself.)
Sure np.
Assuming you have opened good old 12000 on your router, this is what I do on Linux. I don't know whether there is an equivalent for nohup on Windows.
export SAFE_PEERS="/ip4/178.128.45.252/tcp/32923/p2p/12D3KooWRokYkFYg698Wk1fm7RcDGj4tJ9dsdgDx6FgSypuhm8Pm"
export SN_LOG=all
nohup safenode --log-output-dest data-dir --port=12000 &
Does this seem right?
topi@topi-HP-ProBook-450-G5:~$ nohup safenode --log-output-dest data-dir --port=12000 &
[1] 12319
topi@topi-HP-ProBook-450-G5:~$ nohup: ignoring input and appending output to 'nohup.out'
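Looks right to me. One quick way to confirm the backgrounded process is still alive (a sketch; 12319 is the PID your shell printed above, substitute your own):

```shell
# Check the backgrounded node is still running. 12319 is the PID
# printed by the shell above; replace it with yours.
ps -o pid,etime,cmd -p 12319
# Or match by process name instead of PID:
pgrep -a safenode
```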
Do I have any way to make sure if I’m in or not?
The stuff in the record_store folder, those are chunks, right? I have five of them. Got those right at the beginning, but no more after ten minutes.
In some previous versions, months ago, I was getting some chunks but was not able to really connect. How can I tell the difference now?
From logs at some point:
[2023-08-31T13:05:23.982317Z INFO sn_networking::event] Connected peers: 787
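If you want to pull that number out without tailing the whole log, something like this works (a sketch; point it at wherever your --log-output-dest actually writes, the path shown in the comment is hypothetical):

```shell
#!/bin/bash
# Print the most recent "Connected peers" line from a node log.
# Example path (hypothetical, adjust to your setup):
#   ~/.local/share/safe/node/<peer-id>/logs/safenode.log
last_peer_count() {
    grep 'Connected peers' "$1" | tail -n 1
}
```

If the count keeps climbing over time, you're in.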
I run this script; it gives me a fair idea of what is going on.
(If you run multiple nodes it will display the data for each)
This is what the output looks like.
Timestamp: Thu Aug 31 09:13:43 AM EDT 2023
Node: 12D3KooWPB31RS7gLkRD6grQBEJKHcx3myaQddQT5BKT6GE3Rh3p
PID: 26168
Memory used: 51.8984MB
CPU usage: 3.1%
File descriptors: 1186
IO operations:
rchar: 10851181
wchar: 46466301
syscr: 44243
syscw: 168900
read_bytes: 0
write_bytes: 46739456
cancelled_write_bytes: 0
Threads: 7
Records: 16
Disk usage: 7.4MB
#!/bin/bash
# Report per-node stats for every safenode under the default data dir.
base_dir="$HOME/.local/share/safe/node"

# Map each node directory name (the peer ID) to its PID.
declare -A dir_pid
for dir in "$base_dir"/*; do
    if [[ -f "$dir/safenode.pid" ]]; then
        dir_name=$(basename "$dir")
        dir_pid["$dir_name"]=$(cat "$dir/safenode.pid")
    fi
done

for dir in "${!dir_pid[@]}"; do
    pid=${dir_pid[$dir]}
    echo "------------------------------------------"
    echo "Timestamp: $(TZ='America/New_York' date)"
    echo "Node: $dir"
    echo "PID: $pid"

    # Resident memory, converted from KB to MB.
    mem_used=$(ps -o pid,rss -p "$pid" | awk 'NR>1 {print $2/1024 "MB"}')
    echo "Memory used: $mem_used"

    cpu_usage=$(ps -p "$pid" -o %cpu | awk 'NR>1 {print $1"%"}')
    echo "CPU usage: $cpu_usage"

    file_descriptors=$(ls "/proc/$pid/fd/" | wc -l)
    echo "File descriptors: $file_descriptors"

    echo "IO operations:"
    cat "/proc/$pid/io"

    threads=$(ls "/proc/$pid/task/" | wc -l)
    echo "Threads: $threads"

    # Stored records (chunks) and the disk they occupy.
    record_store_dir="$base_dir/$dir/record_store"
    if [ -d "$record_store_dir" ]; then
        records=$(ls -1 "$record_store_dir" | wc -l)
        echo "Records: $records"
        disk_usage=$(du -sh "$record_store_dir" | awk '{print $1}' | sed 's/M/MB/')
        echo "Disk usage: $disk_usage"
    else
        echo "$dir does not contain record_store"
    fi
    echo
done
echo "------------------------------------------"
Cool, thanks! Seems to be working, but I’m still not sure.
Question: does read_bytes = sending chunks?
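For what it's worth, and this is general /proc knowledge rather than anything safenode-specific: read_bytes counts bytes the process caused to be fetched from the storage layer (actual disk reads), while rchar counts everything returned by read-style syscalls, page-cache hits included. So read_bytes on its own doesn't directly mean "chunks being served"; a chunk read from the page cache and sent out wouldn't bump it. You can eyeball the counters for any process, e.g. your own shell:

```shell
# /proc/<pid>/io for the current shell. read_bytes = bytes fetched
# from the storage layer; rchar = all bytes returned by read-style
# syscalls, including page-cache hits.
cat /proc/self/io
```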
Timestamp: to 31.8.2023 09.59.46 -0400
Node: 12D3KooWKpB3KuUviu2UxJKLuGXoArPv7CvW5jmtmbVS2md5a7o2
PID: 12319
Memory used: 53.8047MB
CPU usage: 4.1%
File descriptors: 1662
IO operations:
rchar: 13984051
wchar: 408927581
syscr: 82439
syscw: 320535
read_bytes: 40960
write_bytes: 35508224
cancelled_write_bytes: 0
Threads: 11
Records: 7
Disk usage: 3,0MB
Ok this is a freaky coincidence.
I downloaded this file; not knowing what it was, I called it mah
Then I opened it, assuming it was a picture, and guess what:
What are the bloody chances?
I’ll take it as a good omen for SAFE!
edit: ok, not quite like that… turns out the file I was opening is one I already had in the directory I run the command from… still a series of weird coincidences; let's not get in the way of a good story!
It is a PE binary.
Are your psychic powers for hire at all?
vdash it, man.
As often happens, there actually was a logical explanation (see spoiler above)… lol
Is there any way to have vdash load all the log files from all the different nodes easily, without having to specify each path individually?
It's a proper pain now, having 30 nodes on a system.
Alright, I’ll think about it.
First I need to install Rust, right?
You can pass multiple node logfiles using a wildcard path.
So if all your nodes have a common root directory, you can use an asterisk in the path instead of the node ID.
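A sketch of what that looks like, assuming the default data-dir layout (the printf here only demonstrates what the shell expands the glob into; vdash just receives one argument per matching file):

```shell
#!/bin/bash
# Hypothetical default layout:
#   ~/.local/share/safe/node/<peer-id>/logs/safenode.log
# The shell expands the * before the program runs, so the real call
# would simply be:
#   vdash "$HOME"/.local/share/safe/node/*/logs/safenode.log
glob_demo() {
    printf '%s\n' "$1"/*/logs/safenode.log
}
glob_demo "$HOME/.local/share/safe/node"
```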
As above: failing to upload large files / dirs / large numbers of chunks.
willie@gagarin:~/projects/maidsafe/safe_network$ time safe files upload /fgfs/Aircraft/A320-family/
Built with git version: 794fca7 / main / 794fca7
Instantiating a SAFE client...
🔗 Connected to the Network Loaded wallet from "/home/willie/.local/share/safe/client/wallet" with balance Token(199999999184)
Preparing (chunking) files at '/fgfs/Aircraft/A320-family/'...
Making payment for 4608 Chunks that belong to 976 file/s.
Error: Transfer error Not enough balance, 199.999999184 available, 316096398.846066688 required
Caused by:
Not enough balance, 199.999999184 available, 316096398.846066688 required
Location:
sn_cli/src/subcommands/wallet.rs:305:26
real 0m32.759s
user 0m37.371s
sys 0m11.227s
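Dividing the two figures from that error message gives a sense of the quoted price (just arithmetic on the numbers above, nothing official about the pricing model):

```shell
# Required tokens / chunk count from the failed upload above.
awk 'BEGIN { printf "%.1f tokens per chunk\n", 316096398.846066688 / 4608 }'
# → 68597.3 tokens per chunk
```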
Smaller uploads seem OK; I uploaded and then downloaded a dir with ~50 files of total size ~8MB without problems.
Failing to get a node running from AWS
My inbound rule is
– sgr-053885758c5104e5a 12000 - 12020 TCP 0.0.0.0/0 SAFE port
which has worked OK in the past - only other rule is allowing SSH to my desktop IP only.
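One quick sanity check from a machine outside AWS is to see whether the port actually answers; a sketch using bash's /dev/tcp (ELASTIC_IP is a placeholder for your instance's public address):

```shell
#!/bin/bash
# Print "open" if a TCP connect to host:port succeeds within 3 s,
# "closed" otherwise. Usage: port_open ELASTIC_IP 12000
port_open() {
    timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null \
        && echo open || echo closed
}
```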