I set a 20 GB folder to upload last night before I crashed out - then when I got up I discovered the fuse box had tripped and taken out all the downstairs sockets - so I don't know whether failed uploads still occupy node disk space…
I will keep uploading dirs of sub-10 MB files.
I have a wannabe node doing the "Retrying after 30" dance, and I see a small memory leak - perhaps 1 GB/hr at most?
This graph shows ~30 mins since power-up.
#!/bin/sh
# Loop forever: make a random 10 MB file, upload it, log the output, then delete it.
while true
do
    # random 16-character hex filename
    filename="$(tr -cd 'a-f0-9' </dev/urandom | head -c 16)"
    # fill it with 10 MB of random data
    head -c 10M </dev/urandom >"${filename}"
    # upload, showing the output on screen and appending it to putlog.txt
    safe files put "${filename}" | tee -a putlog.txt
    rm "${filename}"
done
exit 0
seems to be working for me.
BTW, it creates a random 10 MB file, deletes it after uploading, then makes a new one.
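One way to leave it running unattended, in case it helps (putloop.sh is just a placeholder for whatever name you saved the script under):

# start the loop in the background, immune to hangups, and follow the upload log
nohup sh putloop.sh >/dev/null 2>&1 &
tail -f putlog.txt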
Could you put the output of safe into a text file? With append as well, so that when each of the safe put commands finishes it adds to that file? Is it tee or something?
You can also use the ‘tee’ command to append output to an existing file rather than overwriting it. To do this, you would use the ‘-a’ option. For example, if you wanted to append the output of the ‘ps’ command to the end of the ‘listing.txt’ file, you would use the following command:
ps | tee -a listing.txt
Should I add | tee -a output.txt or > tee -a output.txt after the safe command?
OK, | tee -a output.txt works!
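For the record, it's the pipe form you want (somefile and output.txt are just placeholder names here):

# the pipe keeps the output on screen AND appends it to the log
safe files put somefile | tee -a output.txt

Using > tee -a output.txt would just redirect into a file literally named tee rather than running the tee command.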
My updated script, based on Tyler's, is:
#!/bin/sh
# Same loop as above, but with 9 MB files and a different log file.
while true
do
    filename="$(tr -cd 'a-f0-9' </dev/urandom | head -c 16)"
    head -c 9M </dev/urandom >"${filename}"
    safe files put "${filename}" | tee -a safe-output.txt
    rm "${filename}"
done
exit 0
Got my big upload running concurrently: one script gives each file its own container, and the other uses the safe sync command, but it could take a few days before I have any results from the sync operation.
How I would love a progress bar on the safe sync and the safe files put -r commands
I am currently running my version of Tyler's script, with nodes waiting to join on 4 Hetzner VPSes that have 3 cores each! Let's see whether the continuous uploads will end the network or more nodes will be able to join!
I have given up on nodes for the moment and am just trying to fill the existing ones up.
Got a question for anyone from the team:
If I split, say, 10,000 files into 10 directories, then created a container and synced each directory to the same container, say 4 directories at a time, would that get me round the 1024 register write limit? And possibly speed up the uploading?
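To make that concrete, here is roughly the pattern I have in mind - a sketch only: the directory names are placeholders, I'm guessing at the exact safe sync arguments, and xargs -P is a GNU/BSD extension rather than plain POSIX:

#!/bin/sh
# Sketch only: spread 10,000 files (assumed to sit in ./all) across dir01..dir10,
# then sync the directories 4 at a time. Adjust "safe sync" to match however
# the container was actually created.
i=0
for f in all/*
do
    i=$(( i % 10 + 1 ))
    dir="$(printf 'dir%02d' "${i}")"
    mkdir -p "${dir}"
    mv "${f}" "${dir}/"
done
# run at most 4 syncs in parallel
printf 'dir%02d\n' 1 2 3 4 5 6 7 8 9 10 \
    | xargs -n 1 -P 4 safe sync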
A while ago I was only getting the upload timeout about 1/4 of the time, but now it happens with every upload. What's changed? Maybe too many of us are trying to upload to too few nodes - even though there is space, perhaps they are overloaded with requests?
Timeout after 90s when awaiting command ACK from Elders for data address.