[Offline] Another day another testnet

I’m uploading a file on repeat now. Should help fill it up.

#!/bin/sh
# keep re-uploading the same file forever
while true
do
    safe files put "my-randomly-selected-file"
done
exit 0

Of course it never exits; you have to kill it, or it will die on error.

Doh, just realized that won’t work … deduplication! I’ll have to add some bits … BRB.

3 Likes

Was about to say… :slightly_smiling_face:

Actually you uploaded 90 files! I got it all very fast :smiley:

3 Likes

Has anyone joined?

3 Likes

Yes, 89 plus the script that generates my random files :stuck_out_tongue:

2 Likes

I set a 20GB folder to upload last night before I crashed. Then when I got up I discovered the fusebox had tripped and taken out all the downstairs sockets, so I dunno if failed uploads still occupy node disk space…

I will keep uploading dirs of sub-10MB files.

I have a wannabe node doing the “Retrying after 30 seconds” dance, and I see a small memory leak: perhaps 1GB/hr max?
This graph shows ~30 mins since power-up.
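
In case anyone else wants to keep an eye on the same thing, this is roughly how I’m logging it. A rough sketch: it assumes the node process is called safenode (swap in whatever yours is named) and that a plain procps ps is available.

#!/bin/sh
# crude memory logger: record the node's RSS (in KiB) once a minute
# NOTE: assumes the node process is named "safenode"; change -C to match yours
while true
do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $(ps -C safenode -o rss=)" >> node-mem.log
    sleep 60
done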

3 Likes

Okay, here’s the new version:

#!/bin/sh
while true
do
    # random 16-character hex filename
    filename="$(cat /dev/urandom | tr -cd 'a-f0-9' | head -c 16)"
    # fill it with 10M of fresh random data so deduplication can't skip it
    head -c 10M </dev/urandom >"${filename}"
    safe files put "${filename}" | tee -a putlog.txt
    rm "${filename}"
done
exit 0

Seems to be working for me.

BTW, it creates a random 10MB file, uploads it, deletes it, then makes a new one.

4 Likes

Could you put the output of safe into a text file, with append, so that when each of the safe put commands finishes it adds to that file? Is it tee or something?

You can also use the ‘tee’ command to append output to an existing file rather than overwriting it. To do this, you would use the ‘-a’ option. For example, if you wanted to append the output of the ‘ps’ command to the end of the ‘listing.txt’ file, you would use the following command:
ps | tee -a listing.txt

Should I add | tee -a output.txt or > tee -a output.txt after the safe command?

OK, | tee -a output.txt works!
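
For anyone else wondering, it’s the | that does the work: it pipes safe’s output into tee, which both prints it and appends it to the file, whereas a plain redirect only writes to the file. Roughly (somefile and output.txt are just example names):

# print to the screen AND append to the log
safe files put somefile | tee -a output.txt

# append to the log only, nothing shown on screen
safe files put somefile >> output.txt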

My updated script based on Tyler’s is:

#!/bin/sh
while true
do
    filename="$(cat /dev/urandom | tr -cd 'a-f0-9' | head -c 16)"
    head -c 9M </dev/urandom >"${filename}"
    safe files put "${filename}" | tee -a safe-output.txt
    rm "${filename}"
done
exit 0

2 Likes

Should we all run that script to fill up the nodes and force the network to accept more nodes?

1 Like

Guessing it will help nodes join … but I wonder how long before they will want to restart the network. Maybe it will run another day or two?

2 Likes

Got my first upload error:

<ClientError: Timeout after 90s when awaiting command ACK from Elders for data address b5c15a..>
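
If it keeps happening I might wrap the put in a small retry loop so a timed-out file gets another chance before it’s deleted. A rough sketch, using the same names as the script above and assuming safe files put exits non-zero on that timeout (which I haven’t verified):

# retry a failed put a couple of times before giving up on this file
# (redirecting instead of tee so the loop sees safe's exit status;
#  assumes safe files put returns non-zero on the ACK timeout)
tries=0
until safe files put "${filename}" >> safe-output.txt 2>&1
do
    tries=$((tries + 1))
    [ "$tries" -ge 3 ] && break
    sleep 30
done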

3 Likes

Got my big upload running concurrently: one script giving each file its own container, and the other way using the safe sync command, but it could take a few days before I have any results from the sync operation.

How I would love a progress bar on the safe sync and the safe files put -r commands
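
In the meantime the closest I’ve got to a progress bar is just watching the upload log grow; something like the line below, assuming the put output is being tee’d into safe-output.txt as in the scripts above:

# refresh every 10s: how many lines have been logged, plus the last line safe printed
watch -n 10 'wc -l safe-output.txt; tail -n 1 safe-output.txt'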

4 Likes

I am currently using my version of Tyler’s script, with nodes waiting to join on 4 Hetzner VPSs that have 3 cores each! Let’s see if the continuous uploads will end the network or more nodes will be able to join!

3 Likes

I gave up on nodes for the moment; just trying to fill the existing ones up.

Got a question for anyone from the team.

If I split, say, 10,000 files into 10 directories, then created a container and synced each directory to the same container, say 4 directories at a time, would that get me round the 1024 register write limit? And possibly speed up the uploading?
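
For the splitting step I was thinking of something simple like the sketch below; files and part0…part9 are made-up names, and I’ve left out the safe sync side since I haven’t settled on the exact commands yet:

#!/bin/sh
# spread everything in ./files across 10 sub-directories, round-robin
i=0
for f in files/*
do
    d="part$((i % 10))"
    mkdir -p "${d}"
    mv "${f}" "${d}/"
    i=$((i + 1))
done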

1 Like

Might be beer o’clock on Saturday in Scotland now. Might be waiting a while. :rofl:

3 Likes

Now trying double the scripts in every VPS, so that’s 8 random-9MB-file upload scripts running!
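
In case it helps anyone copy the setup, I’m just launching each extra copy in the background from its own directory so the logs and temp files don’t mix; upload.sh here stands for whatever you’ve called the script:

#!/bin/sh
# start two background copies of the upload script, each in its own directory
for n in 1 2
do
    mkdir -p "run${n}"
    cp upload.sh "run${n}/"
    ( cd "run${n}" && nohup sh upload.sh >/dev/null 2>&1 & )
done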

2 Likes

It certainly is beer o’clock in Scotland right now :wink:

8 Likes

It’s going well, so I am gonna put a third script in every VPS! :stuck_out_tongue: :tada:

3 Likes

All my nodes trying to join get this message every 30 seconds:
Encountered a timeout while trying to join the network. Retrying after 30 seconds.

Is it normal? Why timeouts and not “the network is not accepting nodes”?

3 Likes

A while ago I was only getting the upload timeout about 1/4 of the time, but now it’s with every upload. What’s changed? Maybe too many of us are trying to upload to too few nodes; even though there is space, perhaps they are overloaded with requests?

Timeout after 90s when awaiting command ACK from Elders for data address.

3 Likes