If you grep the log file before it is deleted then the quote amount is in there. For the exercise it is rather useless other than watching a quote of 5 x 10^11 attos. IE billions or trillions of files per token
I tested it on a 4 core intel SBC
Set the rate accordingly. Maybe 15 seconds might work better
I am testing my script to upload 10MiB files (4 chunks incl datamap) and looking good. I am checking the address for the data map is extracted correctly
Then we can upload these test random files and a simple change to teh script to down load then.
By your morning it should be done
[EDIT: Uploader is being updated due to issue with too much variability with ant file upload to remove overlapping. Please scroll down for updated script when it is ready.]
Here is the script to upload 10 MiB random files to the network. Its slow since āant file uploadā command is slow. It takes like 300 to 500 seconds to complete.
Requirements to run
A funded wallet. At this time 1 ANT token is enough to upload trillions of such files. But the gas will be more significant at this time. I suggest funding the wallet with say 1 ant token and 0.002 to 0.01 ARB-ETH
The script will create the 10MiB semi random file if it doesnāt exist. This file is made up of text of mostly random numbers. For each file being uploaded a random header is prepended to the semi random file to make sure all files are unique.
The script writes (appends) the datamap address to a file called datamap.list
This datamap.list is to allow another script to exercise the downloading aspect of the network. People can add the datamap list to a post and a āgloabalā datamap.list file created containing all of them. We can even add datamap addresses from other public files that have been uploaded elsewhere by other people.
Suggest using the same directory as you used for the quote-exercise
#
#
# Script to upload small files (expect 3 chunks + datamap) to random nodes
#
# upload-exercise -r 1.234
#
# It generates a semi random data file that is small being less than 2000 bytes and has a standard file
# appended to it making it over 10MB and less than 12MB. This is then uploaded to the network and the
# datamap address is appened to a liting file for a download-exercise script to exercise downloads.
# This enables real testing of the network bandwidth and nodes. In a next level of exercising the network.
#
# Just one machine running this script will have little impact, but the more people that run it the
# better stress testing that will occur. It is not a be all end all stress test but one component,
# with another being the quoting of random files.
#
# The script will perform uploads at the rate requested by putting into the background the upload.
# Thus there will be overlapping uploadss occurring at the same time.
#
# This requires the filename used to be different for each quote requested and obviously
# needs cleaning up afterwards
#
#
#################
#
# parameters
# -r secs - rate in decimal seconds
# Each upload will be issued every (approx) seconds after the previous one
# Uploads are sent as a background task and output sent to /dev/null and the datamap
# addresses appended to the datamap list
#
#
#
#
#################
#
# NOTE: in addition to 3 chunks being quoted on there is a minimum of 5 nodes for each
# chunk that are asked to give quotes. IE a minimum of 15 nodes will be asked
# for a quote. Then 3 x 3 (9) nodes will be sent the chunks
#
# This will generate a lot more traffic across the network than is being experienced
# with near zero requests happening in this genesis network
#
#
#
########################################################################################################
# Some declarations and ensure sub directories are there
########################################################################################################
declare -i i j k uploadCnt
fillerFile="filler-file"
if [[ ! -d tmp ]]; then mkdir tmp; fi
########################################################################################################
# Check for rate of quoting - too fast and system will start falling over itself
########################################################################################################
if [[ ! -z $1 ]] && [[ ${1} == "-r" ]]
then
if [[ ! -z $2 ]] && ( [[ ${2} =~ ^[1-9][0-9]*[.]*[0-9]*$ ]] || [[ ${2} =~ ^0.[0-9]*$ ]] )
then
quoteRate="${2}"
else
echo "ERROR: invalid quoting rate supplied. Decimal number required" >&2
exit
fi
else
quoteRate="120.0"
fi
########################################################################################################
# Check the filler file has been setup, if not then make a semi random text file 10MiB long
########################################################################################################
if [[ ! -s $fillerFile ]]
then
echo "Building the 10MiB dummy file with semi random text. This will take a little time ...."
# make a string of 128 chars including NL being semi random
# duplicate 10 times making string 1280 bytes, much quicker to write out 1280 bytes than 128 bytes
# do this 8192 times writing that to a file making 10MiB file (binary sizing of course)
# NOTE: while nano seconds is fairly random the time between look is milliseconds so the first 3 digits
# is not random enough and thus are excluded to increase randomness - yea this is semi random anyhow
# so meh, and is way more than enough for this purpose since no randomness is essential for filler-file
# just nice to do it.
for ((i=0;i<8192;i++))
do
j=0; linex1="$( date +%N )"; linex1="${linex1:3}"; while (( j < 127 )); do linex1="${linex1}-FILLERFILE-$RANDOM$SECONDS$RANDOM$RANDOM"; j="${#linex1}"; done; linex1="${linex1:0:127}\n";
linex10="${linex1}${linex1}${linex1}${linex1}${linex1}${linex1}${linex1}${linex1}${linex1}${linex1}"
echo -n -e "${linex10}" >> $fillerFile
done
fi
########################################################################################################
########################################################################################################
#
# FUNCTION: do quote and clean up after itself. Meant to run in the background
#
########################################################################################################
########################################################################################################
#
# parameters
# 1 - quoteNo - Sequiental number supplied to keep runs separated
#
function doUpload {
declare -i i j k
if [[ -z $1 ]]; then echo "ERROR: error in parameters to uploading function ($1)" >&2; exit; fi
quoteFile="tmp/${1}.file"
logFile="tmp/${1}upload.logs"
# ensure files do not exist from a previos aborted run
rm -f $quoteFile $logFile
# make a semi random set of text
echo "$SECONDS $( date ) $RANDOM $RANDOM" > ${quoteFile}.header
echo "$(( $RANDOM * $RANDOM )) $( date +%s%N ) $RANDOM" >> ${quoteFile}.header
cat ${quoteFile}.header $fillerFile >> $quoteFile
# use client to request a quote and direct all output to temp file
ant --log-output-dest stdout file upload -p -x $quoteFile > $logFile 2>&1
read iat iaddr imapAddr ignore <<< $( grep -i "^at address: " $logFile )
if (( ${#imapAddr} == 64 ))
then
echo "$imapAddr" >> datamap.list
fi
# cleanup
rm -f $quoteFile ${quoteFile}.header $logFile
# exit background task
exit
}
########################################################################################################
########################################################################################################
#
# MAIN PROGRAM
#
########################################################################################################
########################################################################################################
uploadCnt="0"
while (( 1 == 1 ))
do
uploadCnt="$(( uploadCnt + 1 ))"
echo -e -n "\r$uploadCnt: "
doUpload $uploadCnt >/dev/null &
sleep $quoteRate
done
Last script for now is the download exercise script.
./download-exercise.bash -r 1.234
Uses a file datamap.list to get datamap addresses to download, one file address per line. This file is generated by the upload-exercise.bash script.
Alternatively the datamap.list file can be obtained from anther person or place. Can be their uploads only of many can be combined from many peopleās uploads.
download-exercise.zip (1.7 KB)
Again just extract to the directory created for any of the other scripts above
#
#
# Script to download files using the datamap addresses contained in datamap.list
#
# download-exercise -r 1.234
#
# This is the complementry script to the upload-exercise.bash Script
#
# It uses the datamap.list file to request files from teh network. These should be already uploaded public
# files uploaded by the person or a list obtained from a central source
#
# Uses the ant client to do the download and discards the download after its complete. This is not
# a script to get you files but to exercise the downloading capabilities of the network and provide
# some stresses to the nodes. Not much since the genesis network os doing sod all and 10 million
# nodes at the time of writing.
#
# Just one machine running this script will have little impact, but the more people that run it the
# better stress testing that will occur. It is not a be all end all stress test but one component,
# with another being the quoting of random files and also uploading of random files.
#
# The script will perform downloads at the rate requested by putting into the background the download.
# Thus there will be overlapping downloads occurring at the same time.
#
# This requires the filename used to be different for each download requested and obviously
# needs cleaning up afterwards
#
#
#################
#
# parameters
# -r secs - rate in decimal seconds
# Each download will be issued every (approx) seconds after the previous one
# Downloads are performed as a background task and output sent to /dev/null
#
#
########################################################################################################
# Some declarations and ensure sub directories are there
########################################################################################################
declare -i i j k downloadCnt
datamapFile="datamap.list"
if [[ ! -d tmp ]]; then mkdir tmp; fi
########################################################################################################
# Check for rate of quoting - too fast and system will start falling over itself
########################################################################################################
if [[ ! -z $1 ]] && [[ ${1} == "-r" ]]
then
if [[ ! -z $2 ]] && ( [[ ${2} =~ ^[1-9][0-9]*[.]*[0-9]*$ ]] || [[ ${2} =~ ^0.[0-9]*$ ]] )
then
downloadRate="${2}"
else
echo "ERROR: invalid download rate supplied. Decimal number required" >&2
exit
fi
else
downloadRate="120.0"
fi
########################################################################################################
########################################################################################################
#
# FUNCTION: do download and clean up after itself. Meant to run in the background
#
########################################################################################################
########################################################################################################
#
# parameters
# 1 - downloadNo - Sequiental number supplied to keep runs separated
# 2 - datamap addresses
#
function doDownload {
declare -i i j k
if [[ -z $2 ]] || (( ${#2} != 64 )); then echo "ERROR: error in parameters to download function ($1/$2)" >&2; exit; fi
if [[ -z $1 ]]; then echo "ERROR: error in parameters to download function ($1/$2)" >&2; exit; fi
downloadDir="tmp/${1}.file"
logFile="tmp/${1}download.logs"
# ensure files do not exist from a previos aborted run
rm -f $logFile
rm -f -r $downloadDir
# use client to request a quote and direct all output to temp file
ant --log-output-dest stdout file download $2 $downloadDir > $logFile 2>&1
# cleanup
rm -f $logFile
rm -f -r $downloadDir
# exit background task
exit
}
########################################################################################################
########################################################################################################
#
# MAIN PROGRAM
#
########################################################################################################
########################################################################################################
downloadCnt="0"
while read idataAddr ignore
do
if (( ${#idataAddr} != 64 )); then continue; fi
downloadCnt="$(( downloadCnt + 1 ))"
echo -e -n "\r$downloadCnt: $idataAddr "
doDownload $downloadCnt $idataAddr >/dev/null &
sleep $downloadRate
done < $datamapFile
echo "Finished processing the datamap list."
echo "Delaying 100 seconds finishing up to allow the last download to complete"
echo "Safe to ctrl-c now to exit"
sleep 100
From my testing of the scripts it seems the uploading script will be able to upload on the order of 10ās of millions of the 10MiB files for 1 ANT token and 0.01 ARB-ETH EDIT: seems that fees are quite variable and when i check that it must have been super cheap. Its not significantly higher and maybe 200-500 files for 0.01 ARB-ETH
Now 1 million of these 10MiB files uploaded will require a minimum of 50 TiB of storage across 10 million nodes. Or in terms of records 15 million chunks of approx 3.3 MiB and 5 million chunks of approx 2 KiB
That is a drop in the bucket for a network with 10 million nodes. This is a maximum of 2 records per node and when we hit 100 million nodes in a few months its 1 record per 5 nodes. And at 1 billion nodes its 1 in 50 nodes.
Realistically that is almost impossible number to reach with the few testers and length of time to upload. Even 10,000 files would take months to reach, so we are not in danger of spending too much or even filling the future network too much.
Advantage of doing this is that we exercise the network and make it more real.
EDIT: ant file upload is allowing me to upload 10 files an hour. Typically there is 3 overlapping files being uploaded. Some take 500 seconds and others 1/2 hour or more. Crazy [EDIT2: updated uploads per hour]
I am going to update the upload script. Last night it showed signs of extreme variability of upload speeds, and the overlapping was causing cascading, with perhaps 10 or so uploads at once even at 600 seconds gap and this seemed to also cause a high failure rate. That wastes time and tokens/eth
So this morning I am making the rate the maximum rate and uploads sequential only. No overlaps. If your computer is capable then make a 2nd directory and run a 2nd upload script in the 2nd directory and that way you have 2 uploads running in parallel. No cascading effects
Also after checking the blockchain it seems the ARB-ETH fee per file is around 0.0000081739 ETH ($0.02).
I am stuck on the ant file upload command that will not finish for me any more.
I am wanting an example of the output from a successful public file upload please.
I cannot complete the new script until I have the expected results
EDIT: Please a successful upload
Im getting this
Logging to directory: "/home/worker/.local/share/autonomi/client/logs/log_2025-02-23_03-28-49"
š Connected to the Network Uploading data to network...
Encrypting file: "neo-download.zip"..
Successfully encrypted file: "neo-download.zip"
Paying for 3 chunks..
1640: Error:
0: Failed to upload dir and archive
1: Failed to upload file
2: Error occurred during payment.
3: Cost error: MarketPriceError(ContractError(TransportError(ErrorResp(ErrorPayload { code: -32000, message: "execution reverted", data: None }))))
4: Market price error: ContractError(TransportError(ErrorResp(ErrorPayload { code: -32000, message: "execution reverted", data: None })))
5: server returned an error response: error code -32000: execution reverted
Location:
ant-cli/src/commands/file.rs:77
and lots of thisā¦
[2025-02-23T03:23:47.734959Z DEBUG ant_bootstrap::cache_store 263] Peer not found in cache to update: /ip4/132.145.147.163/udp/56685/quic-v1/p2p/12D3KooWKfKTPHiy5ig5te9gSXHJL83nZVbA2r7qw1ggtUqvdyWe
[2025-02-23T03:23:47.735495Z ERROR autonomi::client 221] Failed to dial addr=/ip4/37.27.63.189/udp/51134/quic-v1/p2p/12D3KooWC8v7zNms2DV8f4MDJEYyFy3jFj2pZd2jGtwXzdUTaghx with err: DialError(DialPeerConditionFalse(NotDialing))
[2025-02-23T03:23:47.736486Z DEBUG ant_bootstrap::cache_store 259] Updating addr status: /ip4/65.21.204.30/udp/60333/quic-v1/p2p/12D3KooW9tT6cMDRJQxAk2apyzoR6i1WF43c6YfaSY94v8EBvMu6 (success: true)
[2025-02-23T03:23:47.736498Z DEBUG ant_bootstrap::cache_store 263] Peer not found in cache to update: /ip4/65.21.204.30/udp/60333/quic-v1/p2p/12D3KooW9tT6cMDRJQxAk2apyzoR6i1WF43c6YfaSY94v8EBvMu6
[2025-02-23T03:23:47.736714Z DEBUG ant_bootstrap::cache_store 259] Updating addr status: /ip4/95.216.9.60/udp/17226/quic-v1/p2p/12D3KooWJcgtMrfnAQYxz5UtzU6RiVPgQKNXy5E7JZEcqYxzsxUV (success: true)
[2025-02-23T03:23:47.736721Z DEBUG ant_bootstrap::cache_store 263] Peer not found in cache to update: /ip4/95.216.9.60/udp/17226/quic-v1/p2p/12D3KooWJcgtMrfnAQYxz5UtzU6RiVPgQKNXy5E7JZEcqYxzsxUV
[2025-02-23T03:23:47.737418Z ERROR autonomi::client 221] Failed to dial addr=/ip4/213.136.68.81/udp/42504/quic-v1/p2p/12D3KooWEKt8pyxUuh1bhpKJAd2Ko9k6zDCeZkJyimZun3nnbSWJ with err: DialError(DialPeerConditionFalse(NotDialing))
[2025-02-23T03:23:47.738169Z ERROR autonomi::client 221] Failed to dial addr=/ip4/167.86.82.232/udp/11004/quic-v1/p2p/12D3KooWDmSqN9jAHkbZ1shSHRrRX6wfEwfAwFACDbSwmnNStWj4 with err: DialError(DialPeerConditionFalse(NotDialing))
[2025-02-23T03:23:47.738194Z ERROR autonomi::client 221] Failed to dial addr=/ip4/138.68.153.103/udp/37732/quic-v1/p2p/12D3KooWNQNtKU7inSdQ9ZNr89MKJ5gYvK:
Sorry Iāll keep trying
Well it took a while >10 mins for 64k but eventually completed
willie@gagarin:~/projects/maidsafe/formicaio$ ant file upload formicaio.db-shm
Logging to directory: "/home/willie/.local/share/autonomi/client/logs/log_2025-02-23_03-34-36"
š Connected to the Network Uploading data to network...
Encrypting file: "formicaio.db-shm"..
Successfully encrypted file: "formicaio.db-shm"
Paying for 3 chunks..
Uploading file: formicaio.db-shm (3 chunks)..
Successfully uploaded formicaio.db-shm (3 chunks)
Upload of 1 files completed in 286.142208148s
Uploading private archive referencing 1 files
Successfully uploaded: formicaio.db-shm
At address: 18313822053406796643
Number of chunks uploaded: 6
Number of chunks already paid/uploaded: 0
Total cost: 55883751444 AttoTokens
Hope that helps
EDIT Ah shit -you wanted a successful public uploadā¦this may take a wee while
Try this
willie@gagarin:~/projects/maidsafe/formicaio$ ant file upload -p formicaio.db-shm
Logging to directory: "/home/willie/.local/share/autonomi/client/logs/log_2025-02-23_04-01-34"
š Connected to the Network Uploading data to network...
Encrypting file: "formicaio.db-shm"..
Successfully encrypted file: "formicaio.db-shm"
Paying for 4 chunks..
Uploading file: formicaio.db-shm (4 chunks)..
Successfully uploaded formicaio.db-shm (4 chunks) to: 01bac83a06300991daed3ce69616146704f450df148b4e97c6f0d7f3e34a12a1
Upload of 1 files completed in 315.018929482s
Uploading public archive referencing 1 files
Successfully uploaded: formicaio.db-shm
At address: 210fc3b2ae05d2b7a856b9b89c430cea22e8f7d94041b4ccc3c10698a2fc97a8
Number of chunks uploaded: 5
Number of chunks already paid/uploaded: 3
Total cost: 27940966020 AttoTokens
Funny how when I add the -p flag it becomes 4 chunks and then we get
Number of chunks uploaded: 5
but on the non public upload it was
Number of chunks uploaded: 6
Iāll zip up the logs in case anyone wants to look at them @chriso ?
Remember the public file has the datamap chunk it uploads.
Thankyou for the output. I had to double check the key phrase I want is the
At address:
I was correct in the code. The line with
Successfully uploaded filename (n chunks) to :
would come out and then hung before the At address: line and it threw me thinking it maybe the datamap address
Yes the number of chunks uploaded is strange. I had 6 data chunks for a 3 chunk file. Odd indeed
Testing again.
The first upload went through in about 120 seconds.
ALL others have the client filling up the log file with all this connecting to nodes entries as if the client it scanning the whole network one node at a time
There has to be a bug somewhere
@Southside
After stuffing around with upload for many hours waiting and waiting and waiting I have decided to go back to overlapping uploads, but limit them to 3.
The method I am using is to take the rate (seconds) for starting uploads and multiply by 3 to come up with a timeout value
So if you choose 200 seconds (default) then each upload has a maximum of 600 seconds to complete. I found that most uploads of 3/4 chunks will complete in that time or will just sit there wasting time and resources and eventually come back with failure. Typically 45 minutes to an hour waiting only to say it failed. So far this is guaranteed to happen. Maybe others will find 800 seconds is that spot, then they can choose 266 seconds.
Its the best at the moment till they fix the connection issues causing this stupidity of wasting time to eventually say it failed.
If people want to try more parallel uploads then run the script in 2 directories or change the 3 to 4 or 5 which extends the time.
upload-exercise.zip (2.9 KB)
Rather than waste forum reading space please download and cat the file after unzipping to ensure it is not doing nasty things.
Run it by
./upload-exercise.bash -r 250
I suggest using a rate of 250 or more. Uploads have to complete in 3 times the rate or else the upload is considered already failed and it just doesnāt realise it yet.
I added a count down timer till next upload is started so that you can know the upload script is still alive
See this post for details on the download exercise script Testing - #45 by neo
I have a datamap.list file to go with it.
If you already have one, then post it in this topic first then append this list to it.
If you do not have one then this list has 425 files you can start off with to exercise the network
unzip this (approx 420 file addr) datamap.zip (15.9 KB)
EDIT: updated list of 512 file addresses and no, not intended to be 512
just where I got too before gas kill me
datamap.zip (19.1 KB)
And in case you donāt want to go back to the post for the downloading script here it is
download-exercise.zip (1.7 KB)
I am not after anything other than getting as many people as possible involved in this network exercising and testing. That list cost real money 40AUD so please make that 40AUD proud and join in the quote/upload/downloading exercising. I am still uploading test files so keep an eye out for a new datamap,list file coming to a topic near you
Unzip to which directory?
The directory you create for this testing. Anywhere you have permissions to do so. I just created one under my hone directory
At this time I would suggest people only be runing the quotes and download scripts.
And only run the upload script by having very small amount of ARB-ETH and top it up every so often, The upload script wonāt have problems if the arb-eth runs out before you top it up. That way you will not get cleaned out if the gas skyrockets for a few minutes. Maybe enough gas for 20 to 50 uploads (40 to 100 cents worth). Multiply your rate by number of files your gas should do and that is how often you would need to top up. EG at a rate of 10 minutes per file its 200 to 500 minutes for 20 to 50 files worth of gas.
worker@noderunner01:~/neo-testing$ fg
./download-exercise.bash -r 1.234
512: ffc40e65e76ec2d27e76e9fdea56c1a70ef05ce4679797058162bc8b9bfa2513 Finished processing the datamap list.
Delaying 100 seconds finishing up to allow the last download to complete
Safe to ctrl-c now to exit
worker@noderunner01:~/neo-testing$ ^C
Any reason why I should not put this on a loop?
Do it.
I left it as a once only and left it up to the person running it to decide.
Yea, just encase the script call inside a while forever loop
while (( 1 == 1 )); do ./download-exercise.bash -r 20; done
And it should just overwrite the same files in /tmp so no need to explicitly clean that out each run?
Or you can include rm -r tmp inbetween runs
while (( 1 == 1 )); do ./download-exercise.bash -r 20; sleep 200; rm -r tmp; done