Don’t laugh, but I wrote something!
Now that I’ve obtained a few nanos from the node I hadn’t linked to my Discord ID, I can run safe files estimate
and resume what I was doing before: tracking the cost of uploading a single record.
This time I’m logging it to a comma-delimited file, with the eventual aim of logging directly into a database and publishing from there.
I’m logging the time in seconds since the epoch as that will be more flexible later.
I’m logging the start time and the end time of getting the quote because that varies and might be useful to track.
I’d welcome any suggestions on how to improve it!
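To give an idea of the format, each line of the log ends up as start-time,cost,end-time, something like this (the values here are made up):

1718035200,0.000000010,1718035223

The epoch timestamps can be converted back to something readable later with e.g. date -d @1718035200.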
#!/bin/bash
# https://safenetforum.org/u/storage_guy/
# v1.2
# Description
# This script:-
## creates a small file of random data, just under the minimum size of a record, in a temp directory
## gets a cost estimate for uploading the file to the Safe Network (it no longer actually uploads; see v1.1)
## logs the start and end times of the estimate command
#
# It does this indefinitely
#
# There is a pause of 60 seconds after each execution.
# This is not quite the same as getting one estimate a minute, as the time to get an estimate varies:
# sometimes it's 20 seconds and sometimes more than a minute.
# Getting a quote at exactly regular intervals would be impractical to arrange.
# The purpose is to track the cost of storing a single record over time.
# and also to track the time to get a quote as it varies a lot and can indicate network health.
#
# It's a different file every time and in a small network the cost of file uploads will vary a lot.
# Over enough time though it will provide an indication of how the cost changes over time.
# Whether by design or by accident, you can't get an estimate without a balance in the client wallet!
# The balance can be minimally small
#
# 0.000000001 will do
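# You can check the wallet first (assuming your version of the safe CLI has the wallet subcommand):
# safe wallet balance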
# Instructions
# Run inside screen or as a background task.
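# e.g. with screen, assuming you saved the script as safe_cost_tracker.sh:
# screen -dmS cost_tracker bash safe_cost_tracker.sh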
# ToDo
# Have the output go directly into a simple database (a rough sqlite3 import sketch follows the script)
# Changelog
# v1.2
# Changed the estimate output to log just:-
## time of command start in seconds since the epoch
## Transfer cost
## time of command end in seconds since the epoch
#
# Removed the logging of the start and ending of the script
# v1.1
# Changed 'upload' to 'estimate' in the safe command as there is no need to upload the data to get the cost.
#
# Removed the line to rm everything in /tmp because chunk artifacts aren't stored there now
#
# Added a line to rm everything in $HOME/.local/share/safe/client/chunk_artifacts
# as that is where chunk artifacts are stored and otherwise it fills the disk.
# v1.0
# Initial release
# Based on safe_upload_stressor_1.8
# Declare which directories to use.
TEMP_DIR=$HOME/safe_cost_tracker/temp
LOG_DIR=$HOME/safe_cost_tracker/logs
# Declare which files to use
TEMP_FILE=temp_file_for_estimate
LOG_FILE=safe_cost_tracker.log
# Create the directories
mkdir -pv $TEMP_DIR $LOG_DIR
# Remove files in temp dir in case script terminated early last time
rm -f $TEMP_DIR/*
# Loop
while true
do
# Create the file
echo 'Creating temp_file_for_estimate'
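# 400 x 1 KiB blocks = 400 KiB of random data, below the single-record size mentioned above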
dd bs=1024 count=400 </dev/urandom > $TEMP_DIR/$TEMP_FILE
# Log the time
# Get a cost estimate for the upload of the file
# Log the time again
# Output to a file
echo 'Getting an estimate for uploading the file' $TEMP_DIR/$TEMP_FILE
START_TIME=$(date +%s)
COST=$(safe files estimate $TEMP_DIR/$TEMP_FILE | grep Transfer | sed 's/Transfer cost estimate: //' | tr -d '\n')
END_TIME=$(date +%s)
echo "$START_TIME,$COST,$END_TIME" >> $LOG_DIR/$LOG_FILE
# Cleanup for next run
echo 'Cleaning up for next run'
rm -f $TEMP_DIR/*
rm -rf $HOME/.local/share/safe/client/chunk_artifacts/*
# Sleep 60 seconds
echo 'Sleeping for 60 seconds'
echo
sleep 60
done
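For the database ToDo, here's a rough one-off sketch of pulling the log into sqlite3 (the database filename, table name, and column names are just my own choices, and it assumes the sqlite3 command-line tool is installed):

sqlite3 $HOME/safe_cost_tracker/costs.db <<EOF
CREATE TABLE IF NOT EXISTS estimates (start_ts INTEGER, cost TEXT, end_ts INTEGER);
.mode csv
.import $HOME/safe_cost_tracker/logs/safe_cost_tracker.log estimates
EOF

Note that re-running the import would append duplicate rows, so this is a one-off; a proper version would insert each line into the database as it's logged instead.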