Safe Cost Tracker

Don’t laugh, but I wrote something!

Now that I’ve obtained a few nanos from the node I hadn’t linked to my Discord ID, I can run 'safe files estimate' and resume what I was doing before: tracking the cost of uploading a single record.

This time I’m logging it in a comma-delimited file, with the eventual aim of logging directly into a database and publishing from there.

I’m logging the time in seconds since the epoch as that will be more flexible later.

I’m logging the start time and the end time of getting the quote because that varies and might be useful to track.

I’d welcome any suggestions on how to improve it!
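
Since the timestamps are epoch seconds, a quick way to read one back as a normal date (with GNU date) is:

date -d @1723971897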

# https://safenetforum.org/u/storage_guy/

# v1.2

# Description

# This script:-
## creates a small file of random data just under the minimum size of a record in a temp directory
## uploads the file to safenetwork
## logs the time of the command execution start and finish
#
# It does this indefinitely
#
# There is a pause of 60 seconds after each execution.
# This is not quite the same as getting 1 estimate a minute as the time to get an estimate varies.
# Sometimes it's 20 seconds and sometimes more than a minute.
# Getting a quote at exact regular intervals would be impossible to arrange.

# The purpose is to track the cost of storing a single record over time,
# and also the time to get a quote, as that varies a lot and can indicate network health.
#
# It's a different file every time and in a small network the cost of file uploads will vary a lot.
# Over enough time though it will provide an indication of how the cost changes over time.


# Either by design or by accident, you can't get an estimate without a balance in the client wallet!
# It can be minimally small
#
# 0.000000001 will do


# Instructions

# Run inside screen or as a background task.


# ToDo
# Have the output go directly into a simple database


# Changelog

# v1.2
# Changed estimate output to output just:-
## time of command start in seconds since the epoch
## Transfer cost
## time of command end in seconds since the epoch
#
# Removed the logging of the start and ending of the script


# v1.1
# Changed 'upload' to 'estimate' in the safe command as there is no need to upload the data to get the cost.
#
# Removed the line to rm everything in /tmp because chunk artifacts aren't stored there now
# 
# Added a line to rm everything in /home/ubuntu/.local/share/safe/client/chunk_artifacts
# as that is where chunk artifacts are stored and otherwise it fills the disk.


# v1.0
# Initial release
# Based on safe_upload_stressor_1.8


# Declare which directories to use.
TEMP_DIR=$HOME/safe_cost_tracker/temp
LOG_DIR=$HOME/safe_cost_tracker/logs

# Declare which files to use
TEMP_FILE=temp_file_for_estimate
LOG_FILE=safe_cost_tracker.log

# Create the directories
mkdir -pv $TEMP_DIR $LOG_DIR

# Remove files in temp dir in case script terminated early last time
rm -f $TEMP_DIR/*

# Loop
while true

do

        # Create the file
        echo 'Creating temp_file_for_estimate'
        dd if=/dev/urandom of=$TEMP_DIR/$TEMP_FILE bs=1024 count=400

        # Log the time
        # Get a cost estimate for the upload of the file
        # Log the time again
        # Output to a file
        echo 'Getting an estimate for uploading the file' $TEMP_DIR/$TEMP_FILE
        (date +%s && echo ',' && safe files estimate $TEMP_DIR/$TEMP_FILE | grep Transfer | sed 's/Transfer cost estimate: //' && echo ',' && date +%s) | tr -d '\n' >> $LOG_DIR/$LOG_FILE ; echo >> $LOG_DIR/$LOG_FILE

        # Cleanup for next run
        echo 'Cleaning up for next run'
        rm -f $TEMP_DIR/*
        rm -rf $HOME/.local/share/safe/client/chunk_artifacts/*

        # Sleep 60 seconds
        echo 'Sleeping for 60 seconds'
        echo
        sleep 60

done
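
For the 'run inside screen or as a background task' part, a couple of options, assuming the script is saved as safe_cost_tracker.sh (a name picked just for this example):

# Run in a detached screen session named cost_tracker
screen -dmS cost_tracker bash safe_cost_tracker.sh

# Or as a background task that survives logout, with output captured to a file
nohup bash safe_cost_tracker.sh > safe_cost_tracker.out 2>&1 &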
10 Likes

Now that I look at the output I see there are some crazy prices again!

1723971897,0.000291286,1723971936

I’m wondering whether, instead of the file being random each time with no record of what it was, it would be helpful to procedurally generate it from a seed and save the seed, so that we can get a quote for the same file again and see whether it was a one-off or not.

Supposedly there is a fix coming for that.
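
As a rough sketch of the seed idea (using sha256sum from coreutils; the exact expansion scheme doesn't matter as long as it's deterministic), the file could be built from a logged seed like this:

# Pick a seed and expand it deterministically into roughly the same ~400KB as the dd version
SEED=$(shuf -i 1-1000000000000000 -n 1)
: > "$TEMP_DIR/$TEMP_FILE"
for i in $(seq 1 6300); do
        # Each iteration appends a 64-character hex digest plus newline (65 bytes), so 6300 iterations is about 400KB
        printf '%s' "$SEED-$i" | sha256sum | cut -d ' ' -f 1
done >> "$TEMP_DIR/$TEMP_FILE"
# Log $SEED alongside the quote; re-running the loop with the same seed rebuilds the identical file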

4 Likes

Another thing you can do is use $SECONDS:
timestart=$SECONDS
elapsedtime=$(( $SECONDS - timestart ))

That way you only need to run date once and can use $(( $SECONDS - timestart )) as the elapsed time, saving mental calcs when viewing.

And similarly for the sleep:
thistime=$SECONDS
… do stuff
sleep $(( 60 - ( $SECONDS - thistime ) )) # or $(( 60 - $SECONDS + thistime )) depending on your flavour of seeing maths

Maybe for the sleep you might need to check the elapsed time isn’t over 60 seconds - sleeping back in time doesn’t work well.
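
Putting both suggestions together, a sketch of how the loop body could look (names like loop_start are just illustrative):

loop_start=$SECONDS

# ... create the file, get the estimate and log it here ...

elapsed=$(( SECONDS - loop_start ))
echo "That took ${elapsed}s"

# Sleep only for whatever is left of the minute, and never for a negative amount
remaining=$(( 60 - elapsed ))
if [ "$remaining" -gt 0 ]; then
        sleep "$remaining"
fi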

3 Likes

This is actually a feature, not a bug; outsiders with no skin in the game should not even get one nano’s worth of attention for their inquiry…

2 Likes

No, it’s a bug. I could have one billion tokens, yet use other accounts, and one of them might not have anything in it, but I still need an estimate for some reason. Artificial restrictions based on what someone feels should restrict others are a bad way to set features.

1 Like

You can’t even read a Register without nanos - there is an issue for that so I hope it will be fixed in the next update.

3 Likes

In space without a spacesuit, we humans are nothing. This is not an artificial restriction, just a reality check of your environment. It’s not much to ask for somebody to farm or buy that 1 nano; that way they experience what others in the community had to go through to interact with the Network.

When you put such restrictions in because of someone’s version of reality, it is nothing like your spacesuit analogy. This is just an artificial restriction that resulted from a bug.

People will just edit the code to fix the bug.

But you are just trying to place an easily bypassed roadblock.

Also you restrict legit users with a lot of skin in the game too.

The network is supposed to be inclusive, not exclusive to those in the “club” of having nanos in that particular account they are currently using.

And why do you restrict those planning on uploading yet don’t restrict those viewing data? Mixed standards

3 Likes

Good points, @neo.

However, as the resident threat analyst, do you see a risk of DoS from a team of bad actors all firing in many estimate requests within a few seconds of each other?

2 Likes

Yes, there is some threat. The smaller the network the worse it is.

I wonder if nodes have a limit to the number of quote requests per second they will allow to be processed.

As for putting in an artificial restriction such as this bug, it will do no good, since an attacker will just use a modded client to bypass it.

4 Likes

I can see some absolutely crackers quotes in the log. Here are the top 20:-

awk -F ',' '{print $2}' safe_cost_tracker.log | sort -n | tail -n 20
0.624276521
0.738305257
0.839330431
0.942143903
1.109142227
1.493282570
1.640507611
1.749133460
1.870441247
2.448832922
2.448832935
2.610735860
3.258308126
3.340281948
4.218507220
4.913173468
4.918344676
33.169590896
66.647502518
348.530937373
5 Likes
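
The same log also has the quote times in it (start and end epoch seconds in columns 1 and 3), so something like this gives a rough average time to get a quote:

awk -F ',' '{ total += $3 - $1; n++ } END { if (n) printf "average quote time: %.1f seconds over %d quotes\n", total / n, n }' safe_cost_tracker.log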

A new version. I’ve changed the file of random data from being a 400KB file created with ‘dd’ to being a number between 1 and 1000000000000000. This will still produce a minimally sized record. I’ve also changed it to log the number so that the estimate can be obtained again in the future.

The next thing is to have the output go directly into a database. InfluxDB might be a good choice?
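
If it does end up being InfluxDB, a minimal sketch of writing one row, assuming an InfluxDB 2.x instance on localhost and made-up org/bucket/token names:

# One point per quote, using the v2 write API with epoch-seconds precision
# 'safe_cost', 'my-org', 'safe-costs' and $INFLUX_TOKEN are placeholders, not real values
curl -s -XPOST "http://localhost:8086/api/v2/write?org=my-org&bucket=safe-costs&precision=s" \
     -H "Authorization: Token $INFLUX_TOKEN" \
     --data-binary "safe_cost cost=0.000291286,quote_seconds=39 1723971897"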

# https://safenetforum.org/u/storage_guy/

# v1.4

# Description

# This script:-
## creates a small file of random data which will produce a minimum sized chunk.
## uploads the file to safenetwork
## logs the time of the command execution start and finish
#
# It does this indefinitely
#
# There is a pause of 60 seconds after each execution.
# This is not quite the same as getting 1 estimate a minute as the time to get an estimate varies.
# Sometimes it's 20 seconds and sometimes more than a minute.
# Getting a quote at exact regular intervals would be impossible to arrange.

# The purpose is to track the cost of storing a single record over time,
# and also the time to get a quote, as that varies a lot and can indicate network health.
#
# It's a different file every time and in a small network the cost of file uploads will vary a lot.
# Over enough time though it will provide an indication of how the cost changes over time.


# Either by design or by accident, you can't get an estimate without a balance in the client wallet!
# It can be minimally small
#
# 0.000000001 will do


# Instructions

# Run inside screen or as a background task.


# ToDo
# Have the output go directly into a simple database


# Changelog
#
# v1.4
# Changed the random file to be a random number between 1 and 1000000000000000.
# And log the number so that the quote can be checked again in the future.


# v1.3
# Added delete of Safe Client logs because the logs will eventually fill the disk


# v1.2
# Changed estimate output to output just:-
## time of command start in seconds since the epoch
## Transfer cost
## time of command end in seconds since the epoch
#
# Removed the logging of the start and ending of the script


# v1.1
# Changed 'upload' to 'estimate' in the safe command as there is no need to upload the data to get the cost.
#
# Removed the line to rm everything in /tmp because chunk artifacts aren't stored there now
# 
# Added a line to rm everything in /home/ubuntu/.local/share/safe/client/chunk_artifacts
# as that is where chunk artifacts are stored and otherwise it fills the disk.


# v1.0
# Initial release
# Based on safe_upload_stressor_1.8


# Declare which directories to use.
TEMP_DIR=$HOME/safe_cost_tracker/temp
LOG_DIR=$HOME/safe_cost_tracker/logs
SAFE_DIR=$HOME/.local/share/safe/client

# Declare which files to use
TEMP_FILE=temp_file_for_estimate
LOG_FILE=safe_cost_tracker.log

# Create the directories
mkdir -pv $TEMP_DIR $LOG_DIR

# Remove files in temp dir in case script terminated early last time
rm -f $TEMP_DIR/*

# Loop
while true

do
        date
        ## Create the file
        echo 'Creating temp_file_for_estimate'
        RANDOM_NUMBER=$(shuf -i 1-1000000000000000 -n 1)
        echo $RANDOM_NUMBER > $TEMP_DIR/$TEMP_FILE

        # Log the time
        # Get a cost estimate for the upload of the file
        # Log the time again
        # Log the number of which the random file is composed
        # Output the above to a file
        echo 'Getting an estimate for uploading the file' $TEMP_DIR/$TEMP_FILE
        (date +%s && echo ',' && safe files estimate $TEMP_DIR/$TEMP_FILE | grep Transfer | sed 's/Transfer cost estimate: //' && echo ',' && date +%s && echo ',' && echo $RANDOM_NUMBER) | tr -d '\n' >> $LOG_DIR/$LOG_FILE ; echo >> $LOG_DIR/$LOG_FILE

        # Cleanup for next run
        echo 'Cleaning up for next run'
        rm -f $TEMP_DIR/*
        rm -rf $SAFE_DIR/chunk_artifacts/*
        rm -rf $SAFE_DIR/logs/*

        # Sleep 60 seconds
        echo 'Sleeping for 60 seconds'
        echo
        sleep 60

done
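
And now that the number is logged, any old quote can be re-checked later by rebuilding the same file, e.g. (using one of the numbers from the log, and a /tmp path picked just for the example):

# Re-create the file for a previously logged number and ask for a fresh estimate
NUMBER=2372548058506004
echo $NUMBER > /tmp/recheck_estimate_file
safe files estimate /tmp/recheck_estimate_file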
2 Likes

16 bytes, does that create 3 chunks? I know 18 bytes does.

If it's 3 chunks, then I found duplication in chunks and ditched that approach because of the inefficiency of uploading 3 chunks only to have the next number have 1 chunk the same, and the next, and the next. Effectively this resulted in only 2 chunks' worth of quotes per number.

3 Likes

[quote="neo, post:14, topic:40234"]
16 bytes, does that create 3 chunks? I know 18 bytes does.
[/quote]

I think so, because I get three 16-byte files.

But now that I think about it properly, what if the ‘shuf’ command I’m using produces a really small number? Like 1, for example. Then the ‘safe files estimate’ command is going to fail with:-

   0: Failed to chunk file "temp2"/PathXorName("90975467090b8a34e20734b09afdb82d13ea6436ec3a4c6d95ab45b3e6703b70") with err: Chunks(FileTooSmall)

Doh!

To fix this I just need to change the command to produce a number in the range 1000000000000000 to 2000000000000000.
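
i.e. changing the shuf line to something like:

RANDOM_NUMBER=$(shuf -i 1000000000000000-2000000000000000 -n 1)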

I used a number (incrementing from 1) and did an MD5 on it. I checked: unique chunks for the first 5 million numbers, and I suspect for a hell of a lot more.

2 Likes

I cleared the file and started it again for the new network. I’m starting to see some large costs. Here are the top 20:-

cat safe_cost_tracker/logs/safe_cost_tracker.log | sort -t , -k 2,2n | tail -20
1726667321,0.000000108,1726667326,2372548058506004
1726824951,0.000000112,1726824960,2674616550697290
1726787084,0.000000121,1726787095,3487563912296365
1726846713,0.000000123,1726846725,5124962989154958
1726731550,0.000000133,1726731569,4063029694213631
1726727842,0.000000135,1726727850,8705663363433050
1726664444,0.000000141,1726664449,3568369570798542
1726929893,0.000000164,1726929899,3387390831466083
1726640059,0.000000228,1726640070,5275716349188588
1726897577,0.000000239,1726897589,3655049340802158
1726853061,0.000000250,1726853070,9863478758575252
1726908618,0.000000255,1726908630,6257351070240576
1726911225,0.000000260,1726911241,5793101831355500
1726840453,0.000000299,1726840459,2794786832013557
1726933153,0.000000374,1726933167,3086324267914480
1726821632,0.000000447,1726821643,6171856544400239
1726852995,0.000000841,1726853001,3238304751880459
1726913872,0.000001186,1726913883,7866697666943870
1726945644,0.000001532,1726945654,5504880344495458
1726929959,0.000001670,1726929964,5258562050317383
1 Like

Almost immediately I got one at 1360 nanos

safe@wave1-bigbox:~$ cat safe_cost_tracker/logs/safe_cost_tracker.log | sort -t , -k 2,2 | tail -20
1726957102,0.000000003,1726957106,791506314684830
1726957411,0.000000003,1726957420,972327120742026
1726957519,0.000000003,1726957528,342112393623888
1726957652,0.000000003,1726957656,171966394984325
1726957716,0.000000003,1726957718,661521432007356
1726957588,0.000000006,1726957592,129200881210283
1726957778,0.000000006,1726957786,196783272866681
1726957846,0.000000011,1726957855,469584322265939
1726957915,0.000001360,1726957930,595296359024965
1 Like

I think the store cost algo needs some tweaking. It seems to go from low cost to max cost over too small a range of “fullness”.

It is basically centred around 1/2 full, but the low/high points are too close to 1/2 full in my opinion. In effect a small increase in fullness will skyrocket the store cost when the network is close to 1/2 full.

4 Likes

As this network will be winding up soon it’s not useful going forward, but here is my log of the costs from the script for the history of the network up until a few hours ago:-

(I had to put a couple of kisses at the end to get a quote that wouldn’t cost more than the nanos I have!)

safe files download safe_cost_tracker.log_20240928 ef1582bcd8be4b20c916871b896eb9e959cb65911004a59f7852c899c0366c6b

1.1MB

md5sum safe_cost_tracker.log_20240928
b07bed0bc141a3eac310858f382172e3  safe_cost_tracker.log_20240928

Here are the top 20 costs from it:-

sort -t , -k 2,2n safe_cost_tracker.log_20240928 | tail -20
1727365009,0.000009444,1727365023,3253869006043887
1727316534,0.000009447,1727316544,5272232298482764
1727469094,0.000009758,1727469126,5233150975069363
1727328470,0.000009884,1727328478,7787904890168776
1727471670,0.000010193,1727471678,7533339715286705
1727502847,0.000010640,1727502862,4577094585690279
1727485333,0.000010756,1727485340,9005808615781199
1727294611,0.000010861,1727294618,9268224078620824
1727478172,0.000011243,1727478178,8305386063907563
1727272384,0.000011372,1727272400,2589576190453444
1727499790,0.000011660,1727499801,3777500244433842
1727470129,0.000013912,1727470154,2762295166236591
1727510014,0.000014119,1727510037,3914324541498022
1727348853,0.000014270,1727348868,9673345067644968
1727502031,0.000014310,1727502041,1512082787287934
1727505759,0.000016037,1727505774,8354481707168760
1727504656,0.000018796,1727504667,4907337247054642
1727489155,0.000020746,1727489169,8533923622326910
1727473808,0.000021362,1727473823,3581027577004158
1727508749,0.000021420,1727508764,1033838197514618

I’m still running the script and the costs are still going up. The highest so far is 0.000941378.

3 Likes