Need to change the OP then.
Things have moved on since - we now have 0.90.35.
safe@ClientImprovementNet-Southside01:~/.local/share/safe$ safeup node
**************************************
*                                    *
*        Installing safenode         *
*                                    *
**************************************
Installing safenode for x86_64-unknown-linux-musl at /home/safe/.local/bin...
Retrieving latest version for safenode...
Installing safenode version 0.90.35...
[########################################] 6.69 MiB/6.69 MiB
safenode 0.90.35 is now available at /home/safe/.local/bin/safenode
I am trying different -c and --batch-size values by uploading the exact same file again and again. I get different results, but is this maybe stupid, as I guess “failed to fetch...” is not an option once the file has been uploaded successfully?
EDIT:
“failed to fetch...” is very much happening with batch sizes that are too high. But why, actually? Aren’t the new chunks the same as the old ones, and therefore in the same location and deduplicated?
Hey @TylerAbeoJordan, you found a sweet spot for -c and --batch-size in the last testnet, didn’t you? What was it, and for what file size? Any findings on this testnet?
How about anyone else?
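In case anyone wants to run the same experiment, this is roughly the kind of sweep I mean - just a sketch, and I’m assuming the upload command has the shape safe files upload <file> -c <N> --batch-size <M>; check --help on your version, the exact syntax may differ:

#!/bin/bash
# Sketch only: re-upload the same file with a few -c / --batch-size combinations
# and time each run. The upload command shape is an assumption - adjust for your CLI.
FILE=./testfile.bin
for C in 50 100 200; do
  for BATCH in 5 10 20 40; do
    echo "=== -c $C --batch-size $BATCH ==="
    time safe files upload "$FILE" -c "$C" --batch-size "$BATCH"
  done
done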
I am uploading a 59.7 MB file again and again, and it succeeds every time. So far the best performance has been:
-c 200 --batch-size 20
real 6m57,601s
user 5m14,048s
sys 0m32,794s
And recently I had to repay some chunks with:
-c 200 --batch-size 40
real 13m24,288s
user 9m56,370s
sys 0m48,284s
It seems to me that large batch sizes fail much more easily than smaller ones. The effect of concurrency is less clear to me.
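To compare runs like these I find it easiest to turn them into effective throughput (file size over the “real” wall-clock time) - rough arithmetic with bc for the two runs above:

echo "scale=3; 59.7 / (6*60 + 57.601)" | bc    # -c 200 --batch-size 20: ~0.14 MB/s
echo "scale=3; 59.7 / (13*60 + 24.288)" | bc   # -c 200 --batch-size 40: ~0.07 MB/s

So the bigger batch size roughly halved the throughput on this file.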
Higher -c is better and lower --batch-size is better … don’t raise or lower both together - they need to be inverse to each other to maximize the effect.
It’s unclear how high -c you can go and still see an effect - maybe 30, 50, 100? I don’t know. But higher is better and uses more CPU.
--batch-size – what worked consistently for me last time was 5 or lower. I’d thought this was due to my particular internet limits, but I’m not so sure now. I think it’d be interesting to hear the experiences of those with really high-throughput connections - i.e. what batch size they could consistently run with and not drop chunks.
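To make that concrete, the shape I’d start from looks something like this (the upload command itself is just an assumption about how you’re invoking it - adapt to your CLI):

time safe files upload ./myfile -c 100 --batch-size 5    # high concurrency, small batches
time safe files upload ./myfile -c 20 --batch-size 5     # gentler, if CPU or connection is the limit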
Wow, I would have thought that’d clear the hurdle easily. With small files you may as well go with max -c anyway, as even though it uses more CPU, it’s only for an instant and not continuous.
edit: I should say small uploads to be more clear.
As I understand it, -c is concurrency - I think of it as the amount of parallelism that will be attempted. Higher is faster but more error-prone.
--batch-size is how many chunks are sent to the network in any one operation. Lower is slower but seemingly more reliable, in my experience on RAM-constrained cheapo cloud instances.
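Pure guesswork about the internals from here, but if each of the -c parallel operations holds a full batch of chunks in memory, and a chunk is on the order of 1 MB (another assumption), then worst-case in-flight data scales roughly with -c times --batch-size - which would fit why small batch sizes help on RAM-constrained boxes:

echo "200 * 20" | bc    # -c 200, --batch-size 20: up to ~4000 MB of chunk data in flight, if that guess holds
echo "50 * 5" | bc      # -c 50, --batch-size 5: up to ~250 MB under the same guess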