Yeh, fair point. Let me have a look at the best ways to do this.
Had a quick download there; performance has degraded, but I can still download.
The speed at which I can upload has also decreased.
Still running though…
it's not done till it's done lol
I checked my 5 neglected nodes, and they were sitting idle with their disks full because of logs.
Not sure how the network handles that, but most likely that would cause problems. Anyhow, cleaned now.
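In case it helps anyone else, something along these lines finds and clears the bloat. The paths are an assumption based on a default install, so check where your own nodes actually write their logs:

# assumed default node data dir; adjust to your setup
du -sh ~/.local/share/safe/node/*/logs
# clear rotated log files older than a day (file pattern hypothetical)
find ~/.local/share/safe/node/*/logs -name '*.log*' -mtime +1 -delete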
BTW, this all feels like old news now… I WANT MY PERSONAL safeAI ALREADY!!
What does this mean?
safe files upload -p '/media/testnet/Evenmorefiles/Periphyseon'
Logging to directory: "/home/testnet/.local/share/safe/client/logs/log_2024-02-27_14-39-42"
Built with git version: 7d0e17c / main / 7d0e17c
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 48 peers
🔗 Connected to the Network
"/media/testnet/Evenmorefiles/Periphyseon" will be made public and linkable
Starting to chunk "/media/testnet/Evenmorefiles/Periphyseon" now.
Uploading 6863 chunks
[00:11:59] [>---------------------------------------] 1/6863 Upload terminated due to un-recoverable error Err(SequentialUploadPaymentError)
Error:
0: Failed to upload chunk batch: Too many sequential upload payment failures
Location:
sn_cli/src/subcommands/files/upload.rs:295
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
That's what we've seen when the network is in trouble: you can't get a payment made, so you can't upload a file, and eventually it errors out.
I'm now prepared to admit the network is in trouble! Files of just a few bytes are failing to upload, and 1MB files are sometimes failing to download.
If I had the resources I would throw a few thousand nodes at the network (staggered, not all at once) to see if it can recover and if it does whether there has been data loss. But that would get expensive real quick!
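If anyone does have the resources, the staggering part could be as simple as a loop with a sleep between launches. A rough sketch, assuming you start nodes with the safenode binary directly; the --root-dir flag and the 30-second interval are illustrative, not a recipe:

# launch 100 nodes, one every 30 seconds, each with its own data dir (illustrative)
for i in $(seq 1 100); do
    safenode --root-dir ~/safenodes/node$i &
    sleep 30
done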
When you start a node, you can tell it how many logs you want to keep.
Just add these parameters and customize the number:
--max_log_files=5 --max_archived_log_files=10
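For example, if you start nodes with the safenode binary directly (adjust if you launch via a manager or script):

safenode --max_log_files=5 --max_archived_log_files=10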
I hope we can see LLMs as an additional thing nodes could run in the future, along with smart contracts. Anything we can add that will use SNT will improve the network's long-term viability.
I uploaded all my standard files at the launch of this testnet. Initially I was able to download everything except the really large 2GB file, and even then I got 1.8GB of it, so I read that as the known upload problem where not all the chunks get saved.
Yesterday, though, as a test of the durability of my chunks, I redownloaded everything again. This time two files failed, one being the 2GB file of course, and this time it only retrieved ~80MB of it. I used a batch size of 2 for all of this, as my connection isn't fast and it's helped in the past. So it appears the network has lost a considerable number of chunks over time, or maybe the process errored out. I will try again tonight.
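For anyone wanting to reproduce that, it's the client's batch-size flag; check the CLI help on your build, as flags have moved around between versions:

safe files download --batch-size 2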
I think it may well be on its last legs. Our nodes are considerably over the max records kept, with some pushing 4k. So I suspect we might be hitting data loss if there's insufficient space.
As such I think we'll be bringing down our nodes later today.
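If others want to compare record counts, something like this gives a per-node tally; the record_store location is an assumption from the default layout, so verify it on your machine:

# count stored records per node (path assumed; verify locally)
for d in ~/.local/share/safe/node/*/record_store; do
    echo "$d: $(ls "$d" | wc -l)"
done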
My plan to add more nodes is on ice; my service provider is now asking lots of questions about a bulk order for more VPSs.
Like what?
For an explanation of my use case for a bulk order.
I've given a vague explanation about encrypted distributed data storage, so I'm waiting to see what they come back with.
Yeah, back in the comnet days, every time I wanted more instances DO would ask me to explain why before they would bump me up by a few.
Frustrating.
Can't wait to f#*k these big tech companies off. October, isn't it?
That's them coming down.
'Twas a good couple o' weeks there.
Thanks everyone for getting stuck in! A lot was gleaned from this one.
Next one planned will have basic bad node detection.
Are the testnets IPv4, IPv6, or both?
Is it going to be on 28th March, as announced in the Roadmap, or sooner?