NewYearNewNet [04/01/2024 Testnet] [Offline]

This "Wallet has pre-unconfirmed tx" error is ruining my evening lol

Connecting to the network with 47 peers:
"./'Weird Al' Yankovic - Bad Hair Day - The Alternative Polka.mp3" will be made public and linkable
Starting to chunk "./'Weird Al' Yankovic - Bad Hair Day - The Alternative Polka.mp3" now.
Uploading 10 chunks
/home/ubuntu/.local/share/safe/client/logs/log_2024-01-04_22-52-36
Error:
   0: Failed to upload chunk batch: Transfer Error Failed to send tokens due to Wallet has pre-unconfirmed tx, resent them and try later..

Location:
   sn_cli/src/subcommands/files/mod.rs:330

upload logs

2 Likes

Just WTF is a pre-unconfirmed tx?!?!

3 Likes

I've got 4 non-earners out of 860

4 Likes

I think it’s a TX with no valid parent. You would get that with old cashnotes held locally, i.e. from an old testnet. Otherwise there’s a tx not yet written that is the parent of this one. Smells like a bug: the parent fails, but the client thinks it should be OK and keeps trying to spend the children?

5 Likes

OK - thanks

Next move is to zap client/wallet/*, check that I get a zero balance with `safe wallet balance`, visit the faucet again and retry the upload.
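A sketch of that reset, assuming the wallet path from the client log directory above (guarded so it does nothing destructive unless the CLI is actually on PATH):

```shell
# Reset the local wallet state and confirm a zero balance before hitting
# the faucet again. Path assumed from the log output in this thread.
WALLET_DIR="$HOME/.local/share/safe/client/wallet"

if command -v safe >/dev/null 2>&1; then
    rm -rf "${WALLET_DIR:?}"/*       # zap client/wallet/*
    safe wallet balance              # expect a zero balance here
else
    echo "safe CLI not on PATH; would have cleared $WALLET_DIR"
fi
```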

EDIT: Though before I did that, I had a squint at the wallet folder. I have a 25k binary file:
-rw-rw-r-- 1 safe safe 25653 Jan 4 21:38 unconfirmed_spend_requests

and here we go again…

3 Likes

Uploading and downloading files works correctly except for this, which is possibly some nonsense I don’t understand.

Files upload attempted previously, verifying 0 chunks
All files were already uploaded and verified


**************************************
*          Uploaded Files            *
*                                    *
*  These are not public by default.  *
*  Reupload with `-p` option         *
*  to publish the datamaps.          *
**************************************

chunk_manager doesn’t have any verified_files, nor any failed_chunks to re-upload.

1 Like

A few mins into the mega-upload and I now have a new unconfirmed_spend_requests file

Upload performance is significantly faster for now

I changed batch-size from 11 to 20 ← pissing about with prime nos^.
Dunno if that is a factor somehow.

^When I get around to starting nodes on the other cloud box, I will change the delay to be a prime no of seconds - probably 23.

Let me know if that worked.

1 Like

The good news is the safenode appears to have started and not produced an error.

The bad news is you can’t do anything in that terminal anymore! I think your issue there is that running the safenode process has captured the terminal session. If it were Linux I’d say add a ’ &’ after safenode so that the safenode process runs in the background. Maybe it’s the same in Windows land.

Try looking in the log to see if it looks healthy.
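For the Linux case, a minimal sketch of backgrounding (`sleep` is a harmless stand-in for `safenode` so this runs anywhere):

```shell
# Backgrounding with ' &' keeps the terminal usable; `sleep 1` stands in
# for the real safenode binary here.
sleep 1 &                 # with the real binary: safenode &
NODE_PID=$!
echo "node running in the background as PID $NODE_PID"
wait "$NODE_PID"          # only here so the example exits cleanly
```

On Windows, `start safenode` from cmd should similarly hand the terminal back, though I haven’t verified that.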

3 Likes

Well, it's not broken yet after ~40 mins

spoke too soon…

safe@NeerdayNetSouthside01:~/safespace$ Logging to directory: "/home/safe/.local/share/safe/client/logs/log_2024-01-04_23-33-47"
Built with git version: a029f27 / main / a029f27
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
"mixtral-8x7b-instruct-v0.1.Q6_K.gguf" will be made public and linkable
Starting to chunk "mixtral-8x7b-instruct-v0.1.Q6_K.gguf" now.
Uploading 71392 chunks
⠄ [00:52:46] [>---------------------------------------] 1460/71392
client_loop: send disconnect: Broken pipe

I was tailing the logs and don't see anything suspicious at first glance.
Was this a SAFE problem or something between me and Hetzner?
I had three terminals open to that instance, and two of them disconnected…
The wallet was last changed at 00:37 - I’ll poke about and see if syslog has anything of interest around that time. ← nothing, unsurprising on a shared cloud instance

auth.log just shows systemd-logind[699]: Session 1 logged out. Waiting for processes to exit.

I’ll save the logs and try again. We were at 1460 of 71392 chunks last time.
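One way to stop a dropped SSH session ("client_loop: send disconnect: Broken pipe") from killing a long upload next time: detach the command from the terminal. A sketch, with `sleep` standing in for the real `safe files upload ...` invocation:

```shell
# Detach the long-running command so an SSH disconnect doesn't kill it.
# `sleep 1` stands in for the real safe files upload command.
nohup sleep 1 >upload.log 2>&1 &
UPLOAD_PID=$!
echo "upload detached as PID $UPLOAD_PID; safe to disconnect"
wait "$UPLOAD_PID"
rm -f upload.log
```

Running the whole session under tmux or screen would also survive the disconnect.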

EDIT:
I get this when I try the upload again

safe@NeerdayNetSouthside01:~$ Logging to directory: "/home/safe/.local/share/safe/client/logs/log_2024-01-05_01-51-20"
Built with git version: a029f27 / main / a029f27
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network with 47 peers:
🔗 Connected to the Network
"mixtral-8x7b-instruct-v0.1.Q6_K.gguf" will be made public and linkable
Starting to chunk "mixtral-8x7b-instruct-v0.1.Q6_K.gguf" now.
Files upload attempted previously, verifying 0 chunks
All files were already uploaded and verified
**************************************
*          Uploaded Files            *
**************************************
chunk_manager doesn't have any verified_files, nor any failed_chunks to re-upload.

[1]+  Done                    safe files upload -p --batch-size 20 mixtral-8x7b-instruct-v0.1.Q6_K.gguf

I may zap /home/safe/.local/share/safe/client/ completely and start from scratch.

4 Likes

Reuploads do not work for me.

If I stop an upload with ctrl-c I have to delete the whole Safe directory to be able to upload files again.

I have a feeling logging will have a big impact on large file uploads etc. It may be worth trying with no logging on the client at least?

3 Likes

How can I try an upload without logging?

safe files upload -h

Arguments:
  <PATH>  The location of the file(s) to upload

Options:
  -b, --batch-size <BATCH_SIZE>       The batch_size to split chunks into parallel handling batches during payment and upload processing [default: 16]
  -p, --make-public                   Should the file be made accessible to all. (This is irreversible)
  -r, --max-retries <MAX_RETRIES>     The retry_count for retrying failed chunks during payment and upload processing [default: 3]
      --timeout <CONNECTION_TIMEOUT>  The maximum duration to wait for a connection to the network before timing out
  -x, --no-verify                     Prevent verification of data storage on the network
  -h, --help                          Print help (see more with '--help')
ubuntu@oracle:~$

2 Likes

I think you might be trying to upload a file/folder that does not exist. The error handling should be made better for this case. I will put up a quick fix for this.

7 Likes

Can you try setting this env variable: `export SN_LOG=none`? This should disable all the logs.
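A sketch of how that would look before retrying the upload (filename and flags are just the ones from earlier in this thread; guarded so it runs even without the CLI installed):

```shell
# Disable client logging for anything started from this shell.
export SN_LOG=none

if command -v safe >/dev/null 2>&1; then
    safe files upload -p --batch-size 20 mixtral-8x7b-instruct-v0.1.Q6_K.gguf
else
    echo "safe CLI not on PATH; SN_LOG is set to '$SN_LOG'"
fi
```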

3 Likes

Ok going to give that a go now

Would that affect running nodes?

1 Like

We select one node per droplet as an example bootstrap peer for the contact list.

Last time we launched 100 droplets with 20 nodes each and collected 97 (3 short because of an ssh issue).

This time we launched 50 droplets with 40 nodes each and collected 47 (3 short due to the same ssh issue).

6 Likes

If you had started the nodes prior to this, they would not be affected.
The variable applies only to applications that you start after executing the export, and it is reset when you close the session.
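That scoping can be demonstrated with a stand-in child process (nothing here touches real nodes; `env -u` models a fresh session that starts without the variable):

```shell
# Children started after the export inherit the variable; a fresh session
# (modelled with `env -u SN_LOG`) starts without it.
export SN_LOG=none
AFTER=$(sh -c 'echo "${SN_LOG:-unset}"')                # started after export
FRESH=$(env -u SN_LOG sh -c 'echo "${SN_LOG:-unset}"')  # models a new session
echo "after export: $AFTER, fresh session: $FRESH"
```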

4 Likes

It means there is a spend that has not been confirmed yet.
The code tried to resend it and is waiting for the next run to see whether it got confirmed.

Such an error may happen for any spend during the uploading process.
Each time this error happens, do any chunks get uploaded in between?

3 Likes

It’s just saying your uploaded file is not publicly shareable yet.
You have to upload with the -p option.

3 Likes