ClientImprovementNet [22/09/23 Testnet] [Offline]

@chriso : I have a problem updating the client. My connection is pretty busy but I’m not sure if that’s relevant to this issue.

$ safeup client --version 0.82.3
**************************************
*                                    *
*          Installing safe           *
*                                    *
**************************************
Installing safe for x86_64-unknown-linux-musl at /home/mrh/.local/bin...
  [########################################] 9.22 MiB/9.22 MiB
Error: Text file busy (os error 26)

Location:
    src/install.rs:198:39


Maybe there’s a file left locked after the earlier error, which itself might be due to the connection being quite busy:

$ safeup client --version 0.82.3
**************************************
*                                    *
*          Installing safe           *
*                                    *
**************************************
Installing safe for x86_64-unknown-linux-musl at /home/mrh/.local/bin...
Error: error sending request for url (https://sn-cli.s3.eu-west-2.amazonaws.com/safe-0.82.3-x86_64-unknown-linux-musl.tar.gz): error trying to connect: dns error: failed to lookup address information: Try again

Caused by:
   0: error trying to connect: dns error: failed to lookup address information: Try again
   1: dns error: failed to lookup address information: Try again
   2: failed to lookup address information: Try again

Location:
    src/s3.rs:43:28

How to clear this?
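
Two hedged suggestions rather than a confirmed fix: "Text file busy" (os error 26, ETXTBSY) usually means the binary safeup is trying to replace is still being executed, and the DNS failure looks transient, so a simple retry may be enough. The install path below comes from the log above; the pkill assumes an old safe process is still running.

# Sketch only: clear a possible "text file busy" lock, then retry until the
# transient DNS error clears.
pkill -x safe 2>/dev/null          # assumption: an old `safe` client may still be running
rm -f ~/.local/bin/safe            # install path taken from the log above

until safeup client --version 0.82.3; do
    echo "install failed, retrying in 30s..."
    sleep 30
done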

EDIT: An hour or so later it went through fine. I’m not the only one to have this issue though, so it might be worth looking into. See @storage_guy’s post.

1 Like

Ah, interesting. I repro’d it, but have only two failing.

I wonder if we’re under some replication at the moment… I’m going to ask our nodes at least, to see if I can see anything odd.

Edit: the maidsafe nodes are at least not very full, ~25% on average. So nothing should have been deleted.

4 Likes

I don’t have a chance to try the new client at the moment, but when I first tried to join the testnet I was getting that ‘Error: Text file busy (os error 26)’ error when trying to download safenode 0.90.33. But strangely, only on an AWS-based instance. The node I was trying to update at home was able to get the file. I tried multiple times with both to be sure. I didn’t mention it at the time because we rapidly moved on to using 0.90.34, which downloaded fine.

Version 0.82.3 installs and works fine for me :clap:

4 Likes

So I can see that the price was requested from some nodes, but the chunk was never stored there.

Only one maidsafe node had the chunk e8f7b6477b3118294b7d9db0d0effb84c855afa1913d393ab62efdfa9c35fb7f put. And it’s also attempting to return it as far as I can see (at least it was this morning).

I can also see it attempted to init replication (I think)… not sure what happened there. Digging in. But most likely there’s a bug in some part here. :male_detective:

9 Likes

Jackpot :money_mouth_face: :moneybag:

9 Likes

Here’s something I’m seeing with a lot going on on my laptop: a large safe upload, a large safe download and an ssh session showing vdash on a cloud node.

@joshuef As well as slowing down loading in the browser, I get quite a few DNS failures reported both in the browser and in the CLI, including when doing safeup or getting peers from AWS. Repeating gets through eventually, but I wonder if we need some kind of configurable, or perhaps automatic, throttling?

3 Likes

That’s your concurrency -c arg there at the moment. You should be able to limit throughput up/down using that.

Anything automatic around that is probably very “nice to have” for later though.
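
For example, something like this (a sketch only: -c is the concurrency arg mentioned above, but check safe files upload --help for the exact flag, and the value and path are placeholders):

# Sketch: lower the client's concurrency so a big upload doesn't saturate the link.
# The value 4 and ~/videos are placeholders; confirm the flag name via --help.
safe files upload -c 4 ~/videos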

2 Likes

Thanks. This could be a tricky one, because if users try the client out and find it causes browsing and other problems, they’ll see it as buggy or getting in the way and drop it.

Even with more guidance in the --help, most users won’t find that, so more cautious defaults may be needed unless it can be made adaptive.

1 Like

Thanks for the info. I’ll look into this later.

Nothing jumps out as immediately obvious though.

1 Like

#MeToo etc etc

safe wallet get-faucet 139.59.175.64:8000
Built with git version: e8392d1 / main / e8392d1
Requesting token for wallet address: b7f01105de6e3611a18d751840f2a542b4859ca3408423c40efbe77ea9a934f12ddbd35926e994487500e8557df5c5b7...
Successfully stored cash_note to wallet dir.
Old balance: 100.000000000
New balance: 200.000000000
Successfully got tokens from faucet.
% safe files upload ddkdk/dd
Built with git version: e8392d1 / main / e8392d1
Instantiating a SAFE client...
🔗 Connected to the Network
Total number of chunks to be stored: 133
Error: 
   0: Transfer Error Offline transfer creation error Not enough balance, 0.000000000 available, 0.008458832 required.
   1: Offline transfer creation error Not enough balance, 0.000000000 available, 0.008458832 required
   2: Not enough balance, 0.000000000 available, 0.008458832 required

Location:
   sn_cli/src/subcommands/files.rs:195

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
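
For more detail on where this fails, the output above suggests re-running with a backtrace enabled, e.g.:

# Re-run the failing upload with a full backtrace, as the error output suggests
RUST_BACKTRACE=full safe files upload ddkdk/dd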

1 Like

I screwed up last night attaching another 20GB of storage to my cloud instance and ran out of disk space, which is sad. Rather than mess about, I will shut down the remaining running nodes one at a time every few minutes. I have a total of 2.1GB of chunks stored, so a fair bit of replication may ensue.
Then I will set up another cloud instance, with LVM this time, so I can more easily attach extra storage as required.

3 Likes

I downloaded the file that gave me the error again, and this time only one chunk is missing instead of two. :man_shrugging:

Chunks error Not all chunks were retrieved, expected 750, retrieved 749, missing [f84701(11111000)…]…

2 Likes

I know this is another cheeky one, but since I wiped my SAFE_PEERS my latest nodes were not working. However, if you upgrade with safeup node --version 0.90.38 they also work fine without SAFE_PEERS being set :slight_smile:
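
In other words, roughly this (a sketch only; the version number is the one mentioned above):

# Sketch: with SAFE_PEERS wiped/unset, upgrading the node binary still works
unset SAFE_PEERS
safeup node --version 0.90.38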

1 Like

I’ve set a laaaarge upload (a directory of video files) going again (still using client defaults) and it has been sitting at this point for at least an hour:

Connected to the Network                                             
Total number of chunks to be stored: 53458

What’s happening in terms of activity and network traffic here? (I’m not seeing much up or down, and I have a separate large download going which probably accounts for most of it.) There’s no progress indication, which would be nice to have.

The log contains a lot of OutgoingConnectionError errors and seems to have stopped activity a long time ago, so I suspect that up until that message it was just doing local preparation, and then it could not access the network.

It may help to have both progress indication and a timeout at that point.
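
If anyone else wants to gauge how noisy their run was, a rough check along these lines should do it (the log location is a placeholder; it will differ depending on your setup):

# Rough check: count connection errors in the client log.
# /path/to/safe/client/logs is a placeholder, not a confirmed default location.
grep -c OutgoingConnectionError /path/to/safe/client/logs/*.log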

EDIT:
I stopped the download to see if that was affecting this very large upload, but it isn’t.

After restarting the upload it hasn’t got past Connected to the Network. Looking at the logs, they are now full of OutgoingConnectionError messages again, after the initial success showing it connected to 182 peers.

So my network isn’t loaded at all now, but connectivity for the client seems to fail after a few minutes.

I’ll try some smaller uploads and report back…

  • 44k file, 4 chunks to upload: completed in about 3 mins, including repaying for 2 chunks
  • 18M file, 36 chunks to upload: gave Cannot get store cost for NetworkAddress errors for 5 of the first batch of 20 chunks, and continued to upload the final 16 chunks, which gave no errors. After two phases of repaying it completed successfully in 6 mins.

Verification gave mixed messages. First it said it failed to fetch 7 chunks, then it said “verified 36 chunks”, which I guess means it attempted to verify 36, but it might be better to say “verified 29 of 36 chunks”:

After 88.051157424s, verified 36 chunks
======= Verification: 7 chunks were not stored. Repaying them in batches. =============
Failed to fetch 7 chunks. Attempting to repay them.
  • 270MB file, 524 chunks to upload: proceeding in batches of 20… during the batch at 40 chunks I’m seeing OutgoingConnectionError in the log, but mixed with Final fees calculated as: messages, so I guess it has enough peers to proceed.

During the batch at 60 chunks the terminal gave one Cannot get store cost for NetworkAddress error, but the log shows it is also proceeding OK.

For the batch at 80 chunks the same, 100 same, 120 same, 140 same, 160 two failures, 180 one failure, 200 one failure, 220 two failures, 240 one failure, 260 zero failures… EDIT: as soon as I stopped watching, the failures stopped! Then one failure at 340. One at 380. One at 420. One at 480. One at 500. :man_shrugging:

Verification: 32 (of 524) chunks not stored, repaying… in the end it failed:

[snip]
Uploaded chunk #e58b4e.. in 1 seconds
Failed to fetch 12 chunks. Attempting to repay them.
Cannot get store cost for NetworkAddress::ChunkAddress(f1bc07(11110001).. -  - f1bc07e1e17f697f998e92b6300eeeaec5556d8f43ec332d7254e6a8d032e275) with error CouldNotSendMoney("Network Error Not enough store cost prices returned from the network to ensure a valid fee is paid.")
Transfers applied locally
All transfers completed in 35.006110548s
Total payment: NanoTokens(19730894) nano tokens for 11 chunks
Uploaded chunk #2de77d.. in 1 seconds
Uploaded chunk #5f4999.. in 1 seconds
Uploaded chunk #1b44f4.. in 1 seconds
Uploaded chunk #a239b1.. in 1 seconds
Uploaded chunk #94e702.. in 1 seconds
Uploaded chunk #8960e3.. in 1 seconds
Uploaded chunk #b07b00.. in 1 seconds
Uploaded chunk #be21b7.. in 1 seconds
Uploaded chunk #20c9bc.. in 0 seconds
Uploaded chunk #35a8be.. in 0 seconds
Uploaded chunk #d8d7a4.. in 0 seconds
Error: 
   0: Chunks error Failed to get find payment for record: f1bc07e1e17f697f998e92b6300eeeaec5556d8f43ec332d7254e6a8d032e275.
   1: Failed to get find payment for record: f1bc07e1e17f697f998e92b6300eeeaec5556d8f43ec332d7254e6a8d032e275

Location:
   sn_cli/src/subcommands/files.rs:297

So for each batch of twenty chunks it shows one Cannot get store cost for NetworkAddress error, except for the first two batches, which went OK, and the occasional double error or zero errors. That seems a bit too regular to be a network-only issue, so maybe the client has an issue here?

For my very large upload (50k chunks), is it possible that the client loses connection to its peers and then fails to progress?
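
In the meantime, a hedged workaround sketch: split the big directory and upload it a slice at a time, so a stalled connection only affects one slice. ~/videos stands in for the directory of video files mentioned above.

# Workaround sketch only: upload one subdirectory at a time rather than the
# whole ~50k-chunk directory, so a dropped connection doesn't stall everything.
for d in ~/videos/*/; do
    safe files upload "$d"
done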

6 Likes

The latest client, safe 0.82.6, also passes the BegBlag test :confetti_ball: :clap: :+1: :slight_smile:

safe@ClientImprovementNet-Southside01:~/.local/share/safe/node$ unset SAFE_PEERS 
safe@ClientImprovementNet-Southside01:~/.local/share/safe/node$ safe files download BegBlag.mps 80744b3d25bab269cab54e8baccf4f54f1aa01615230b99171bc3576c1ca7230
Built with git version: d730d29 / main / d730d29
Instantiating a SAFE client...
Trying to fetch the bootstrap peers from https://sn-testnet.s3.eu-west-2.amazonaws.com/network-contacts
Connecting to the network w/peers: ["/ip4/142.93.232.219/tcp/38095/p2p/12D3KooWLAX6Z1m5gNxPGZQRV6VEzCtipNGcP1YhrAYNYT3yq5mv", "/ip4/64.227.158.176/tcp/34893/p2p/12D3KooWG3cHz8aM9Zf2Gyar7NBb2BZ2wfcBf1zY7PHuUtn3EkvL", "/ip4/147.182.237.224/tcp/46429/p2p/12D3KooWFCbRRe6GoyTvgbVqDK7VnD3TwEsM2876pBzuB1G3KHKj", "/ip4/139.59.125.187/tcp/33641/p2p/12D3KooWNUNgbhC2zEUNVJzn72BxD4ooZ7kTE4NSa3pCmcGobGQj", "/ip4/142.93.76.173/tcp/41717/p2p/12D3KooWJ51eYsdyksy87PazS3M7wRVdPNJWfH58u8fxBk2Y9hdp", "/ip4/142.93.76.173/tcp/46229/p2p/12D3KooWQi1eeRbwtRWRvYTSQ2gSKt6M8QjseS65Hwzb7PDGKghE"]...
🔗 Connected to the Network
Downloading BegBlag.mps from 80744b3d25bab269cab54e8baccf4f54f1aa01615230b99171bc3576c1ca7230
Client download progress 1/31
Client download progress 2/31
Client download progress 3/31
Client download progress 4/31
Client download progress 5/31
Client download progress 6/31
Client download progress 7/31
Client download progress 8/31
Client download progress 9/31
Client download progress 10/31
Client download progress 11/31
Client download progress 12/31
Client download progress 13/31
Client download progress 14/31
Client download progress 15/31
Client download progress 16/31
Client download progress 17/31
Client download progress 18/31
Client download progress 19/31
Client download progress 20/31
Client download progress 21/31
Client download progress 22/31
Client download progress 23/31
Client download progress 24/31
Client download progress 25/31
Client download progress 26/31
Client download progress 27/31
Client download progress 28/31
Client download progress 29/31
Client download progress 30/31
Client download progress 31/31
Client downloaded file in 28.318518213s
Saved BegBlag.mps at /home/safe/.local/share/safe/client/BegBlag.mps
safe@ClientImprovementNet-Southside01:~/.local/share/safe/node$ safe -V
sn_cli 0.82.6
3 Likes

Did anyone else notice Storage Costs plummet from 690 nanos/MB to about 200 nanos/MB two or three hours ago?

I’m guessing somebody added a lot of nodes around then.

@joshuef :eyes:

Storage cost is the middle timeline (top is earnings, bottom is PUTs):

[Screenshot from 2023-09-25 17-13-17]

10 Likes

Why does uploading batches of chunks take progressively longer over time for a large upload? Any ideas?

1 Like

Is everybody using a different version of the client?
Or is it harder to finish chunks for a large upload than for a small file?