ClientImprovementNet [22/09/23 Testnet] [Offline]

I've got two nodes down on one of my boxes as well.

Timestamp: Fri 22 Sep 16:02:29 EDT 2023
Node: 12D3KooWSu8YTYUgTdQrsw187czrNPgV1HJjUhA137tZxZrkircE
PID: 1904
Memory used:
CPU usage:
ls: cannot access '/proc/1904/fd/': No such file or directory
File descriptors: 0
IO operations:
cat: /proc/1904/io: No such file or directory
ls: cannot access '/proc/1904/task/': No such file or directory
Threads: 0
Records: 34
Disk usage: 13MB

Node wallet balance  0.000000000

BegBlag test is a pass :slight_smile:

safe files download BegBlag.mps 80744b3d25bab269cab54e8baccf4f54f1aa01615230b99171bc3576c1ca7230
1 Like
Client download progress 30/31
Client download progress 31/31
Client downloaded file in 38.446737256s
Saved BegBlag.mps at /home/willie/.local/share/safe/client/BegBlag.mps

:confetti_ball:

2 Likes

OK, so here are some results from me, this time not on WiFi but on a cable connection. I uploaded the same file several times with several different combinations of the -c and --batch-size parameters. File size is 59.7 MB / 115 chunks.

My network speed is about 48 Mbps up / 385 Mbps down, according to an Ookla speed test.
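If anyone wants to repeat the sweep, the -c 200 runs below can be scripted with time roughly like this (just a sketch, not exactly what I ran):

# sketch only: sweep the batch sizes and capture real/user/sys with time
FILE=Waterfall_slo_mo.mp4
for BATCH in 5 10 20 30 50 60 115; do
    echo "== -c 200 --batch-size $BATCH =="
    time safe files upload -c 200 --batch-size "$BATCH" "$FILE"
done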

safe files download Waterfall_slo_mo.mp4 bd346b6e20c1d419ece8528c34f670cbaa9b23a93d7b37330c87123131da979d
-c 200 --batch-size 5 Waterfall_slo_mo.mp4

real	14m23,744s
user	6m2,052s
sys	0m36,842s

-c 200 --batch-size 10 Waterfall_slo_mo.mp4

real	8m20,750s
user	8m16,738s
sys	0m35,690s

-c 200 --batch-size 20 Waterfall_slo_mo.mp4

real	5m15,105s
user	8m5,337s
sys	0m28,740s

-c 200 --batch-size 30 Waterfall_slo_mo.mp4

real	4m11,113s
user	7m50,254s
sys	0m24,982s

-c 200 --batch-size 50 Waterfall_slo_mo.mp4

real	4m11,623s
user	10m52,950s
sys	0m27,994s

-c 200 --batch-size 60 Waterfall_slo_mo.mp4

real	3m41,742s
user	12m26,758s
sys	0m27,309s

-c 200 --batch-size 115 Waterfall_slo_mo.mp4

Failed to fetch 22 chunks. Repaid and re-uploaded 22 chunks in 55.454232355s
real	4m39,666s    
user	17m50,913s
sys	0m28,782s

-c 400 --batch-size 60 Waterfall_slo_mo.mp4

real	4m9,484s
user	13m36,719s
sys	0m29,138s

-c 100 --batch-size 60 Waterfall_slo_mo.mp4

real	4m3,479s
user	14m42,355s
sys	0m31,673s

-c 50 --batch-size 60 Waterfall_slo_mo.mp4

real	4m34,814s
user	13m49,108s
sys	0m30,824s

-c 200 --batch-size 60 Waterfall_slo_mo.mp4
Failed to fetch 1 chunks. Repaid and re-uploaded 1 chunks in 23.273716508s
real	4m24,964s
user	14m30,451s
sys	0m32,085s

-c 20 --batch-size 60 Waterfall_slo_mo.mp4

real	4m50,402s
user	16m32,342s
sys	0m33,611s

Some conclusions:

Bigger batches speed things up, but the biggest ones hit these "Failed to fetch..." errors.
-c doesn’t make much of a difference.

By the way, it's nice to witness that operating a node is light while client tasks are heavy. That alone is front page news! (Compare to Bitcoin.)

I’m calling it a night, good night everyone, and thank you!

8 Likes

A couple of screenshots from vdash v0.8.12 with the ClientImprovementNet test after six hours:

1 minute column timelines

1 hour column timelines - shows the trend in storage cost more clearly:

7 Likes

But from home I get this:

willie@gagarin:~$ safe -V
sn_cli 0.82.0
willie@gagarin:~$ safe files download Waterfall_slo_mo.mp4 bd346b6e20c1d419ece8528c34f670cbaa9b23
Built with git version: 2c47455 / main / 2c47455
Instantiating a SAFE client...
🔗 Connected to the Network
The application panicked (crashed).
Message:  Failed to parse XorName from hex string: [189, 52, 107, 110, 32, 193, 212, 25, 236, 232, 82, 140, 52, 246, 112, 203, 170, 155, 35]
Location: sn_cli/src/subcommands/files.rs:97

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

I can't upload anything either.

willie@gagarin:~$ safe files upload -c50 --batch-size=5 /home/willie/.local/share/safe/client/BegBlag.mps
Built with git version: 2c47455 / main / 2c47455
Instantiating a SAFE client...
🔗 Connected to the Network
Total number of chunks to be stored: 32
Error: 
   0: Transfer Error Bincode error:: deserialized bytes don't encode a group element.
   1: Bincode error:: deserialized bytes don't encode a group element
   2: deserialized bytes don't encode a group element

Location:
   sn_cli/src/subcommands/files.rs:195

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
1 Like

Why .mps?

Renamed it to .mp3 and it's fine.

So I am at a loss with this error:

deserialized bytes don’t encode a group element

The only relevant log entries I see with SN_LOG=all are:

[2023-09-22T23:36:55.417282Z DEBUG safe::subcommands::files] Uploading file(s) from "/home/willie/.local/share/safe/client/BegBlag.mps", will verify?: true
[2023-09-22T23:36:55.417290Z TRACE safe::subcommands::files] Starting to chunk "/home/willie/.local/share/safe/client/BegBlag.mps" now.

Everything else is just network msgs.
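If it helps anyone else digging through the client logs, something like this strips out the networking noise (the log file name is a guess on my part, adjust it to wherever --log-output-dest actually writes):

# hypothetical file name/path, adjust to your own log output location
LOGFILE=datadir/safe.log
grep 'subcommands::files' "$LOGFILE"    # client-side file handling entries
grep -v 'sn_networking' "$LOGFILE"      # everything that isn't network msgs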

1 Like

C’mon mate, you didn’t paste the full address, you can do better than that :sweat_smile:

Downloaded fine for me

3 Likes

Oooops!!! Made a right mess of that cut'n'paste.

errr

willie@gagarin:~/.local/share/safe/client$ SN_LOG=ALL  safe  --log-output-dest=datadir files download Waterfall_slo_mo.mp4 bd346b6e20c1d419ece8528c34f670cbaa9b23a93d7b37330c87123131da979d
Logging to directory: "datadir"
Using SN_LOG=ALL
Built with git version: 2c47455 / main / 2c47455
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading Waterfall_slo_mo.mp4 from bd346b6e20c1d419ece8528c34f670cbaa9b23a93d7b37330c87123131da979d
Client download progress 1/114
Client download progress 2/114
Client download progress 3/114
Client download progress 4/114
Client download progress 5/114
Client download progress 6/114
Client download progress 7/114
Client download progress 8/114
Client download progress 9/114
Client download progress 10/114
Client download progress 11/114
Client download progress 12/114

Well at least something is working from home …

I expect we will soon get a check that the address given to the download command is a valid XorName, rather than panicking.

and relax…
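In the meantime a crude guard in the shell does the job, assuming addresses are always the 64-hex-character XorName form used elsewhere in this thread:

# sketch: refuse to call the CLI with a truncated/invalid address
ADDR=bd346b6e20c1d419ece8528c34f670cbaa9b23a93d7b37330c87123131da979d
if [[ "$ADDR" =~ ^[0-9a-f]{64}$ ]]; then
    safe files download Waterfall_slo_mo.mp4 "$ADDR"
else
    echo "not a 64-character hex XorName: $ADDR" >&2
fi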

4 Likes

Can anyone explain what happened with 0.90.33?
And can any conclusions be drawn from the operation of a network that has a mix of node versions (0.90.33, 0.90.34, 0.90.35)?

4 Likes

You mean why it didn’t work?

The nodes that make up the network were essentially built from main at 0.90.34, but the release process failed to publish a release for this, so 0.90.33 was incompatible (there should have been a minor version change here too).

I’m not entirely sure why it didn’t publish there, but we’d had a few release process issues: 1) trying to update a dep ended up requiring a lot of CI reconfig and then turned out to be incompatible with ARM builds, which 2) meant we’d published a lot of versions on Thursday while I was trying to fix it, which 3) meant some release CI workflows were refusing to publish the builds because crates.io limits the number of releases in 24hrs.

I suspect that’s related but I’m not sure. There are some bits to look at here, both in the release flow and in the testnet flow (we build the code for a testnet as opposed to using the supplied bins; if we’d used the bins we’d have caught this issue earlier. @chriso is already on this).

It’s all something we’ll be keeping an eye on.

0.90.33 won’t actually be participating in the network, as we’ve seen above.

The change to 0.90.35 shouldn’t impact anything (nor tell us anything), as it only added an unused feature.

Sadly, none of the version changes tell us anything about the underlying network versioning: going from the broken to the working build the underlying protocol did change, so it should have been flagged as incompatible, if that makes sense?

6 Likes

It may be worth thinking about how to notify the user about incompatible versions.
I figured it out mostly intuitively, which is not a reliable method of diagnosing problems.

What was suspicious in the logs is this line:
[2023-09-22T12:54:35.922852Z TRACE sn_networking::event] KademliaEvent: UnroutablePeer peer_id=12D3KooWE75czdXUnZJ59gtMDwNZCyBx24whf9WXbNTmEoCaiUrA
and later
[2023-09-22T12:56:23.912284Z WARN sn_node::api] get_closest query failed after network inactivity timeout - check your connection: Could not get enough peers (8) to satisfy the request, found 1

But probably something like "Warning: Incompatible version" should be printed instead.
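In the meantime, a manual pre-flight check on the client side is about all we can do, something along these lines (the expected number is a placeholder you would take from the testnet announcement, and the same idea would apply to the node binary):

# sketch: warn if the installed client doesn't match the announced version
EXPECTED=0.82.0                            # placeholder, taken from the testnet post
INSTALLED="$(safe -V | awk '{print $2}')"  # "safe -V" prints e.g. "sn_cli 0.82.0"
if [ "$INSTALLED" != "$EXPECTED" ]; then
    echo "Warning: Incompatible version? installed $INSTALLED, announced $EXPECTED" >&2
fi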

1 Like

Yippee, I finally uploaded

======= Verification complete: all chunks paid and stored =============
Uploaded all chunks in 89 minutes 38 seconds
Uploaded Neuromancer.mp4 to 0a6b07c380372d1d7f23a5783d97ddbe5c2358db9df38aeb625c38a7c3562aa5

And downloaded

eddy@Hal9000:~$ safe files download Neuromancer.mp4 0a6b07c380372d1d7f23a5783d97ddbe5c2358db9df38aeb625c38a7c3562aa5
Built with git version: 2c47455 / main / 2c47455
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading Neuromancer.mp4 from 0a6b07c380372d1d7f23a5783d97ddbe5c2358db9df38aeb625c38a7c3562aa5

Client downloaded file in 1048.056986772s
Saved Neuromancer.mp4 at /home/eddy/.local/share/safe/client/Neuromancer.mp4

A big file for the first time :partying_face: thx to all you super ants who make this possible :exploding_head:

12 Likes

I started a 2.1 GB / 4058-chunk upload yesterday evening with -c 50 --batch-size 20.

Now in the morning I found out that it had not succeeded, falling just 6 chunks short:

======= Verification: 6 chunks were not stored. Repaying them in batches. =============
Failed to fetch 6 chunks. Attempting to repay them.
Cannot get store cost for NetworkAddress::ChunkAddress(540306(01010100).. -  - 54030663fe399acaa14ebf0f2cc146908cfcaa5a801c33c5d535f3162913d99e) with error CouldNotSendMoney("Network Error Not enough store cost prices returned from the network to ensure a valid fee is paid.")
Transfers applied locally
All transfers completed in 17.862961583s
Total payment: NanoTokens(10731) nano tokens for 5 chunks
Uploaded chunk #9810c3.. in 8 seconds
Uploaded chunk #eb0f4f.. in 10 seconds
Uploaded chunk #d12bb6.. in 10 seconds
Uploaded chunk #255f64.. in 11 seconds
Uploaded chunk #a9b056.. in 11 seconds
Error: 
   0: Chunks error Failed to get find payment for record: 54030663fe399acaa14ebf0f2cc146908cfcaa5a801c33c5d535f3162913d99e.
   1: Failed to get find payment for record: 54030663fe399acaa14ebf0f2cc146908cfcaa5a801c33c5d535f3162913d99e

Location:
   sn_cli/src/subcommands/files.rs:297

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

real	228m55,887s
user	659m6,345s
sys	15m9,114s

What might be some reasons for these errors?

It’s also quite bad that just 6 failed chunks fail the whole process. I hope that in the future there will be a way to avoid this.

5 Likes

There should be a way to just continue an upload after an interruption (of any kind).
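As a stop-gap, a crude retry loop might do, assuming a re-run's verification pass only repays the chunks that are still missing (which is what the log above suggests, but I'm not certain of it):

# sketch: keep re-running the same upload until it exits cleanly
FILE="$HOME/videos/big_upload.mp4"   # hypothetical path
until safe files upload -c 50 --batch-size 20 "$FILE"; do
    echo "upload failed, retrying in 60s..." >&2
    sleep 60
done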

5 Likes

After sixteen hours, vdash timelines (with 1-hour columns) for:

Earnings (total 4711, scale 0-798 nanos)

Storage Cost (scale 0-73 nanos)


PUTS (total 144, scale 0-67)


15 Likes

And that's good?

I hope it will be used soon; this gossipsub sounds juicy, with archiving nodes etc… right?

Edit:
Some interesting additional reading on the topic:

https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/episub.md

:fire:

6 Likes

Great improvements on this one!!
After some config battles with errors that seemed specific to my machine (a Mac), I could upload and download just fine yesterday, small files AND big files (200 MB).

But today I get this “SelfEncryption” error for all download attempts:

➜  ~ SN_LOG=ALL sudo -E safe files download Neuromancer.mp4 0a6b07c380372d1d7f23a5783d97ddbe5c2358db9df38aeb625c38a7c3562aa5
Built with git version: 2c47455 / main / 2c47455
Instantiating a SAFE client...
🔗 Connected to the Network
Downloading Neuromancer.mp4 from 0a6b07c380372d1d7f23a5783d97ddbe5c2358db9df38aeb625c38a7c3562aa5
Error downloading "Neuromancer.mp4": SelfEncryption Error A generic I/O error.

Anyone else getting that?

4 Likes

Worked for me. Thanks! Read the book ages ago, loved it.

Client download progress 1004/1004
Client downloaded file in 961.453831898s
Saved Neuromancer.mp4 at /home/user/.local/share/safe/client/Neuromancer.mp4
1 Like

Eh, same here, couldn’t resist…

Client downloaded file in 1378.413956969s
Saved Neuromancer.mp4 at /home/ubuntu/.local/share/safe/client/Neuromancer.mp4
1 Like