CloserNet [13/12/23] Testnet [Offline]

I’m not aware of any changes that should affect that - what are others seeing? My PUTS look about right.

Latest vdash is 0.16.0, yes?

2 Likes

Yes correct

3 Likes

First upload attempt failed.

Error:
0: Network Error Could not retrieve the record after storing it: 43aeb80a7b41716a4b2a44a0fa51ec576a82371bdec2d203d592d9263e432326(6b86c6ce78cbb3df1be04aa3baec4133be260beef28290ffd037f93f9368b705).
1: Could not retrieve the record after storing it: 43aeb80a7b41716a4b2a44a0fa51ec576a82371bdec2d203d592d9263e432326(6b86c6ce78cbb3df1be04aa3baec4133be260beef28290ffd037f93f9368b705)

log:
safe.log.zip (2.9 MB)

Second attempt failed:

Input was split into 188 chunks
⠁ [00:00:00] [----------------------------------------] 0/188 Will now attempt to upload them…
Error:
0: Network Error Could not retrieve the record after storing it: 2813f7ace06f65f4179736b16ee514dfcb77e4e0647f4a92308df1757c20d697(f1fa77417c40f1f85e9fdc0da01458932308c1d5a4807903e4886cfcfa1918e8).
1: Could not retrieve the record after storing it: 2813f7ace06f65f4179736b16ee514dfcb77e4e0647f4a92308df1757c20d697(f1fa77417c40f1f85e9fdc0da01458932308c1d5a4807903e4886cfcfa1918e8)

safe.log.zip (1.9 MB)

Should I just keep trying over and over again? On the second attempt it seems it didn’t even upload one chunk.

1 Like

It’s probably failing to get the paid record there, then. Please keep trying for a few more runs to see how it goes.

I suspect I might have dialed down the validation waits too low here. :thinking:
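To illustrate (a minimal sketch, not the actual client code): if the client verifies a store by re-fetching the record, and the wait before verification is too short, the fetch can fail even though the store succeeded, which would surface as the “Could not retrieve the record after storing it” errors above. `put_and_verify`, `SlowStore`, and the timing parameters here are all hypothetical.

```python
import time

def put_and_verify(put, get, addr, data, wait=0.0, retries=3):
    """Store a record, then try to fetch it back to confirm it landed."""
    put(addr, data)
    for attempt in range(retries):
        time.sleep(wait * (attempt + 1))  # validation wait, with backoff
        if get(addr) == data:
            return True
    return False  # would surface as "Could not retrieve the record after storing it"

class SlowStore:
    """Toy store whose records only become visible after a couple of fetches,
    standing in for replication delay on the real network."""
    def __init__(self, visible_after=2):
        self.records, self.fetches = {}, {}
        self.visible_after = visible_after

    def put(self, addr, data):
        self.records[addr] = data

    def get(self, addr):
        self.fetches[addr] = self.fetches.get(addr, 0) + 1
        if self.fetches[addr] >= self.visible_after:
            return self.records.get(addr)
        return None

impatient = SlowStore()
print(put_and_verify(impatient.put, impatient.get, "rec", b"chunk", retries=1))  # False
patient = SlowStore()
print(put_and_verify(patient.put, patient.get, "rec", b"chunk", retries=3))  # True
```

With too few retries (or too short a wait), verification gives up before the record is visible, even though the data is actually there.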

1 Like

@joshuef, could you elaborate a bit on what this actually means, please? Do you think it’s worth testing different batch sizes? What batch size was this again? I have forgotten. And what is the default batch size?

2 Likes

Thanks so much to the entire Maidsafe team for blessing us with this new testnet! :clap: :clap: :clap:

And also to all of our volunteer testers! :clap: :clap: :clap:

14 Likes

@happybeing Vdash is showing me 2 PUTs for both of my 2 nodes, but the record store folders have about 80 items each.

6 Likes

A whole gigabyte of your favourite random combinations of ones and zeros here, cos it’s nearly Xmas.
Uploaded without a hitch in 45 mins 50 sec.


"1g" ee88d41b4e7b9135bcd09193858ef1d29d2c5e4218766ef07a2264634f4c0c57

md5sum 6da91356e03dec123d0987fab579a625
12 Likes

Same issue!

...
/ip4/178.128.33.104/tcp/45905/p2p/12D3KooWDNE9VwRCpbyRaEnaT3mjPQsSoS4QNx2GSLC1dFBFvhvN
/ip4/138.68.161.31/tcp/46863/p2p/12D3KooWF4tPoWb3Gu6mR7WqbQeSs8ZbWX9UZHfw7rSTtN7sYe4C
🔗 Connected to the Network
Starting to chunk "/Users/thomasrivoalan/Desktop/Flipper-mac.dmg" now.
Chunking 1 files...
Input was split into 169 chunks
Will now attempt to upload them...
Error: 
   0: Network Error Could not retrieve the record after storing it: 0196725fb1062efb455295b8b7ef5040ee3e5d6a3e9642fd09f1503cdcf84fb5(d31b32c7238237d5b8e3bddd06ef8a80f4755f82338c7f078f173193d2ed7577).
   1: Could not retrieve the record after storing it: 0196725fb1062efb455295b8b7ef5040ee3e5d6a3e9642fd09f1503cdcf84fb5(d31b32c7238237d5b8e3bddd06ef8a80f4755f82338c7f078f173193d2ed7577)

80 MB file.
Upload got stuck at 168/169 for a few minutes before throwing the error.

Upload and download were fine for another 10 MB file though!
And no problem with the ubuntu file, super fast download :grinning:

4 Likes

So previously, we did one batch at a time. We did not start a new batch until all chunks were completed. So one straggler held everything up.

So, we batch chunks for payment, i.e. we pay for batch_size chunks together. This helps prevent too much waiting on payment verification.

That doesn’t change. But now we are continuously adding to the upload queue, so as soon as one chunk completes, we start payment for the next batch and begin uploading a new chunk.

I.e., there should always be batch_size chunks uploading in parallel now (with small interludes for payment at the start of each batch), so there’s less wasted time.
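The scheme described above could be sketched like this (a toy simulation for illustration, not the real sn_cli code; `upload_pipelined` and its queues are assumptions):

```python
from collections import deque

BATCH_SIZE = 64  # default batch size, as confirmed later in the thread

def upload_pipelined(chunks, batch_size=BATCH_SIZE):
    unpaid = deque(chunks)  # not yet paid for
    paid = deque()          # paid, waiting to upload
    in_flight = []          # currently uploading
    completed = []

    def pay_next_batch():
        # One payment covers up to batch_size chunks together.
        for _ in range(batch_size):
            if not unpaid:
                break
            paid.append(unpaid.popleft())

    def fill_pipeline():
        # Keep batch_size uploads in flight at all times, paying for a
        # new batch whenever the paid queue runs dry.
        while len(in_flight) < batch_size and (paid or unpaid):
            if not paid:
                pay_next_batch()
            in_flight.append(paid.popleft())

    fill_pipeline()
    while in_flight:
        # One chunk finishes; backfill immediately instead of waiting for
        # the whole batch, so a single straggler can't stall the rest.
        completed.append(in_flight.pop())
        fill_pipeline()
    return completed

print(len(upload_pipelined(range(188))))  # 188, as in the failed upload above
```

The old behaviour was the `fill_pipeline` call happening only once per batch; the change is calling it on every completion.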

7 Likes

Trying again has worked for me every time so far.

6 Likes

Thanks, this is exactly what I wanted to know! :+1:

I tried to upload with batch size 60, and that led to about half of the chunks needing re-upload. It also jammed my over-sensitive router; possibly the number of connections was that much higher with the larger parallelism. I might try a batch size that’s a tad lower than the default. So could you still give me the default number, please?

4 Likes

Indeed! Worked for me as well.
And it uploaded only the one last chunk it got stuck on during the previous attempt.
Then download was fine.
:clap:

11 Likes

This might be a stupid idea, but how about allowing the client who uploaded the file(s) to heal missing chunks by checking the health of the stored chunks (e.g. which are present and which are lost) and then letting them re-upload the missing ones for free?

This would be more of an emergency bandage but a very useful one.
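The idea could be sketched roughly like this (hypothetical `fetch`/`store` callables standing in for the real network calls; not the actual API):

```python
def heal_missing_chunks(chunk_addresses, fetch, store):
    """Probe each uploaded chunk and re-upload only the ones that are gone."""
    missing = [addr for addr in chunk_addresses if fetch(addr) is None]
    for addr in missing:
        store(addr)  # already paid for, so re-uploading would be free
    return missing

# Toy network: chunk "b" was lost after the original upload.
stored = {"a": b"data-a", "c": b"data-c"}
healed = heal_missing_chunks(
    ["a", "b", "c"],
    fetch=stored.get,
    store=lambda addr: stored.setdefault(addr, b"re-uploaded"),
)
print(healed)  # ['b']
```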

4 Likes

default is 64

Not at all.

We’re writing such things right now.

We had a similar mechanism to this, but it was a bit sprawling, and was lost in a strip back/refactor. I’m adding in some basic catch/retry over chunking now, actually

10 Likes

Hi all,

This is starting to look really promising, well done to all involved.

My main question is (sorry if this is obvious to some)

What big-ticket items still need to be:

  1. Built from scratch
  2. Fine-tuned

before an MVP can go live?

Thanks

6 Likes

After Ctrl+C-ing a big upload, the retries now give me this error:

Error: 
   0: Transfer Error I/O error: Could not read and deserialize wallet file after multiple attempts.
   1: I/O error: Could not read and deserialize wallet file after multiple attempts
   2: Could not read and deserialize wallet file after multiple attempts

Location:
   sn_cli/src/subcommands/files/mod.rs:159

Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

Deleting the wallet and re-fauceting solved the problem.

2 Likes

@JPL and @Toivo thanks for your reports. Please can you try the following and see what happens (maybe pipe the output into wc -l to count the matches):

grep -e "Wrote record\|ValidSpendRecordPutFromNetwork" ~/.local/share/safe/node/*/logs/safenode.log*

Is anyone else seeing very few PUTS in vdash and if not, what version are you running?

I got 5 of them, which matches the number in Vdash.

2 Likes

10 for my three nodes. Same as per vdash.

root@localhost:~# grep -e "Wrote record\|ValidSpendRecordPutFromNetwork" ~/.local/share/safe/node/*/logs/safenode.log*
/root/.local/share/safe/node/12D3KooWEgsuwJSNDKcCu1jR6aczVtm1GsxiFiTXA6cwiKV1KP5h/logs/safenode.log.20231213T123531:[2023-12-13T10:58:42.483513Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(00274925549c41447033d77248af6ddbbece06635d7924f5d081220eb6ba2b09(97c90b07eeb3ca4cb2d8bee8b5ca626b80a207dd7ebc7e9537464b3eb8ef29b1))
/root/.local/share/safe/node/12D3KooWNwNjFMY5k2Akifx8umDdJYHjHognDUkdEdBXZSgqd6vg/logs/safenode.log.20231213T122712:[2023-12-13T10:58:29.225595Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(2c1449f429cf59d7d88a7c562554eec4ce6a13bffd985b4cf32edc70baf72cc2(05fc70ae90a3a5e204c7390396cc3abca43d5d1f267653f5ca5924013ad40935))
/root/.local/share/safe/node/12D3KooWNwNjFMY5k2Akifx8umDdJYHjHognDUkdEdBXZSgqd6vg/logs/safenode.log.20231213T134403:[2023-12-13T12:51:53.655596Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(ea5c95776e980151d8c8c9f3909eb89b8d7f025bd26bac924ae79e6e60954897(05c4f77700916c509e5a769ac1c8156ec3ae131a1ee85a66e5b73967cc6f6d67))
/root/.local/share/safe/node/12D3KooWNwNjFMY5k2Akifx8umDdJYHjHognDUkdEdBXZSgqd6vg/logs/safenode.log.20231213T134403:[2023-12-13T12:53:28.151102Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(412ce132c97f7acc8cf4dee9bd6e2fffa948dff78260880f20b4a437e1bceb86(05ea5c1b29c2e6ea8a81f528d5a3a1a3f3bf733e0c2dde7b7f4911a366699601))
/root/.local/share/safe/node/12D3KooWNwNjFMY5k2Akifx8umDdJYHjHognDUkdEdBXZSgqd6vg/logs/safenode.log.20231213T134403:[2023-12-13T12:53:28.542911Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(412ce132c97f7acc8cf4dee9bd6e2fffa948dff78260880f20b4a437e1bceb86(05ea5c1b29c2e6ea8a81f528d5a3a1a3f3bf733e0c2dde7b7f4911a366699601))
/root/.local/share/safe/node/12D3KooWPqRqzA5z9fYTyzQh7CDu6WiuXevZqtXkvnxAaRcsLqDD/logs/safenode.log:[2023-12-13T13:36:39.439346Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(76bc1b2fb9b2717ea109cdac77a992b3b0e27a391f105b8f930c6d93e138f13a(e516b4e37afcd6048eb8675a7fea21731f1e9a27c864d0fdef06550982073b5c))
/root/.local/share/safe/node/12D3KooWPqRqzA5z9fYTyzQh7CDu6WiuXevZqtXkvnxAaRcsLqDD/logs/safenode.log:[2023-12-13T13:36:39.820434Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(76bc1b2fb9b2717ea109cdac77a992b3b0e27a391f105b8f930c6d93e138f13a(e516b4e37afcd6048eb8675a7fea21731f1e9a27c864d0fdef06550982073b5c))
/root/.local/share/safe/node/12D3KooWPqRqzA5z9fYTyzQh7CDu6WiuXevZqtXkvnxAaRcsLqDD/logs/safenode.log:[2023-12-13T14:20:56.523580Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(039777716fd3d86f42334dddb7f62ac774d8ff96ee4708e34b4e859ceb5bcc96(e502baae3955bbabbe7a84415cf5387fa20ed715090d0545d11f0dc1700de252))
/root/.local/share/safe/node/12D3KooWPqRqzA5z9fYTyzQh7CDu6WiuXevZqtXkvnxAaRcsLqDD/logs/safenode.log.20231213T131003:[2023-12-13T10:58:19.204017Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(4ad607851f0de80339695d2761cfca5933431b2ccc801e95b84adb9e89643c6e(e53a16bca08c41b6f7fb793f582ce0de266e0cc6e8d82480fb8052c20c5fd9e6))
/root/.local/share/safe/node/12D3KooWPqRqzA5z9fYTyzQh7CDu6WiuXevZqtXkvnxAaRcsLqDD/logs/safenode.log.20231213T131003:[2023-12-13T10:58:19.396520Z INFO sn_node::log_markers] ValidSpendRecordPutFromNetwork(c50d26e1c918fea278498a18023f2a22ee09879de057ff3b226e965f45f457ec(e5842bca0923510bd6cabca20498989096eec7473126a9ba37e03aa42464883c))
root@localhost:~# grep -e "Wrote record\|ValidSpendRecordPutFromNetwork" ~/.local/share/safe/node/*/logs/safenode.log* | wc -l
10

1 Like