CloserNet [13/12/23] Testnet [Offline]

How many nodes is that for? If you upload the logfiles for one node which has at least one match in it, ideally one with lots of records I’ll take a look.

For two. I’ll DM you the logs.

Hmm, honestly not sure. Both should be validating against the net. cc @roland, does anything occur to you?

"12D3KooWNwN-logs.tar.gz" 59d0859970bcc438733924bd5a2783117d22a865a16d78f122a87724956a01ea

First try…

Error:
0: Network Error Could not retrieve the record after storing it: 6b15aed0bc8536a09639b3ccf3141027893b0c34aaac0922e683d7d6feda7dab(ea3d0d6d85bc6bcb67d4b371bb8658e761c28d92c083491ea9dc0776a3720624).
1: Could not retrieve the record after storing it: 6b15aed0bc8536a09639b3ccf3141027893b0c34aaac0922e683d7d6feda7dab(ea3d0d6d85bc6bcb67d4b371bb8658e761c28d92c083491ea9dc0776a3720624)

Location:
sn_cli/src/subcommands/files/mod.rs:424

There seem to be four types of PUT in the logs:

ValidSpendPutFromClient
ValidSpendRecordPutFromNetwork
ValidPaidChunkPutFromClient
ValidChunkRecordPutFromNetwork

Counting them all gets to roughly the right number – 116 for the logged node.
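
If anyone else wants to sanity-check their own node, a rough tally along these lines should do it – a sketch only, and the logfile path is an assumption (point it at wherever your node is writing logs):

for put in ValidSpendPutFromClient ValidSpendRecordPutFromNetwork ValidPaidChunkPutFromClient ValidChunkRecordPutFromNetwork; do
  printf '%s: ' "$put"; grep -c "$put" safenode.log
done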

@joshuef

So after two additional failures, I started lowering the batch size. At BS 10 I was able to upload a couple of chunks, but I had to go down to BS 5 to get the file to upload fully.

Perhaps due to whatever changes you made since the last testnet, the default batch size needs to be smaller – for me and my slow connection in Tassie, anyway. :wink:

That suggests we’re running different node versions or running with different logfile output. :man_shrugging: Will look into it, thanks.

@joshuef do you know of any reason why some nodes would be outputting different logfile messages for PUTs than others? They appear to report the same version in the logfile, but JPL is seeing different messages from usual, whereas mine seem normal. Toivo has the same issue as JPL.

@JPL and @Toivo can you confirm what SN_LOG setting you are using when starting your node(s)?

Out of the box setting :slightly_smiling_face:

Which is?

tenchars

Yeh, that may well be the case actually – we continually hammer more now, so it would be more intense on the connections.

Can more folk have a play with different batch sizes to see what suits best here?
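
Something along these lines would make the sweep easy – just a sketch, using the --batch-size flag already mentioned in this thread; the file name and the sizes to try are placeholders:

for bs in 40 20 10 5; do
  echo "trying batch size $bs"
  safe files upload --batch-size "$bs" testfile.bin && break
done

That stops at the largest batch size that uploads successfully, which is the number worth reporting back.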

SN_LOG=all is what jumps to mind, but that doesn't seem to be it :thinking:? (Logging is set up here: safe_network/sn_logging/src/layers.rs at main · maidsafe/safe_network · GitHub)

@JPL and @Toivo is it possible that you’ve omitted to use SN_LOG=all when starting your nodes?

I note that neither of your logs contains any messages with “Wrote record” in them, as here:

[2023-12-13T15:27:33.844872Z TRACE sn_networking::record_store] Wrote record 0e565b3f8b9f0cc8d05b5eba55027ffa98d34d7dde82aeb49afba2754eed9994(7c7eae8b63f6dcf824feb64eb2788e81c9f3cd2d605f58ac9356f3855677999d) to disk! filename: 0e565b3f8b9f0cc8d05b5eba55027ffa98d34d7dde82aeb49afba2754eed9994

If you are using SN_LOG=all there may be another reason why this message isn’t being logged, but I don’t know what it would be – the message does appear in my node’s logs, and it’s the same build as yours.
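
A quick way to check your own logs for it (the path is a placeholder – use wherever your node writes its logfiles):

grep -c 'Wrote record' /path/to/your/node/logs/safenode.log

If that prints 0, the TRACE-level record_store messages aren’t reaching your logfile at all.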

How to find out? I just started the nodes.

In most of the uploads I get the error: "Could not retrieve the record after storing it".

In the last upload, on the other hand, I got this one:

Error:
0: Failed to pay for chunks: Transfer Error Failed to send tokens due to No Store Cost Responses.

Location:
sn_cli/src/subcommands/files/mod.rs:264

8 new nodes joining from home! :raised_hands:

I thought that wasn’t a requirement any more? OK that will be it then.

echo $SN_LOG in the terminal you used to start the nodes will show the value, but this will only be correct if you used export SN_LOG=all to modify the environment.

You could look at your command history in that terminal with history | grep SN_LOG to see if you set it before starting the node, or on the node command line.
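
i.e. something like this (the :-unset fallback just makes it obvious when the variable isn’t set at all):

echo "${SN_LOG:-unset}"
history | grep SN_LOG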

Yorr avin a larf :rofl:

This is all I have done there. I never set anything else up – I just started the nodes:

topi@topi-HP-ProBook-450-G5:~$ safeup node --version 0.100.3
**************************************
*                                    *
*          Installing safenode       *
*                                    *
**************************************
Installing safenode for x86_64-unknown-linux-musl at /home/topi/.local/bin...
Installing safenode version 0.100.3...
  [########################################] 8.91 MiB/8.91 MiB
safenode 0.100.3 is now available at /home/topi/.local/bin/safenode
topi@topi-HP-ProBook-450-G5:~$ nohup safenode --port=12000 &
nohup safenode --port=12001 &
[1] 63510
[2] 63511
topi@topi-HP-ProBook-450-G5:~$ nohup: ignoring input and appending output to 'nohup.out'
nohup: ignoring input and appending output to 'nohup.out'


EDIT:

Buuut actually the history | grep SN_LOG gave this:

 1120  export SN_LOG=all safe files
 1121  time SN_LOG=all safe files upload -c 5 --batch-size 5 ToivosPORTFWD.png
 1126  export SN_LOG=all safenode
 1130  export SN_LOG=all safenode
 1133  export SN_LOG=all safe files
 1141  export SN_LOG=all safe
 1161  export SN_LOG=all safenode
 1171  export SN_LOG=all safenode
 1180  export SN_LOG=all safenode
 1184  export SN_LOG=all safenode
 1188  export SN_LOG=all safenode
 1195  export SN_LOG=all safenode
 1201  export SN_LOG=all safenode
 1211  export SN_LOG=all safe
 1586  history | grep SN_LOG
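
Worth noting what those entries actually do: in bash, export takes a list of variable names, so a line like

export SN_LOG=all safenode

sets SN_LOG=all in the current shell and marks a variable called safenode for export – it does not run safenode. Nodes started later from that same shell would inherit SN_LOG=all, but nodes started from a different terminal would not, which could explain logs without the TRACE messages. (The reply below shows the working forms.)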

EDIT 2:

Still, in the previous testnet the PUT count was, if not correct, at least not so wildly off… :thinking:

Ah ok, if you want to use vdash to monitor a node you need to have SN_LOG=all set. You can do this in one of two ways.

Either:

export SN_LOG=all
safenode

or

SN_LOG=all safenode

And you could save yourself the trouble of doing this every time by putting export SN_LOG=all in your ~/.bashrc file (assuming you use the bash shell).
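
A minimal version of that, assuming bash:

echo 'export SN_LOG=all' >> ~/.bashrc
source ~/.bashrc
safenode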

You may be having the same problem as me – as a failsafe try:

safe files upload --batch-size 5 "filename"