CloserNet [13/12/23] Testnet [Offline]

First track was fine, it didn’t like the “(” in the other one

safe files download “The Teskey Brothers - Hold Me (Official Video).webm” 2edd216e7b939080588c13815444bf9e472bcfad6137a242c7ba2f26f8bc9ef1
bash: syntax error near unexpected token `('

I’ll sort it later - need to run…
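
For anyone hitting the same thing: `(` and `)` are shell metacharacters, so bash tries to parse an unquoted filename containing them as syntax. Quoting (or escaping) the name fixes it. A quick sketch, with `<file-address>` as a placeholder for the real address:

```shell
# Parentheses are shell syntax, so quote (or escape) filenames that contain them.
name='The Teskey Brothers - Hold Me (Official Video).webm'

# Double quotes keep the name intact as a single argument:
printf 'safe files download "%s" <file-address>\n' "$name"

# Single quotes on the command line work just as well:
# safe files download 'The Teskey Brothers - Hold Me (Official Video).webm' <file-address>
```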

1 Like

weird, seems to work fine on my end (also in bash) :person_shrugging:

Client (read all) download progress 78/79
Client (read all) download progress 79/79
Client downloaded file in 29.268270551s
Saved The Teskey Brothers - Hold Me (Official Video).webm at /home/tom/.local/share/safe/client/The Teskey Brothers - Hold Me (Official Video).webm
2 Likes

The mem_usage of the testnet this time is supposed to be much flatter than last time.
I just checked the genesis node and one of our droplets (both have been running for over 30 hours) and they seem to be consuming much less memory than in the last testnet.
Genesis is consuming 72MB, and of the 20 nodes on the droplet most are below 40MB, with 3 around 70MB and the highest at 108MB.
The timing of the snapshot might affect the figures if the node was busy with something.
If the three nodes in your graph remain high in mem_usage, please give me a shout, and upload the logs of those three if possible.
Thank you very much.

13 Likes

I think it is great to see how users actually use it. Nobody can learn from what others do behind closed doors.

Anyway mostly recovered now except for these 3 madmen.

9 Likes

Those three look like they’re in some kind of trouble?
Would be interested to see what their logs say.

8 Likes

It will take a couple of hours but I will share them when I get to a PC.

Edit:
@qi_ma I could not get funds from the faucet to upload to the network.

11 Likes

It is my impression too that memory use is lower. I’m not sure if the profile is different over time, but for my ten nodes, which have been running for 11 hours, memory use is 23 to 60 MB, average 34 MB.

Storage cost is also more sensible: 13-45, average 27 nanos.

12 Likes

So really batch_size has become upload_queue_size

3 Likes

test@test-pc-q35-7-2:~$ curl -sSL https://raw.githubusercontent.com/maidsafe/safeup/main/install.sh | bash

curl: (28) SSL connection timeout

frilly@chintz:~$ curl -sSL https://raw.githubusercontent.com/maidsafe/safeup/main/install.sh  | bash
**************************************
*                                    *
*         Installing safeup          *
*                                    *
**************************************
Will retrieve safeup for x86_64-unknown-linux-musl architecture
Latest version of safeup is 0.4.1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 4097k  100 4097k    0     0  1949k      0  0:00:02  0:00:02 --:--:-- 4424k
safeup installed to /home/willie/.local/bin/safeup

The safeup binary has been installed, but it's not available in this session.
You must either run 'source ~/.config/safe/env' in this session, or start a new session.
When safeup is available, please run 'safeup --help' to see how to install network components.

#WorksForMe

I’ll give it a try on my laptop

1 Like

Prooobably. We should hopefully be tolerant enough to this, but it really also depends on node quantities!

Something to look into down the line :+1:

Okay, thanks for this, useful to get more perspectives here. There’s a pending PR that will trigger a new CLI release; I’m hoping folk should get a more stable upload process with that.

5 Likes

In some regards, but it’s also the number of chunks batched into payments here…

It may well be worth separating those concepts out. Mayybe no. Happy to hear thoughts on this as always!

6 Likes

Perhaps if an upload fails, instead of stopping it could automatically reduce the batch-size and restart… If you think in terms of a feedback-loop circuit, batch-size and other bandwidth-related magic numbers could be automated away.
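
Something like this could even be scripted today, outside the client. A sketch of that feedback loop, where `do_upload` is a stand-in for a real `safe files upload` run (the threshold below is made up, just simulating a network that copes with batches of 10 or fewer):

```shell
#!/usr/bin/env bash
# Sketch of an adaptive feedback loop: halve the batch size on each failure.
# `do_upload` is a stand-in for the real upload command; here it simply
# pretends the network only copes with batch sizes <= 10.
do_upload() {
  [ "$1" -le 10 ]
}

batch_size=40
until do_upload "$batch_size"; do
  batch_size=$(( batch_size / 2 ))
  if [ "$batch_size" -lt 1 ]; then
    echo "upload failed even at batch size 1" >&2
    exit 1
  fi
  echo "retrying with batch size $batch_size"
done
echo "uploaded with batch size $batch_size"
```

With the fake threshold above, the loop retries at 20, then 10, and succeeds. A real version would also want a backoff delay and perhaps a way to creep the batch size back up once uploads stabilise.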

4 Likes

Looks like overnight the runaway memory usage sorted itself out for most of my nodes.

— edit
That’s the last misbehaving node’s memory dropped back down to the baseline with the rest.

I did have to shut down one machine as it had consumed around 500GB of data in the last 18 hours.

12 Likes

#metoo this is excellent news and was certainly not the case in previous tests.
What went up did not come down. :clap:

14 Likes

I’d be keen to see how folk suffering upload issues fare with the latest client 0.86.62 :eyes:

13 Likes

Hi, @Josh
Just had a quick scan through the log files you shared.
Due to log file rotation, the 9 files for the 3 nodes you shared cover these periods:
node 1: 13th Dec 21:01:13 - 14th Dec 00:29:30
node 2: 13th Dec 22:22:26 - 14th Dec 00:31:30
node 3: 13th Dec 22:34:58 - 14th Dec 00:33:30
Your graph shows at least one node (or even all three) had a mem spike during that period.

But the logs show the mem_usage was not wild (searching for the keyword memory_used_mb among the logs):
one node remains under 40MB, and the other two remain under 70MB.
There is just total_mb_written having a value over 400MB.

Could you let me know how the graph collects its statistics (parsed from the log files, or using OS figures)?
Thank you very much.
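
For anyone wanting to do the same check on their own nodes, something like this pulls the peak value out of the rotated logs. Note the `memory_used_mb=NN` key=value line format below is an assumption for illustration; adjust the pattern to whatever your logs actually contain:

```shell
# Sketch: find the peak memory_used_mb across a node's (rotated) log files.
# The log line format here is a made-up fixture; adapt the grep to the real logs.
log=$(mktemp)
cat > "$log" <<'EOF'
[2023-12-13T21:01:13] metrics memory_used_mb=38
[2023-12-13T22:01:13] metrics memory_used_mb=67
[2023-12-13T23:01:13] metrics memory_used_mb=41
EOF
grep -o 'memory_used_mb=[0-9]*' "$log" | cut -d= -f2 | sort -n | tail -1   # prints 67
rm -f "$log"
```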

11 Likes

Seems to be working- reuploaded a missing chunk automatically :+1:

Starting to chunk "intro-to-llms.webm" now.
Chunking 1 files...
Uploading 325 chunks
⠤ [00:14:19] [#######################################>] 323/325                 

Retrying failed chunks 1, attempt 0/3...
**************************************
*          Uploaded Files            *
**************************************
"intro-to-llms.webm" 6c673b825760e616e2d41315216d1aeaecb8bc0011375987bd18718e73a1adf8
Uploaded 325 chunks (with 0 exist chunks) in 14 minutes 36 seconds
**************************************
*          Payment Details           *
**************************************
Made payment of 0.000020291 for 325 chunks
Made payment of 0.000003418 for royalties fees
New wallet balance: 199.999888709

10 Likes

Well, really it just seems like you add chunks to the payment queue initially and start paying for those, then add more as those in the queue complete. Seems very much like a queue to me.

Maybe upload_batch_queue_size. That might make it clearer that it’s not a one-out-one-in type of queue, but a queue where there is no regard to order of completion. I don’t think there is a technical name for a queue where there is order for joining the queue but no order for exit (completion).
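
In practice that “ordered entry, unordered exit” shape is just a bounded pool of in-flight jobs. A minimal shell sketch of the semantics, with `echo` standing in for the per-chunk upload:

```shell
# Chunks enter the queue in order, but up to 4 are in flight at once and
# finish in whatever order the network allows. `echo` is a stand-in for
# the real per-chunk upload command.
printf 'chunk_%d\n' $(seq 1 10) | xargs -P 4 -I {} sh -c 'echo "uploading {}"'
```

With a real (slow, variable-latency) upload in place of `echo`, the completion order would scramble while the submission order stays fixed, which is exactly the behaviour being described.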

1 Like