QuicNet [30/01/24 Testnet] [Offline]

Hey friend! … you know we may be distantly related … how about you lend us a small sum of your newfound fortune so I can feed my fish? He’s soooo hungry. :laughing:

3 Likes

Hi @stout77 and @Toivo,

Could you please try re-uploading your failed chunks?
And let me know whether it succeeded or not.

If it still fails, please send me the log.

Thank you very much.

3 Likes

Can I use the same old CLI?

2 Likes

Hey @stout77 :wave:

I have noticed an upward trend too. Memory does get released, but the trend seems to be getting worse.

3 Likes

yeah, you can.

2 Likes

The curves do look in bad shape,
will have a check
:thinking:

4 Likes

Failed still. I’ll DM the logs to you in a minute.

2 Likes

```
Chunking 52208 files…
⠁ [00:01:43] [###############################>--------] 40978/52208
memory allocation of 2097152 bytes failed
```

…and it crashed some tabs in the Brave browser.

So I presume it needs to write the chunks to RAM, and since I don't run a paging file, it hits the wall.

Maybe the program should check for available RAM and the presence of a paging file before starting to chunk?
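
Something along those lines is sketched below: a minimal pre-flight check in Rust using the `sysinfo` crate. The crate choice, the 4 GiB estimate, and the function names are assumptions for illustration, not anything the current client does:

```rust
// Hypothetical pre-flight check before chunking: make sure available
// RAM plus swap (zero swap = no paging file) covers the estimated
// working set. Uses the `sysinfo` crate (0.30+, which reports bytes).
use sysinfo::System;

fn preflight_ok(estimated_bytes: u64) -> bool {
    let mut sys = System::new();
    sys.refresh_memory();
    // With no paging file, total_swap() is 0 and only free RAM counts.
    let headroom = sys.available_memory() + sys.total_swap();
    headroom > estimated_bytes
}

fn main() {
    let estimate: u64 = 4 * 1024 * 1024 * 1024; // assume ~4 GiB for a large upload
    if !preflight_ok(estimate) {
        eprintln!("Not enough free RAM/swap for chunking; aborting.");
        std::process::exit(1);
    }
    // …start chunking here…
}
```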

2 Likes

It’s actually writing chunks to disk, or should be. cc @roland on this one :thinking:

1 Like

I just did, and 8/10 of the previously failing chunks uploaded. Still stuck with 2 failing after two additional tries.

1 Like

Could you share the log of your last upload of those 2 failing chunks? thx

Working now. :+1:

@qi_ma, what was it about?

1 Like

log_2024-02-01_00-09-27.zip (109.3 KB)

Which brings up the point… chunks are stored under C: on Windows, which usually has the smallest capacity.

It's not hard to overwhelm this capacity, so shouldn't we have the ability to assign a dedicated drive to this chunking task?

…and the concept was, once you log off from Safe, all the files disappear… so all good there, it's just the capacity to chunk really big files…
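
For illustration only, a dedicated chunk directory could be exposed as a CLI flag. Here's a sketch with clap's derive API; `--chunk-dir` is a hypothetical option, not something the existing CLI offers:

```rust
// Hypothetical `--chunk-dir` flag letting uploads write temporary
// chunks to a chosen drive instead of the system temp dir (which on
// Windows usually lives under C:). Sketched with clap 4's derive API.
use clap::Parser;
use std::path::PathBuf;

#[derive(Parser)]
struct UploadOpts {
    /// Directory for temporary chunk artifacts during upload.
    #[arg(long, default_value_os_t = std::env::temp_dir())]
    chunk_dir: PathBuf,
}

fn main() {
    let opts = UploadOpts::parse();
    println!("chunking into {}", opts.chunk_dir.display());
}
```

Running with `--chunk-dir D:\safe-chunks` would then keep the chunk artifacts off the system drive.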

2 Likes

As we are now launching the testnet using node_manager, there are some rebooted nodes causing a verification key mismatch issue.
We have to terminate such nodes manually for QuicNet, and will have it properly addressed in the next run.

3 Likes

I have lost the two nodes with the highest mem you see there.

3 Likes

Yeh, that makes sense. That will probably come down the line :+1:

1 Like

Hi @TylerAbeoJordan, could you try re-uploading again? thx

1 Like

@qi_ma the above has the complete log for one of my nodes that was killed due to an explosion in RAM use.

2 Likes

It would also help a lot if chunking / upload / deletion of uploaded chunks were part of a batched process.

Can that be done with self encryption currently? It would need pause/resume, so maybe that isn't supported yet.
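
To illustrate the idea, a minimal sketch of such a batched loop, where only one file's chunks ever sit on disk at a time; `encrypt_to_dir` and `upload_chunk` are hypothetical stand-ins, not the actual self_encryption or client APIs:

```rust
// Sketch of a batched chunk -> upload -> delete pipeline: chunk one
// file, push its chunks, remove them, then move on. Disk usage stays
// bounded to a single file's chunks.
use std::{fs, io, path::{Path, PathBuf}};

fn upload_files_batched(files: &[PathBuf], work_dir: &Path) -> io::Result<()> {
    for file in files {
        // 1. Chunk one file at a time into the working directory.
        let chunks = encrypt_to_dir(file, work_dir)?;
        // 2. Upload each chunk, then delete it straight away.
        for chunk in chunks {
            upload_chunk(&chunk)?;
            fs::remove_file(&chunk)?;
        }
    }
    Ok(())
}

// Hypothetical stand-ins for self_encryption and the network client.
fn encrypt_to_dir(_file: &Path, _work_dir: &Path) -> io::Result<Vec<PathBuf>> {
    unimplemented!()
}
fn upload_chunk(_chunk: &Path) -> io::Result<()> {
    unimplemented!()
}
```

Pause/resume would then mostly be a matter of recording which files in the list have already completed.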