Hey friend! … you know we may be distantly related … how about you lend us a small sum of your newfound fortune so I can feed my fish? He’s soooo hungry.
Could you please try re-uploading your failed chunks?
And let me know whether it succeeded or not.
If it still fails, please send me the log.
Thank you very much.
Can I use the same old CLI?
Hey @stout77
I have noticed an upward trend too. Memory does get released, but the trend seems to be that it is getting worse.
yeah, you can.
the curves do look to be in bad shape
will have a check
Failed still. I’ll DM the logs to you in a minute.
Chunking 52208 files…
⠁ [00:01:43] [###############################>--------] 40978/52208
memory allocation of 2097152 bytes failed
…and crashed some tabs in Brave browser
So I presume it needs to write the chunks to RAM, and since I don’t run a paging file, it hits the wall.
Maybe the program should check for available RAM and the presence of a paging file before starting to chunk?
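Just to sketch what I mean (this is only an illustration using the sysinfo crate; the threshold and function names are made up, not anything the current CLI does):

```rust
// Hypothetical pre-flight check before chunking starts.
// Assumes a recent `sysinfo` crate; units and method locations differ
// slightly between sysinfo versions.
use sysinfo::System;

/// Arbitrary headroom requirement, purely for illustration.
const MIN_AVAILABLE_BYTES: u64 = 2 * 1024 * 1024 * 1024; // 2 GiB

fn preflight_memory_check() -> Result<(), String> {
    let mut sys = System::new_all();
    sys.refresh_memory();

    let available = sys.available_memory(); // bytes of RAM currently available
    let swap = sys.total_swap();            // 0 when no paging file is configured

    if swap == 0 && available < MIN_AVAILABLE_BYTES {
        return Err(format!(
            "only {available} bytes of RAM available and no paging file configured; \
             chunking may abort with an allocation failure"
        ));
    }
    Ok(())
}

fn main() {
    if let Err(msg) = preflight_memory_check() {
        eprintln!("warning: {msg}");
    }
}
```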
It’s actually writing chunks to disk, or should be. cc @roland on this one
I just did and 8/10 of the previously failing chunks uploaded. Still stuck with 2 failing after two additional tries.
could you share the log from your last upload of those 2 failing chunks? thx
Working now.
@qi_ma, what was it about?
log_2024-02-01_00-09-27.zip (109.3 KB)
Which brings up the point…chunks are stored under C: on Windows, which usually has the smallest capacity.
It’s not hard to overwhelm that capacity, so should we have the ability to assign a dedicated drive to this chunking task?
…and the concept was, once you log off from Safe, all the files disappear…so all good there, it’s just the capacity to chunk really big files…
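For illustration, letting users point chunking at a bigger drive could be as simple as an environment-variable override; the SAFE_CHUNK_DIR name and the fallback path below are made up, not a real option of the CLI:

```rust
// Hypothetical: pick where temporary chunk artefacts are written, so they can
// live on a drive other than C:. Uses only the standard library.
use std::env;
use std::path::PathBuf;

fn chunk_artifact_dir() -> PathBuf {
    // e.g. set SAFE_CHUNK_DIR=D:\safe-chunks to keep temporary chunks off C:
    env::var_os("SAFE_CHUNK_DIR")
        .map(PathBuf::from)
        .unwrap_or_else(|| env::temp_dir().join("chunk_artifacts"))
}

fn main() -> std::io::Result<()> {
    let dir = chunk_artifact_dir();
    std::fs::create_dir_all(&dir)?;
    println!("writing chunk artefacts under {}", dir.display());
    Ok(())
}
```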
As we’re now launching the testnet using node_manager, some rebooted nodes are causing a verification key mismatch issue.
We have to terminate such nodes manually for Quicnet, and will have it properly addressed in the next run.
I have lost the two nodes with the highest mem you see there.
Yeah, that makes sense. That will probably come down the line.
Hi @TylerAbeoJordan, could you try re-uploading again? thx
@qi_ma the above has the complete log for one of my nodes that was killed due to an explosion in RAM use.
It would also help a lot if chunking, uploading, and deleting of uploaded chunks were part of a batched process.
Can that be done with self encryption currently? It would need pause/resume, so maybe it’s not supported yet.
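Roughly what the batched flow could look like, purely as a sketch; chunk_file and upload_chunk here are placeholders, not the real self_encryption or CLI APIs:

```rust
// Sketch of a batched chunk -> upload -> clean-up loop, so local disk usage
// stays bounded by one batch at a time. All names are placeholders.
use std::fs;
use std::path::{Path, PathBuf};

const BATCH_SIZE: usize = 64; // arbitrary

fn chunk_file(_file: &Path, out_dir: &Path) -> std::io::Result<Vec<PathBuf>> {
    // placeholder: real code would call into self_encryption here
    fs::create_dir_all(out_dir)?;
    Ok(Vec::new())
}

fn upload_chunk(_chunk: &Path) -> std::io::Result<()> {
    // placeholder: real code would push the chunk to the network
    Ok(())
}

fn upload_in_batches(files: &[PathBuf], out_dir: &Path) -> std::io::Result<()> {
    for batch in files.chunks(BATCH_SIZE) {
        // 1. chunk this batch
        let mut produced = Vec::new();
        for file in batch {
            produced.extend(chunk_file(file, out_dir)?);
        }
        // 2. upload the chunks just produced
        for chunk in &produced {
            upload_chunk(chunk)?;
        }
        // 3. delete the local copies before the next batch starts
        for chunk in &produced {
            let _ = fs::remove_file(chunk);
        }
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let files: Vec<PathBuf> = Vec::new(); // would come from walking the target directory
    upload_in_batches(&files, Path::new("chunk_artifacts"))
}
```

Resuming after an interruption would then just mean skipping batches whose chunks are already on the network.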