HeapNet2 [Testnet 12/10/23] [Offline]

Regarding tooling folder …

I never delete the ~/safe folder when starting fresh - I just delete the ‘client’ folder inside it. So why not just put tooling and other user-made stuff in a ~/safe/local folder? Maybe I’m missing something.


So I’ve done my regular round of uploads and everything went up without any full redos - the process didn’t stop until the upload finished despite some failed chunks - so IMO, a nice win for the upload process there for me.

On the download side, it failed on everything from the intermediate-sized files (a few megabytes) up to my one large file (a few hundred megabytes). I haven’t retried the failures yet, but I suppose the data is there; the process just stopped with “Chunks error Chunk could not be retrieved from the network”.

3 Likes

Sorry, I haven’t toyed with it enough to have a solid opinion.

Yeah, I agree with Josh that safe feels like a place for Maidsafe’s stuff.
Also, every keystroke saved is a win for me, and .local/share/safe/tools/ntracking/became/a/pain.

4 Likes

Aye, we need to improve the flow here. As we’ve only been focussed on relatively small dirs etc. so far, this has slipped under the UX radar. It should be easy enough to improve things here and get us uploading while chunking continues :+1: That would probably speed things up a good amount.

(cc @Anselme - if our batch is about how many chunks we pay for, we could theoretically keep that one batch’s CNs in mem, avoiding a lot of the disk IO we’ve seen there :thinking: we’d still have to write to disk, but that could happen lazily, perhaps)
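To illustrate the pipelining idea, here's a rough sketch only - not the actual sn_cli code; chunk_file and upload_batch are hypothetical stand-ins. Chunking feeds a bounded channel while an uploader drains it in payment-sized batches, so network work overlaps with local chunking:

use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical chunk type; the real client's types differ.
struct Chunk(Vec<u8>);

// Stand-in for self-encryption: fixed-size chunks, for illustration only.
fn chunk_file(data: &[u8]) -> impl Iterator<Item = Chunk> + '_ {
    data.chunks(1024 * 1024).map(|c| Chunk(c.to_vec()))
}

// Stand-in for paying for and storing one batch of chunks on the network.
fn upload_batch(batch: &[Chunk]) {
    let bytes: usize = batch.iter().map(|c| c.0.len()).sum();
    println!("uploading {} chunks ({} bytes)", batch.len(), bytes);
}

fn main() {
    let data = vec![0u8; 8 * 1024 * 1024]; // pretend file contents
    let (tx, rx) = sync_channel::<Chunk>(20); // bound ~= one payment batch

    // Chunking runs on its own thread and feeds the channel as it goes...
    let chunker = thread::spawn(move || {
        for chunk in chunk_file(&data) {
            tx.send(chunk).unwrap();
        }
    });

    // ...while the uploader drains it in batches, overlapping with chunking.
    let mut batch = Vec::new();
    for chunk in rx {
        batch.push(chunk);
        if batch.len() == 20 {
            upload_batch(&batch);
            batch.clear();
        }
    }
    if !batch.is_empty() {
        upload_batch(&batch);
    }
    chunker.join().unwrap();
}

The bounded channel also caps how much chunk data sits in memory at once, which is what lets the upload begin before chunking has finished.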

:muscle: This feels like we don’t need the “lost node” fix, and it’s a matter of patience (and some other tweaks to bootstrapping may help here) - but basically, this is the way forwards.

Can you have a go with the latest client release? :bowing_man: The batched downloading may well help here.

The suspicion is that we throw too much at the client at once, which causes issues. Batched downloads seem to work much more reliably.

This should not be seen with newer clients; it isn’t an error so much as a log about where data is coming from. You can ignore it for now. We’ll be using this to debug “lost” data (which may actually not be lost, and be more about client usage, as above with download batching).

Indeed, it’s open for testing to determine a better default (which is 20, atm).
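For anyone curious what the batching looks like in practice, here's a minimal sketch of the shape of it - fetch_chunk is a hypothetical stand-in, not the real client API. At most BATCH_SIZE fetches are in flight at once, and each batch finishes before the next begins:

use std::thread;

const BATCH_SIZE: usize = 20; // the current default under discussion

// Stand-in for a network GET of a single chunk; the real client differs.
fn fetch_chunk(addr: u64) -> Vec<u8> {
    vec![addr as u8]
}

fn download(addrs: &[u64]) -> Vec<Vec<u8>> {
    let mut out = Vec::with_capacity(addrs.len());
    for batch in addrs.chunks(BATCH_SIZE) {
        // At most BATCH_SIZE fetches run concurrently; the batch must
        // finish before the next begins, bounding load on the client.
        let fetched: Vec<Vec<u8>> = thread::scope(|s| {
            let handles: Vec<_> = batch
                .iter()
                .map(|&a| s.spawn(move || fetch_chunk(a)))
                .collect();
            handles.into_iter().map(|h| h.join().unwrap()).collect()
        });
        out.extend(fetched);
    }
    out
}

fn main() {
    let chunks = download(&(0..45).collect::<Vec<u64>>());
    println!("downloaded {} chunks in batches of {}", chunks.len(), BATCH_SIZE);
}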

It may be we can limit things to a % of mem or CPU, which may be a more efficient and “real” constraint. We’ll be exploring that.
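As a rough illustration of how that could work (available_memory_bytes is a stubbed, hypothetical helper here - something like the sysinfo crate could supply a real figure), the batch size would be derived from a memory budget rather than being a fixed constant:

const CHUNK_SIZE: u64 = 1024 * 1024; // assume ~1 MiB chunks, for illustration

// Stubbed: pretend 8 GiB is free. A real implementation would query the OS.
fn available_memory_bytes() -> u64 {
    8 * 1024 * 1024 * 1024
}

// Allow in-flight chunks to use at most `fraction` of available memory.
fn batch_size_for_memory(fraction: f64) -> usize {
    let budget = (available_memory_bytes() as f64 * fraction) as u64;
    (budget / CHUNK_SIZE).max(1) as usize
}

fn main() {
    // e.g. cap in-flight chunk data at 5% of available memory
    println!("batch size: {}", batch_size_for_memory(0.05));
}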

Try with the latest client please. (0.83.48)

7 Likes
safeup client --version 0.83.48
**************************************
*                                    *
*          Installing safe           *
*                                    *
**************************************
Installing safe for x86_64-unknown-linux-musl at /home/anon/.local/bin...
Installing safe version 0.83.48...
Error: Could not determine content length for asset

Location:
    src/s3.rs:49:28

Maybe the latest is .47?

3 Likes

I noticed a slightly different effect for my node:
it has 171 records but only 4 cash_notes (4/171 ≈ 2%),
which means that ~98% of the data was stored for free.

If bootstrap fails when just 1 peer is bad, then there’s not much sense in expecting anything short of 100% of peers working.

Same thoughts from me.
It’s like running onto the road and demanding that every car stop “because I want to walk right here and right now”.

Such a topic already exists:

1 Like

After 39 hours I have 18/20 earning, and the number of chunks averages 199 with a max of 374. It isn’t clear if patience is what’s called for here, but I think it is something we can look at when the network is much bigger.

For this network the number of PUTs in the late nodes is still very low, at 6 for the two most recent, whereas earlier nodes are at 200+. Some of that difference is no doubt because, having been around longer, they get PUTs from churn. But two nodes with only 6 PUTs each suggests uneven address distribution. So maybe that’s one to watch when we can test with a lot more nodes?

My very large upload is proceeding much more slowly, so I will have to stop it if it doesn’t speed up during the day. The slow progress may just be that it is actually uploading (using the latest v0.83.47 client and SNT in my wallet :man_facepalming:). I’m seeing the “For record … fetch completed with non-responded expected holders” error quite regularly (say one per 30 mins), but otherwise it looks like it is progressing OK as far as I can tell. I just don’t think I want to try keeping it running for a further 90 hours to complete (I’m on a boat, remember). I need to check my mobile data use before deciding!

I had 70GB remaining yesterday, let me check now… yeh. It has used 30GB overnight, so I’ll stop it now as my data doesn’t renew until November! So it looks like it will take >~300GB to upload 27GB of files :thinking: Those are not reliable calculations, but another area to begin poking at soon! Anyway, I’ve now killed it. :cold_sweat:

9 Likes

Well, .47 updated, so you are probably right.

1 Like

@joshuef I made some observations:

3 Likes

.45 was in the OP and there are two PRs merged after that. That makes .47.

2 Likes

thanks @joshuef

upgraded to

sn_cli 0.83.47

and now it’s working from my home connection :slight_smile:

Client downloaded file in 69.37297179s
Saved AnarchyInTheSouthside.mp3 at /home/ubuntu/.local/share/safe/client/AnarchyInTheSouthside.mp3

@Toivo @Southside

AnarchyInTheSouthside.mp3 eaa0b39813183323b491e6715a0b4ea9f3cfdeede6f04a73c5b849b5210ad20d
1 Like

That settles it, thanks for the clarification.

I re-submitted the same file to the network and it took 54 seconds to upload, 9 seconds longer than last time:

🔗 Connected to the Network
Chunking 1 files...
Input was split into 22 chunks
Will now attempt to upload them...
Uploaded 22 chunks in 54 seconds
**************************************
*          Payment Details           *
**************************************
Made payment of 0.000062890 for 22 chunks
New wallet balance: 199.999871483
**************************************
*            Verification            *
**************************************
22 chunks to be checked and repaid if required
Verified 22 chunks in 6.506161s
Verification complete: all chunks paid and stored
**************************************
*          Uploaded Files            *
**************************************
Uploaded 1677654404651_0.mp4 to 9b69ce01fdd7c47848249f5effce737a19f58414a9abad8ea26b93e2f9c4e8c9

Downloading the same file to a selected directory was, by contrast, 1 second faster than the previous download to the default location.

🔗 Connected to the Network
Downloading E:\gggg\Videos\1677654404651_0.mp4 from 9b69ce01fdd7c47848249f5effce737a19f58414a9abad8ea26b93e2f9c4e8c9
(...)
Client downloaded file in 6.2596227s
Saved E:\gggg\Videos\1677654404651_0.mp4 at E:\gggg\Videos\1677654404651_0.mp4

Alrighty, using client *.47 I retried the downloading and I did get a few more of the intermediate-sized files, but not all, and the large one failed again.

So slightly better, but I’m not sure if that’s just noise in the end.

2 Likes

I’m still in the middle of a big upload, and I don’t want to risk it by possibly messing something up with a download in another window. But I’ll try to DL your files when this is ready. It may take some time, though. Of the original ~8000 chunks, 2300 are under repayment, currently at 1600. Then verification, and maybe another round of repayments, etc.

Unfortunately the only place in my home where I can connect my laptop via cable is next to the bedroom, and this uploading makes my machine’s fans noisy, so I had to switch to wifi. And that, of course, went down during the night.

When the upload is done, I’ll update my client and do some downloading.

Both my nodes have chunks now, 177 and 221. :white_check_mark:

1 Like

Yeah, a change to this is worth considering.
However, as mentioned in the other Slack discussion, this could be tricky.
It needs some further discussion for sure.

That error should be OK as long as there is only one node being complained about.
And it should now be removed with the latest client.

Because for each chunk there will be 5 copies sent out, and the download verification will then incur at least 5 more copies being fetched back.
Plus the other traffic for payments, so for a 27GB file the total traffic does take over 10 times the file size.
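Taking those numbers at face value for the 27GB example (rough arithmetic, not a measurement):

5 copies sent out:            5 x 27GB = 135GB
5+ copies fetched to verify:  5 x 27GB = 135GB
payment traffic etc.:         extra on top
-----------------------------------------------
total:                        > 270GB, i.e. >10x the file size

which is in the same ballpark as the ~300GB of mobile data reported earlier in the thread.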

This inactivity mainly refers to network communication, whereas the chunking is a local activity.
It’s just that the large 27GB file takes a long time to chunk, during which no other network communication happens.

10 Likes

I’m not 100% sure of this observation, but it seems to me that smaller files (maybe up to a few hundred MB) are much faster per MB than bigger ones. When I upload smaller files, the burst of high CPU activity is followed by a longish period of steady upload speed with good MB/s. But when I upload bigger files, the CPU bursts are much longer and are followed by much poorer upload bursts. This is with the same batch sizes.

But, as I said, this is just a hunch, as there have been so many problems with bigger uploads that I haven’t been able to observe much.

3 Likes

Interesting. I haven’t noticed as I don’t stick around to watch the larger files upload.

Those messages are happening with the latest client, v0.83.47 - unless there’s another since that?

A five-times overhead in data use for upload isn’t going to look good from a user’s perspective, although I think it makes sense to leave this with the uploader rather than try to push it out to the network. That’s a tricky one.

Thanks for commenting, and for your excellent work over all these years!

7 Likes