This keeps repeating; it has happened three times already:
"Failed to fetch file \"ubuntu-16.04.6-desktop-i386.iso\": General networking error: GetRecordError(RecordNotFound)",
Hmm, seems ok for me:
$ time ant file download e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d ubuntu-16.04.6-desktop-i386.iso
Logging to directory: "/home/autonomi/.local/share/autonomi/client/logs/log_2025-04-03_19-57-18"
🔗 Connected to the Network
Fetching file: "ubuntu-16.04.6-desktop-i386.iso"...
⠐ [00:00:39] [----------------------------------------] 0/1
Successfully downloaded data at: e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d
real 2m31.459s
user 2m1.294s
sys 1m5.617s
Edit: 1.6 GB in 2m 30s is pretty impressive too!
That's about 70 Mbit/s in real time.
Might be my router… It would be nice to be able to throttle whatever is overloading it during a download of this size.
Wow, I'm more surprised that @aatonnomicc managed to upload such a large file!
Was it done with the new release, or some time earlier?
And how many re-attempts did it take?
It was done a while ago, possibly 2 weeks ago, and it was done over many, many, many attempts.
Although I've never managed to brute-force Linux Mint (about 3 GB) onto the network.
Edit: I uploaded it on 27/02/2025 with whatever release was latest at that point, if that info is of any use. I think it actually went up in one go.
I forgot to time this but I’m guessing I got fairly similar performance.
This is most encouraging.
Oh, it might have been with the 1.2.6 release.
You must have been super lucky to get it through in just one go.
I'd actually forgotten about this upload; glad @Toivo brought it back to our attention.
I've got a cloud Oracle VPS that I only use for uploading and downloading, no nodes on it, so if anything has a good chance it's that VPS.
Ah, right, yeah, a really good connection evidently does affect the upload success rate a lot.
And no node runners use Oracle because it's expensive, so it's in a great spot, with no nodes running in the same data centre. It's my lucky VPS.
I’m still not able to download it.
And in fact it seems that my repeated attempts jammed our connection, so that our whole LAN (WiFi and Ethernet) became completely unresponsive. I downloaded over Ethernet. I had to reboot the router. It's not a top-notch router, but it has been working fine for typical home use: watching videos on a couple of devices, video calls, etc.
I don’t think this is a top-priority issue at the moment, but something needs to be done about it at some point. It’s going to affect a much larger audience than those running nodes.
Is it maybe because large files have so many nodes holding chunks that the number of connections is just too much for common consumer routers?
I noticed that on my smallish development VPS with only ~3 GB of spare memory, the system kills the process at some point because RAM becomes scarce, not because the data wouldn't arrive.
very nice to see the network improving (in spite of the scale)
Looking good from here too, Maidsafe folk. Nice work!!!
Flipped the switch from Notwork to Network.
There is an env setting, CHUNK_DOWNLOAD_BATCH_SIZE, which you can try setting to a lower number, say 8 or even 4, to see if the download becomes smoother.
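For example (just a sketch, assuming the ant CLI picks the variable up from its environment at startup), setting it for a single download:
$ CHUNK_DOWNLOAD_BATCH_SIZE=4 ant file download e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d ubuntu-16.04.6-desktop-i386.iso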
I’ll have to double down on that. I tried downloading it: everything looked fine for the first few minutes, downloading at 3 Mb/s, and then my internet connection suddenly shut down. And now it seems I can’t get back online even after resetting my router. Don’t try this at home, folks!
Ok that should probably be the default then (?)
EDIT: router not dead, I recovered internet. OK, let’s try again.
The default value is system::thread_parallelism (normally num_of_cpu) * 8.
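As a rough back-of-the-envelope check on a Linux box (using nproc as a stand-in for that thread-parallelism value, which is an assumption on my part):
$ echo $(( $(nproc) * 8 ))   # e.g. 14 cores gives 112 parallel chunk downloads, 4 cores gives 32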
I've had some similar issues in the past, where it has taken out my WiFi.
I wonder whether using CPU cores is just too aggressive for some home networks?
@home: my MacBook has 14 cores * 8 == 112 parallel download streams … behind one crappy router and a rather basic internet connection …
@dev_vps: 4 cores == 32 parallel streams on a 1 Gbit up and down connection …
Not sure I need to say any more … the number of parallel downloads should IMHO not be tied to the number of processors; the two things have nothing to do with each other … but good to know there is an env variable for it.
It probably makes sense as an upper limit, assuming there is CPU work to do for decryption etc., but a lower limit based on network performance is needed too.
Perhaps a conservative default should be used, which can then be overridden by env variable/parameter/config.