I have been battling to get this 1.5GB ubuntu.iso uploaded for nearly 24 hours.
I tried all kinds of batch sizes, but it always eventually fails with:
Error:
0: Transfer Error Failed to send tokens due to Network Error Record was not found locally..
1: Failed to send tokens due to Network Error Record was not found locally.
So I went with all chunks at once, and it finally completed. It took 4+ hours, but:
Verification complete: all chunks paid and stored
**************************************
* Uploaded Files *
**************************************
Uploaded ubuntu.iso to 3fb74b0c141f57f2028efee188c26e34d9b29e1b752782129fcfe4e73e5a19d1
Could it be that batching is causing the error, or did I just get lucky?
That said, it completed and claims that all chunks were verified as paid and stored, but when I try to get it… it is missing chunks.
Finally, after 51 failed tries, a file upload/download succeeded:
Uploaded Ourwealth.mp4 to d085b50835d313aace0592f9ed127c8ab766126d1f91376fc1ca81224f677b31
Trippy part: both the upload and the download report 433 chunks, yet the uploaded file was 226.6MB while the downloaded file is 361.9MB, and it doesn’t play back in VLC.
I tried a download of it and it errored out after 2 minutes:
Client (read all) download progress 1/433
Error downloading "Ourwealth.mp4": Chunks error Chunk could not be retrieved from the network: 1c138d(00011100)...
real 2m0.200s
user 0m32.160s
sys 0m14.675s
I tried again and it errored with a different chunk.
Downloading Ourwealth.mp4 from d085b50835d313aace0592f9ed127c8ab766126d1f91376fc1ca81224f677b31
Error downloading "Ourwealth.mp4": Chunks error Chunk could not be retrieved from the network: 2d79fb(00101101)...
real 1m37.812s
user 0m30.125s
sys 0m11.344s
No, uploading. That error happened with every upload attempt where the batch size was less than the total number of chunks to be uploaded.
The only upload that succeeded without that error was the one where the batch size equalled the total chunk count.
That could imply that doing more than one transfer for the chunks is the problematic part here. I’d wager it was the transfer PUT that failed there?
I don’t know; it is interesting, as I tried many times and every batched attempt eventually failed.
When uploading all chunks at once (~2850), the first check returned ~1700 missing chunks, but it kept cycling through, perhaps 5 or 6 cycles, until all chunks were confirmed paid and uploaded.
It never errored out.
But alas when trying to download it, it was missing chunks.
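The cycling behaviour described above (first check finds missing chunks, re-upload, check again, repeat until everything is confirmed) can be sketched roughly like this. This is only an illustration of the repair loop, not the real client code; `upload_with_repair` and the `verify` callback are hypothetical names:

```rust
use std::collections::HashSet;

// Hypothetical sketch of a repair cycle: (re-)upload the pending chunks,
// ask the network which of them it confirms as paid and stored, and keep
// cycling until nothing is left pending.
fn upload_with_repair<F>(chunks: &[u32], mut verify: F) -> usize
where
    // `verify` stands in for "upload pending chunks, then return the set
    // of chunk ids the network confirms as stored".
    F: FnMut(&HashSet<u32>) -> HashSet<u32>,
{
    let mut pending: HashSet<u32> = chunks.iter().copied().collect();
    let mut cycles = 0;
    while !pending.is_empty() {
        cycles += 1;
        let confirmed = verify(&pending);
        // Only chunks the network did NOT confirm stay pending.
        pending.retain(|id| !confirmed.contains(id));
    }
    cycles
}
```

With a flaky network that only confirms part of each batch, this loop naturally takes several cycles, which would match the "5 or 6 cycles" observed.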
There are tweaks that can be made, but those records should be cleaned up as they are used and dropped. I suspect the cleanup is not happening correctly. Need to check more on the libp2p side there.
From some deeper heap tracking, I think it’s some calls around starting a connection… I think we may be doing too much work, potentially around the “lost nodes”… I wonder if that work would still be needed with the much more complete network-contact file startup we have now.
I thought that if you upload the same file again and again, it would get deduplicated. But that’s not the case? Or is that a result of the encryption being broken?
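For context on why dedup is expected at all: on a content-addressed network, a chunk’s address is derived from its bytes, so identical content maps to the same address and a second upload lands on an already-occupied slot. A toy sketch of that idea (the real network self-encrypts and uses a cryptographic hash; `DefaultHasher` here is only a stand-in, and `chunk_address`/`store` are made-up names):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Toy content-addressing: the address is a hash of the chunk's bytes,
// so the same bytes always yield the same address.
fn chunk_address(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Toy store: inserting the same chunk twice occupies a single slot,
// i.e. duplicates deduplicate by construction.
fn store(chunks: &[&[u8]]) -> usize {
    let mut map: HashMap<u64, Vec<u8>> = HashMap::new();
    for c in chunks {
        map.insert(chunk_address(c), c.to_vec());
    }
    map.len()
}
```

If re-uploads are not deduplicating, something upstream of this step (e.g. the encryption producing different bytes each time) would be the place to look.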
I haven’t participated in the last few testnets, partially due to a lot of existing commitments at the day job. Either way, I have read every comment and thread here in the background.
I see the ups and downs, but I am confident it will all be sorted out in due time.
To the community: keep up the great work! It’s excellent feedback for Maidsafe and the team!!
To Maidsafe: loving the frequent testnets, never enough… it will be ready when it will be; till then, keep it going!! Definitely not a deal breaker now, and it never was to begin with!!!
It does. It seems the download is where we’re having problems.
I’m still not sure what’s up. It seems a successful download is fine and can be repeated. But if it fails, something is going on, and we end up pulling in more data than there should be (presumably from the prior failed runs).
We’ll be digging in there today. We have a failure case at least!
Okay, the issue is somewhat simpler, it seems. I have successfully downloaded the video.
A key part: after a failed attempt (or even a successful one), erase the existing download. It seems we’re appending to the content there. (I’m not sure why only after failure; if a file exists, we can re-download fine without corruption.)
Anyway, getting closer to the main issues there!
edit:
It’s because we’re now streaming decryption, and so appending new unencrypted parts to a file (this improves client memory usage massively for large files; it makes them feasible). But clearly we are a) not cleaning up after failed downloads and b) not removing a file before starting!
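A minimal sketch of the fix for (b): open the download target with truncation (or remove it first) so that a leftover partial or complete file can never be appended to. `open_download_target` is an illustrative name, not the real client API:

```rust
use std::fs::{File, OpenOptions};
use std::io::Write;
use std::path::Path;

// Before streaming decrypted parts to disk, open the destination with
// `truncate(true)` so any earlier (possibly failed) download is discarded
// rather than appended to.
fn open_download_target(path: &Path) -> std::io::Result<File> {
    OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true) // drop leftover bytes from a prior attempt
        .open(path)
}
```

The same open options also cover (a) for the retry case, since the next attempt starts from an empty file regardless of how the previous one ended.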