With 5,773 chunks and 5 copies of each, up to 28,865 nodes could be holding a chunk of the file.
Seeing as the network is only about 90,000 nodes, up to a third of them could hold at least one chunk. And any large node runner with 2,000 nodes has a high chance of having all 5 close nodes for at least one chunk inside their “farm”. If they remove their nodes, restart them, etc., then bye bye chunk.
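For anyone who wants to check the arithmetic, here is a rough back-of-envelope sketch in Rust, using the numbers from above. It assumes each chunk’s 5 holders are a uniform random subset of the whole network, which is a simplification (real placement follows XOR closeness to the chunk address, and farms may not be uniformly spread), so treat the output as a sanity check of the model rather than the real risk:

```rust
// Naive model: the 5 holders of each chunk are a uniform random
// 5-subset of the 90,000-node network. Real placement is by XOR
// distance, so this is only a rough sanity check.
fn main() {
    let network: f64 = 90_000.0;
    let farm: f64 = 2_000.0;
    let chunks: f64 = 5_773.0;
    let replicas = 5;

    // P(all 5 holders of one chunk are farm nodes)
    // = C(farm,5) / C(network,5)
    let mut p_one = 1.0;
    for i in 0..replicas {
        p_one *= (farm - i as f64) / (network - i as f64);
    }

    // P(at least one of the file's 5,773 chunks sits entirely
    // inside the farm)
    let p_file = 1.0 - (1.0 - p_one).powf(chunks);

    println!("per-chunk: {p_one:.3e}, per-file: {p_file:.3e}");
}
```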
Would it be possible to avoid chunk copies being stored on the same IP, perhaps by having nodes do a local search for other nodes, or in some other way?
Isn’t this a significant issue that might need some solution? @dirvine
Three downloads on different boxes (two cloud, one home) failed, but with different chunks this time, one of them on the second-last chunk:
```
Chunks error Chunk could not be retrieved from the network: 529e72(01010010)…
Chunks error Chunk could not be retrieved from the network: 529e72(01010010)…
Chunks error Chunk could not be retrieved from the network: e1b28c(11100001)…
```
So it sounds like the issue is that the network knows the chunk is supposed to be available because it’s been uploaded before. That’s great. But there is no check that the chunk is actually retrievable. That’s a condition that needs to be accounted for, even if the condition is supposed to be avoided. That will be difficult. It would mean doing a read verification on every chunk of an upload when it is reuploaded. Maybe there would have to be a way of marking a chunk as suspect and requesting a reupload.
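Something like the following is what I have in mind (purely illustrative Rust; `ChunkStore`, `get_chunk`, and `store_chunk` are hypothetical stand-ins, not the real client API): read each chunk back after the upload claims success, and anything that can’t be fetched intact gets treated as suspect and pushed again.

```rust
// Hypothetical "verify on reupload" sketch. The trait and method
// names here are illustrative only.
use std::collections::HashSet;

type XorName = [u8; 32];

trait ChunkStore {
    fn get_chunk(&self, addr: &XorName) -> Option<Vec<u8>>;
    fn store_chunk(&self, addr: &XorName, data: &[u8]) -> bool;
}

/// After an upload reports success, read every chunk back; any chunk
/// that cannot be fetched (or doesn't match its content) is suspect
/// and gets reuploaded instead of trusting the "already stored" record.
fn verify_and_repair<C: ChunkStore>(
    client: &C,
    chunks: &[(XorName, Vec<u8>)],
) -> HashSet<XorName> {
    let mut still_missing = HashSet::new();
    for (addr, data) in chunks {
        match client.get_chunk(addr) {
            Some(fetched) if fetched == *data => {} // retrievable, done
            _ => {
                // Suspect chunk: request a reupload.
                if !client.store_chunk(addr, data) {
                    still_missing.insert(*addr);
                }
            }
        }
    }
    still_missing
}
```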
One alternative way to help is for safenode manager to reject any node that spins up with an xor address too close to others already running on the same machine. Assuming a decently sized network, genuinely close nodes are much closer in xor space than two randomly generated xor addresses would typically be (strictly, random secret-key generation yielding an xor address).
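As a rough sketch of what that check could look like (hypothetical Rust, not an existing safenode manager feature): measure closeness as the length of the shared xor prefix, and regenerate the key whenever a new node lands suspiciously close to one already running.

```rust
// Sketch of a "reject too-close addresses" check for a node manager.
// The threshold and keygen loop are illustrative; no such option
// exists in safenode manager today as far as I know.
type XorName = [u8; 32];

/// Number of leading bits two addresses share; more shared bits
/// means closer in xor space.
fn shared_prefix_bits(a: &XorName, b: &XorName) -> u32 {
    let mut bits = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        let diff = x ^ y;
        bits += diff.leading_zeros().min(8);
        if diff != 0 {
            break;
        }
    }
    bits
}

/// Accept a candidate address only if it shares no more than
/// `max_shared` leading bits with any node already running here;
/// otherwise the caller regenerates the key and tries again.
fn acceptable(candidate: &XorName, running: &[XorName], max_shared: u32) -> bool {
    running
        .iter()
        .all(|n| shared_prefix_bits(candidate, n) <= max_shared)
}

fn main() {
    let running = vec![[0u8; 32]];
    let mut candidate = [0u8; 32];
    candidate[0] = 0b0000_1000; // shares 4 leading bits with running[0]
    // In a ~90,000-node network the nearest neighbour typically shares
    // about log2(90_000) ~= 17 bits, so 20+ shared bits is suspicious.
    println!("accept? {}", acceptable(&candidate, &running, 20));
}
```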