Why exactly are we refusing to test smaller chunk sizes?
From what I’ve seen, the 4 MB chunks even cause issues client-side when you don’t have the best Internet connection at hand… (== close to a city but suburban German fully wired land-line Internet… that’s not sub-standard in Germany I think… which should be pretty decent in a global comparison I guess…)
The reason we didn’t see this more often is the missing API and the lack of community uploads / the lack of data actually being fetched from the network… What precisely is the benefit of giant chunks? Why would performance increase with them / what’s their upside…?
Since cost is per chunk (no matter if it’s a 1 kB chunk or a 4 MB chunk), clients will optimise for maximum chunk utilisation… e.g. embedding pictures as inline base64-encoded images within a website instead of uploading the picture as a separate file and referencing it from the website… which at the same time pushes the actually transmitted chunk size up (along with the issues we see with those large chunks) and defeats data deduplication.
(and just btw that’s not theory… I already did that with my awe websites in the beta phase to save nanos on uploads)
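To make that inlining trick concrete, here’s a minimal sketch in plain Python (file names like `photo.png` / `index.html` are made up, and it assumes a PNG): it base64-encodes the picture and drops it straight into the HTML as a data URI, so the image never becomes a separately referenced file with its own chunks on the network.

```python
# Sketch only: pack an image into the page itself as a base64 data URI,
# instead of uploading it as a separate file and referencing it.
# File names are hypothetical; assumes a PNG image.
import base64
from pathlib import Path


def inline_image(path: str) -> str:
    """Return an <img> tag with the file embedded as a base64 data URI."""
    data = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f'<img src="data:image/png;base64,{data}" alt="{Path(path).name}">'


html = f"""<!DOCTYPE html>
<html>
  <body>
    <h1>My site</h1>
    {inline_image("photo.png")}  <!-- image travels inside the page, no separate file, no dedup for it -->
  </body>
</html>"""

# One self-contained HTML file -> fewer, fuller chunks when uploaded.
Path("index.html").write_text(html)
```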