Interesting! That tallies with what I’m seeing with K6 too.
Running the same test across 2 different boxes simultaneously pretty much halves the performance on both: latency roughly doubles and throughput roughly halves. These weren’t hugely scientific tests (both boxes were busy doing other stuff too), but it was certainly a good indicator.
When I tried to download a 100 MB file on both boxes simultaneously, with 10 concurrent requests on each, I started to get chunk errors. I suspect this is what you are seeing too: the load is literally knocking nodes offline, as their CPU usage climbs until they die.
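For anyone wanting to reproduce it, here’s a minimal k6 sketch of the sort of test I ran. The gateway URL and file address are placeholders (whatever your own client/gateway exposes), and I ran the same script on both boxes at once:

```ts
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,        // 10 concurrent downloads per box
  duration: '5m',
};

export default function () {
  // Placeholder endpoint and address: substitute your own setup.
  const res = http.get('http://localhost:8080/files/<datamap-address>', {
    responseType: 'binary', // keep the raw bytes so the size check is meaningful
  });
  check(res, {
    'status is 200': (r) => r.status === 200,
    // Adjust to the exact size of your test file (mine was 100 MB).
    'full file received': (r) => r.body !== null && r.body.byteLength >= 100 * 1024 * 1024,
  });
}
```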
I understand the motivation, but I think we must assume there will be bad actors who want to show how Autonomi can’t scale, how brittle it is, etc. If you or I can easily DoS a chunk, experts in the field certainly can and will.
It’s acceptable while it’s a dev network, ofc. But as soon as folks start spending money to push data up to the network and expect to reliably retrieve it again, this sort of thing becomes critical.
Is there a timeline on swarm caching (or whatever we’re calling it), so that peers retain data that has recently been requested? That would immediately require much more sophisticated DoS tactics, as the source node for the chunk would be shielded.
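To sketch what I mean (a toy illustration only, not Autonomi’s actual design; `ChunkCache` and `fetchFromNetwork` are made-up names): if each peer on the request path kept even a small LRU cache of recently served chunks, repeat requests for a hot chunk would be answered along the way and never reach the source node.

```ts
// Toy illustration only: not Autonomi's actual design or API.
class ChunkCache {
  private cache = new Map<string, Uint8Array>();
  constructor(private capacity: number) {}

  get(addr: string): Uint8Array | undefined {
    const chunk = this.cache.get(addr);
    if (chunk !== undefined) {
      // Re-insert to mark as most recently used (Map keeps insertion order).
      this.cache.delete(addr);
      this.cache.set(addr, chunk);
    }
    return chunk;
  }

  put(addr: string, chunk: Uint8Array): void {
    if (this.cache.has(addr)) {
      this.cache.delete(addr);
    } else if (this.cache.size >= this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.cache.keys().next().value;
      if (oldest !== undefined) this.cache.delete(oldest);
    }
    this.cache.set(addr, chunk);
  }
}

// Request path: a cache hit shields the source node entirely;
// only misses are forwarded onward.
async function serveChunk(
  addr: string,
  cache: ChunkCache,
  fetchFromNetwork: (a: string) => Promise<Uint8Array>,
): Promise<Uint8Array> {
  const cached = cache.get(addr);
  if (cached !== undefined) return cached;
  const chunk = await fetchFromNetwork(addr);
  cache.put(addr, chunk);
  return chunk;
}
```

Even a tiny per-peer cache like this would mean an attacker has to defeat every cache on every path to the chunk, rather than just exhausting the one node that holds it.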