I’ve just done some more testing, including some video playback, and I noticed an important and curious difference.
When I stream a video through VLC with the existing streamer, it starts the video, fills a buffer, then provides back pressure to AntTP. Watching the system monitor, the bandwidth used plummets to just the odd spike. So, it looks like it is just providing data as needed.
However, with the new streamer, the bandwidth remains pinned, then after a minute or so, it errors out. I suspect this may be because the client can’t cope with the quantity of data.
[2025-06-13T10:24:46Z INFO autonomi::client::data_types::chunk] Getting chunk: f43765ae5ec33b1e8d20de3c1eab993a12ba8f9d7d216da44f679d465322383c
[2025-06-13T10:24:46Z INFO anttp::service::chunk_stream] Start chunk download 15 off=0 len=4194304
[2025-06-13T10:24:46Z INFO autonomi::client::data_types::chunk] Getting chunk: 97496ecfb94ff8e5c19c28edce0a58bf3176cfd247462b0f77f16e99e0d72b62
[2025-06-13T10:24:46Z INFO anttp::service::chunk_stream] Start chunk download 16 off=0 len=4194304
[2025-06-13T10:24:46Z INFO autonomi::client::data_types::chunk] Getting chunk: 4299a926e9323e0579180ea434002a921af5b80e09d6163f285e91e0bfbb4658
[2025-06-13T10:24:48Z ERROR autonomi::client::data_types::chunk] Error fetching chunk: InternalMsgChannelDropped
[2025-06-13T10:24:48Z ERROR anttp::service::chunk_stream] chunk_get error for idx 8: Network(InternalMsgChannelDropped)
[2025-06-13T10:24:48Z ERROR actix_http::h1::dispatcher] Response payload stream error: "decrypt"
I tried dropping from 16 to 8 threads, but the same thing happens, though perhaps after a longer period of time.
It also seems to take 2-4x as long to start the video. VLC does tend to grab chunks at the beginning and the end before starting the stream. There is also a long delay when skipping to another point in the video. Likewise, it sometimes seems to get stuck on a frame, despite the chunk downloads continuing.
So, it looks like there is a difference between the way the two routines respond to back pressure.
It looks like the existing implementation pivots from one request to another with more agility, i.e. it stops and restarts more quickly and is keener to back off when the client doesn’t want more data.
So, the new implementation is great for big files - about 2x as fast, with 16 threads at least - but for streaming, where the client doesn’t want all the data at max speed, there seem to be some issues.
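For illustration, if the new streamer spawns all of its fetches up front and pushes the results into a channel, something like this rough sketch would behave exactly as described - this is only my guess at the shape of it, not the actual AntTP code, and fetch_chunk is a hypothetical stand-in for the real autonomi chunk get:

```rust
use tokio::sync::mpsc;
use tokio_stream::wrappers::UnboundedReceiverStream;

// Hypothetical stand-in for the real network fetch.
async fn fetch_chunk(_idx: usize) -> Result<Vec<u8>, std::io::Error> {
    Ok(vec![0u8; 4 * 1024 * 1024])
}

// Eager variant: every fetch is spawned immediately, so downloads run at
// full speed even when the HTTP client has stopped reading the response.
// (Ordering and error handling omitted for brevity.)
fn eager_chunk_stream(total: usize) -> UnboundedReceiverStream<Result<Vec<u8>, std::io::Error>> {
    let (tx, rx) = mpsc::unbounded_channel();
    for idx in 0..total {
        let tx = tx.clone();
        tokio::spawn(async move {
            let _ = tx.send(fetch_chunk(idx).await);
        });
    }
    UnboundedReceiverStream::new(rx)
}
```

Nothing in that shape ever tells the fetchers to stop when VLC stops reading, which would explain the pinned bandwidth.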
My first thoughts are that it relates to the implementation of the futures::Stream. The existing streamer implements the poll method, which then returns some data. If the connection slows or drops, the poller just isn’t called and data fetching ceases. However, it doesn’t look like this was happening with the new streamer.
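A minimal sketch of that pull-based shape (again with a hypothetical fetch_chunk standing in for the real network call) - a chunk is only fetched when the consumer actually polls for the next item:

```rust
use futures::stream::{self, Stream};

// Hypothetical stand-in for the real network fetch.
async fn fetch_chunk(_idx: usize) -> Result<Vec<u8>, std::io::Error> {
    Ok(vec![0u8; 4 * 1024 * 1024])
}

// Pull-based: the async body only runs when actix polls the stream, so if
// the client applies back pressure, polling stops and so does fetching.
fn lazy_chunk_stream(total: usize) -> impl Stream<Item = Result<Vec<u8>, std::io::Error>> {
    stream::unfold(0usize, move |idx| async move {
        if idx >= total {
            return None;
        }
        Some((fetch_chunk(idx).await, idx + 1))
    })
}
```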
Maybe a hybrid of the two would give the best of both worlds? Until then, I’ll stick with the existing one for stability, but this is definitely a big area of performance that could be tapped, if we can refine it.
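As a sketch of what that hybrid could look like (assuming the new streamer’s speed comes from having several fetches in flight at once): the futures crate’s StreamExt::buffered keeps a bounded window of fetches in flight, but only drives them while the consumer keeps polling, so in principle it gives the parallelism without losing the back off:

```rust
use futures::stream::{self, Stream, StreamExt};

// Hypothetical stand-in for the real network fetch.
async fn fetch_chunk(_idx: usize) -> Result<Vec<u8>, std::io::Error> {
    Ok(vec![0u8; 4 * 1024 * 1024])
}

// Hybrid: up to `window` chunk fetches run concurrently, but when the
// client stops reading, the stream stops being polled and the in-flight
// futures simply park instead of racing ahead.
fn hybrid_chunk_stream(total: usize, window: usize) -> impl Stream<Item = Result<Vec<u8>, std::io::Error>> {
    stream::iter(0..total)
        .map(fetch_chunk)
        .buffered(window)
}
```

buffered also yields results in order, which matters for the byte ranges a video player asks for.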