I’ve pushed a new release v0.4.4, which mostly includes improvements to downloads.
- Adds channels for downloading files, giving more control over threading (see the sketch after this list).
- Allows the number of chunk_download_threads to be specified from the command line (default = 32).
- Allows large files to be downloaded without consuming large amounts of memory or causing networking issues.
- Adds data streaming to both range and regular requests, reducing latency between Autonomi API downloads and sending data to the client/browser.
- Improves content type / MIME support by extracting the type from the file extension.
- Fixes a bug in chunk_service.rs where the last chunk would error. This was especially noticeable on smaller files that needed 3 chunks.
- Fixes an issue where the default content type (text/plain) caused rendering problems in some browsers.
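To give a feel for the channel approach, here's a minimal sketch of the pattern in plain Rust, not AntTP's actual code: a fixed pool of workers fetches chunks in parallel and pushes them down a channel, while a single consumer re-orders them and streams each chunk onward as soon as it is next in sequence. The fetch_chunk helper and all the numbers are placeholders.

```rust
use std::collections::BTreeMap;
use std::sync::mpsc;
use std::thread;

// Placeholder for the real Autonomi chunk fetch in chunk_service.rs.
fn fetch_chunk(index: usize) -> Vec<u8> {
    vec![index as u8; 4] // dummy payload
}

fn main() {
    let total_chunks = 8usize;
    let workers = 4usize; // cf. chunk_download_threads

    let (tx, rx) = mpsc::channel::<(usize, Vec<u8>)>();

    // Each worker takes every `workers`-th chunk index, so at most
    // `workers` chunks are in flight at once.
    for w in 0..workers {
        let tx = tx.clone();
        thread::spawn(move || {
            let mut i = w;
            while i < total_chunks {
                tx.send((i, fetch_chunk(i))).unwrap();
                i += workers;
            }
        });
    }
    drop(tx); // channel closes once all workers are done

    // Buffer only out-of-order arrivals; emit each chunk as soon as it
    // is the next one in sequence, instead of assembling the whole file.
    let mut pending = BTreeMap::new();
    let mut next = 0usize;
    for (index, bytes) in rx {
        pending.insert(index, bytes);
        while let Some(bytes) = pending.remove(&next) {
            // This is where AntTP would write to the HTTP response body.
            println!("streamed chunk {next} ({} bytes)", bytes.len());
            next += 1;
        }
    }
}
```

The point is that memory stays flat on multi-gigabyte files: only out-of-order chunks get buffered, and everything else goes straight out to the client.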
GitHub release: https://github.com/traktion/AntTP/releases/tag/v0.4.4
Docker Hub release: https://hub.docker.com/r/traktion/anttp
The default of 32 download threads is similar to the old CLI chunk size setting. Increasing it allows more parallelism when retrieving chunks, but it is tailored for piping straight through to the web response: as each chunk comes down from the Autonomi network, it is shunted directly to the client/browser.
So, this version unlocks large downloads (e.g. ISOs) and huge videos. The data for both gets streamed as directly as possible to the browser, without consuming large amounts of memory or unnecessary bandwidth.
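Range requests matter here because they are how browsers seek within videos, and streaming them means fetching only the chunks that cover the requested bytes. As a rough illustration of the index arithmetic, assuming a fixed 4 MiB chunk size (the real chunking on Autonomi may differ):

```rust
const CHUNK_SIZE: u64 = 4 * 1024 * 1024; // assumption: fixed 4 MiB chunks

// For an HTTP `Range: bytes=start-end` (end inclusive), work out which
// chunks to fetch and where the payload starts/ends inside them.
fn chunks_for_range(start: u64, end: u64) -> (u64, u64, u64, u64) {
    let first = start / CHUNK_SIZE;
    let last = end / CHUNK_SIZE;
    let skip = start % CHUNK_SIZE;   // bytes to drop from the first chunk
    let take = end % CHUNK_SIZE + 1; // bytes to keep from the last chunk
    (first, last, skip, take)
}

fn main() {
    // e.g. a browser seeking into a video: Range: bytes=6000000-20000000
    let (first, last, skip, take) = chunks_for_range(6_000_000, 20_000_000);
    println!("fetch chunks {first}..={last}; skip {skip} bytes, keep {take} bytes of the last");
}
```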
Tuning the chunk download threads will be interesting to experiment with. Going with something more aggressive (e.g. 64) gives a substantial boost on larger files, such as ISOs or 4K movies.
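For anyone scripting experiments, here's one minimal way such a setting could be read from the command line with the 32 default; the --chunk-download-threads flag name is hypothetical, and AntTP's actual CLI may differ:

```rust
use std::env;

fn main() {
    // Hypothetical flag; AntTP's real CLI may name and parse this differently.
    let threads: usize = env::args()
        .skip_while(|a| a != "--chunk-download-threads")
        .nth(1) // the value following the flag
        .and_then(|v| v.parse().ok())
        .unwrap_or(32); // default from the release notes
    println!("using {threads} chunk download threads");
}
```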
Here is an example single-client request for our old friend the Ubuntu ISO, with 64 chunk download threads, on my home wifi:
$ cat src/localhost-vlarge-autonomi-http.js; k6 run -u 1 -i 1000 src/localhost-vlarge-autonomi-http.js
import http from 'k6/http';
export default function () {
  http.get('http://localhost:8080/e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d/ubuntu-16.04.6-desktop-i386.iso', { timeout: '600s' });
}
execution: local
script: src/localhost-vlarge-autonomi-http.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1000 iterations shared among 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
data_received..................: 2.0 GB 3.2 MB/s
data_sent......................: 352 B 0.5586673524791285 B/s
dropped_iterations.............: 998 1.583949/s
http_req_blocked...............: avg=193.92µs min=193.92µs med=193.92µs max=193.92µs p(90)=193.92µs p(95)=193.92µs
http_req_connecting............: avg=110.18µs min=110.18µs med=110.18µs max=110.18µs p(90)=110.18µs p(95)=110.18µs
http_req_duration..............: avg=8m32s min=8m32s med=8m32s max=8m32s p(90)=8m32s p(95)=8m32s
{ expected_response:true }...: avg=8m32s min=8m32s med=8m32s max=8m32s p(90)=8m32s p(95)=8m32s
http_req_failed................: 0.00% 0 out of 1
http_req_receiving.............: avg=8m32s min=8m32s med=8m32s max=8m32s p(90)=8m32s p(95)=8m32s
http_req_sending...............: avg=66.98µs min=66.98µs med=66.98µs max=66.98µs p(90)=66.98µs p(95)=66.98µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=413.91ms min=413.91ms med=413.91ms max=413.91ms p(90)=413.91ms p(95)=413.91ms
http_reqs......................: 1 0.001587/s
iteration_duration.............: avg=8m32s min=8m32s med=8m32s max=8m32s p(90)=8m32s p(95)=8m32s
iterations.....................: 1 0.001587/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1
running (10m30.1s), 0/1 VUs, 1 complete and 1 interrupted iterations
default ✗ [--------------------------------------] 1 VUs 10m30.0s/10m0s 0001/1000 shared iters
That’s about 25 Mbit/s (3.2 MB/s × 8), which is pretty decent for a single client. Enough for Netflix 4K at least!