So sn_httpd is good
I’ve been able to max out at around 148 MB/s running two processes on ports 8080 and 8081, with 16 cores and around 16 GB of memory - it’s now hitting a limit on downloads, which I’m assuming is a cap on a node’s ability to deliver data. My understanding is that a chunk (max 4 MB) is only served from one node, rather than serving multiple segments of the chunk in parallel from all the nodes holding it (I’m sure that will come in the future), so with the maximum replication of 5 nodes per chunk there is a hard ceiling on chunk-retrieval throughput. It would be great to see hot nodes in the future, which store extra replicated copies of a chunk to enable more parallel downloads, plus the ability to stream a chunk.
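As a back-of-envelope sketch of that ceiling (the per-node upstream figure here is purely an assumption for illustration, not something I measured):

```javascript
// Back-of-envelope: if a whole chunk is served by a single node per request,
// the aggregate ceiling for one hot chunk is replicas * per-node upstream.
// NODE_UPSTREAM is an assumed figure, not a measurement.
const CHUNK_SIZE_MB = 4;   // max chunk size on the network
const REPLICAS = 5;        // max replication per chunk
const NODE_UPSTREAM = 30;  // assumed per-node upstream in MB/s

console.log(`${REPLICAS * NODE_UPSTREAM} MB/s ceiling across ${REPLICAS} replicas`);
console.log(`~${(CHUNK_SIZE_MB / NODE_UPSTREAM).toFixed(2)} s per ${CHUNK_SIZE_MB} MB chunk from one node`);
```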
Anyway, I had a few hours spare to play with K6 some more. I’m not gonna be sucked into learning another language, but it’s really good: the JavaScript interface let me code some very basic logic. The script can now parse my “csv” file of XOR addresses and filenames on the network, have each virtual user (VU) fetch a random selection of those via sn_httpd, and monitor for files returning incorrect sizes.
Requirements are:
* sn_httpd running locally; I’ve got it on 127.0.0.1:8080
* K6 installed
* A file called “data.csv” in the same directory as the .js K6 script, containing all the files to test, from my github test file.
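For reference, the script skips a header row and reads the file name from the second column and the XOR address from the third, so data.csv is expected to look something like this (the rows below are made-up placeholders, not real network addresses):

```
id,name,address
1,example-a.jpg,aaaa1111aaaa1111
2,example-b.mpg,bbbb2222bbbb2222
```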
Here is k6-ant-runner.js, if you dislike github:
```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';
import { SharedArray } from 'k6/data';

// Load data.csv once and share it across VUs; skip comment lines,
// blank lines, and the header row.
const csvData = new SharedArray('data', function () {
  return open('./data.csv')
    .split('\n')
    .filter(line => !line.startsWith('#') && line.trim() !== '')
    .slice(1);
});

// Set this to your sn_httpd instance.
const SERVER = "http://localhost:8080/";

function getRandomRow() {
  const randomIndex = Math.floor(Math.random() * csvData.length);
  return csvData[randomIndex].split(',');
}

export default function () {
  try {
    const row = getRandomRow();
    const name = row[1];
    const address = row[2];
    const url = `${SERVER}${address}/${name}`;

    // Time the request by hand (k6 also records this as response.timings.duration).
    const start = new Date().getTime();
    const response = http.get(url);
    const duration = new Date().getTime() - start;

    const downloadSize = parseInt(response.headers['Content-Length'], 10) || 0;
    if (downloadSize < 1024) {
      console.error(`Name: ${name}, Duration: ${duration} ms, Download Size: ${downloadSize} bytes - Error: Download size is less than 1KB`);
      // Overwrite the status locally so a truncated download fails the check below.
      response.status = 501;
    }
    if (response.status !== 200 && response.status !== 501) {
      throw new Error(`Request failed with status: ${response.status}`);
    }

    check(response, {
      'status is 200': (r) => r.status === 200,
    });
    sleep(1);
  } catch (error) {
    console.error(`Error: ${error.message}`);
  }
}
```
The load is 10 users, 1000 iterations total, running with:
./k6 run -u 10 -i 1000 ./k6-ant-runner.js
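The same load can also be declared inside the script instead of on the command line, using k6’s shared-iterations executor (this options block isn’t in my script; it just mirrors the -u/-i flags above):

```javascript
// Equivalent of "-u 10 -i 1000": 1000 iterations shared among 10 VUs.
export const options = {
  scenarios: {
    default: {
      executor: 'shared-iterations',
      vus: 10,
      iterations: 1000,
      maxDuration: '10m',
    },
  },
};
```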
jadkins@dev03:~/sn_httpd/k6-v0.56.0-linux# ./k6 run -u 10 -i 1000 ./k6-ant-runner.js
execution: local
script: ./k6-ant-runner.js
output: -
scenarios: (100.00%) 1 scenario, 10 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1000 iterations shared among 10 VUs (maxDuration: 10m0s, gracefulStop: 30s)
ERRO[0002] Name: chume17a.jpg, Duration: 2571 ms, Download Size: 88 bytes - Error: Download size is less than 1KB source=console
ERRO[0003] Name: CEP474.mpg, Duration: 3752 ms, Download Size: 81 bytes - Error: Download size is less than 1KB source=console
*** More download errors...
✗ status is 200
↳ 70% — ✓ 442 / ✗ 184
checks.........................: 70.60% 442 out of 626
data_received..................: 4.5 GB 7.4 MB/s
data_sent......................: 101 kB 165 B/s
dropped_iterations.............: 374 0.613676/s
http_req_blocked...............: avg=14.94µs min=3.84µs med=6.52µs max=542.44µs p(90)=7.92µs p(95)=26.56µs
http_req_connecting............: avg=3.9µs min=0s med=0s max=467.52µs p(90)=0s p(95)=0s
http_req_duration..............: avg=8.68s min=1.2s med=7.18s max=30.11s p(90)=17.18s p(95)=20s
{ expected_response:true }...: avg=8.68s min=1.2s med=7.18s max=30.11s p(90)=17.18s p(95)=20s
http_req_failed................: 0.00% 0 out of 626
http_req_receiving.............: avg=6.54ms min=18.36µs med=2.15ms max=262.91ms p(90)=15.28ms p(95)=22.72ms
http_req_sending...............: avg=36.27µs min=12.36µs med=29.62µs max=190.2µs p(90)=59.52µs p(95)=81.33µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=8.67s min=1.2s med=7.18s max=30.08s p(90)=17.17s p(95)=19.99s
http_reqs......................: 626 1.027169/s
iteration_duration.............: avg=9.68s min=2.2s med=8.18s max=31.12s p(90)=18.18s p(95)=21s
iterations.....................: 626 1.027169/s
vus............................: 2 min=2 max=10
vus_max........................: 10 min=10 max=10
Response time, as expected, is significantly better than the default “ant” client, but as I’ve seen before, there are intermittent issues retrieving the same or different chunks from the network on download.
With a load of 200 users, 1000 iterations total, running with:
./k6 run -u 200 -i 1000 ./jadkin-vader.js
jadkins@dev03:~/sn_httpd/k6-v0.56.0-linux# ./k6 run -u 200 -i 1000 ./jadkin-vader.js
execution: local
script: ./jadkin-vader.js
output: -
scenarios: (100.00%) 1 scenario, 200 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1000 iterations shared among 200 VUs (maxDuration: 10m0s, gracefulStop: 30s)
data_received..................: 49 GB 148 MB/s
data_sent......................: 837 kB 3.4 kB/s
http_req_blocked...............: avg=4.72ms min=1.28µs med=6.6µs max=92.01ms p(90)=372.77µs p(95)=68.93ms
http_req_connecting............: avg=4.16ms min=0s med=0s max=89.83ms p(90)=86.96µs p(95)=65.97ms
http_req_duration..............: avg=16.34s min=361.21ms med=11.83s max=1m0s p(90)=38.68s p(95)=44.53s
{ expected_response:true }...: avg=15.99s min=361.21ms med=11.83s max=59.88s p(90)=37.85s p(95)=43.45s
http_req_failed................: 8.60% 172 out of 2000
http_req_receiving.............: avg=1.85s min=0s med=1.12ms max=39.6s p(90)=7.04s p(95)=14.93s
http_req_sending...............: avg=1.08s min=5.12µs med=20.92µs max=24.26s p(90)=4.24s p(95)=6.15s
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=13.39s min=361.15ms med=9.33s max=1m0s p(90)=28.71s p(95)=36.87s
http_reqs......................: 2000 16.336018/s
iteration_duration.............: avg=1m5s min=19.31s med=1m7s max=2m7s p(90)=1m27s p(95)=1m39s
iterations.....................: 1000 4.084005/s
vus............................: 4 min=4 max=200
vus_max........................: 200 min=200 max=200
Seems the speed is there (148 MB/s) if the node you are connected to has good upstream. As expected, concurrency directly impacts download speed on the same file. Given the way de-dupe has been implemented, we are going to see concurrency issues unless developers put heavy caching between the user and the network: a single chunk can be de-duped across 1, 10, or 100s of files, and when those are requested at once that chunk can become inaccessible.
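Since chunks are content-addressed and immutable, a cache between the user and the network never needs invalidation. A minimal sketch of the idea (the names and fetch function are hypothetical, not sn_httpd’s API):

```javascript
// Minimal immutable-chunk cache: because a chunk's XOR address is derived
// from its content, a cached entry can never go stale. In practice you'd
// bound the cache size (e.g. LRU); this sketch keeps everything.
const chunkCache = new Map();

async function getChunk(address, fetchFromNetwork) {
  if (chunkCache.has(address)) {
    return chunkCache.get(address); // served locally, no network round trip
  }
  const data = await fetchFromNetwork(address);
  chunkCache.set(address, data);
  return data;
}

// Demo: two requests for the same de-duped chunk hit the network only once.
let networkCalls = 0;
const fakeFetch = async (addr) => { networkCalls += 1; return `bytes-of-${addr}`; };

(async () => {
  await getChunk('abc123', fakeFetch);
  await getChunk('abc123', fakeFetch);
  console.log(`network calls: ${networkCalls}`);
})();
```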
If it’s a poor connection, it’s very easy to identify from the latency and response times. In the future it would be great to see the close group perform a consensus download check on the group, while also being able to request from outside the group, so that slow nodes can be shunned by distributed consensus.
It would be interesting if others want to take sn_httpd and K6 for a spin and see what they can achieve.