AntTP - Serving Autonomi data over HTTP

Great question! No (though it probably should be an option!). But if you don’t provide a private wallet key, it will fail anyway.

EDIT: I will add a proper config toggle for this in a future release too.

4 Likes

Downloaded AntTP and am trying to run it on Mint:

[2025-04-03T19:14:18Z INFO  anttp::anttp_config] Bind address [127.0.0.1:6667]
[2025-04-03T19:14:18Z INFO  anttp::anttp_config] Static file directory: [static]
[2025-04-03T19:14:18Z INFO  anttp::anttp_config] Wallet private key: [*****]
[2025-04-03T19:14:19Z INFO  libp2p_swarm] local_peer_id=12D3KooWCQL8fBApKqq3Kt56rC8VsUU9trDdHmwdoKYxfTRrvQWT
[2025-04-03T19:14:21Z INFO  anttp] Starting listener
[2025-04-03T19:14:21Z INFO  actix_server::builder] starting 16 workers
[2025-04-03T19:14:21Z INFO  actix_server::server] Actix runtime found; starting in Actix runtime
[2025-04-03T19:14:21Z INFO  actix_server::server] starting service: "actix-web-service-127.0.0.1:6667", workers: 16, listening on: 127.0.0.1:6667

but I can’t listen to Beg Blag :frowning:

(http://127.0.0.1:6667/a0f6fa2b08e868060fe6e57018e3f73294821feaf3fdcf9cd636ac3d11e7e2ac/BegBlag.mp3)

Hopefully, when disabled, AntTP will reject uploads rather than receive them?

I’m thinking about DoS issues (intentional or otherwise).

[Edit]
I’m getting greedy now. How about a max upload size option?

2 Likes

Yeah, I was thinking along the same lines! Ha! The switch will fully disable the upload endpoint.

A max would be good too! I have a task to add a conf file and/or proper named parameters to make it easier to set up.

Edit: Btw, you can comment out the endpoint and recompile if you need it ASAP.

Not at my laptop, but commenting out this line should do it: AntTP/src/main.rs at 80f0231a176c4b0585cca4e4676341e743b1f0d9 · traktion/AntTP · GitHub
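
If it helps, here’s a minimal sketch of the idea; the route paths and handler below are made up for illustration, not AntTP’s actual code. Disabling uploads just means commenting out the POST route registration before recompiling:

use actix_web::{web, App, HttpResponse, HttpServer};

async fn download() -> HttpResponse {
    HttpResponse::Ok().body("file bytes would stream from Autonomi here")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .route("/{archive}/{filename}", web::get().to(download))
            // Commenting out the upload route disables the endpoint entirely;
            // POST requests then get a 404 instead of being received:
            // .route("/", web::post().to(upload))
    })
    .bind("127.0.0.1:6667")?
    .run()
    .await
}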

2 Likes

Do you also use some cache, or is it AntTP’s caching? Because this loads so fast…

2 Likes

You can look at the GNU Name System, which is an example of DNS in a distributed setting.

2 Likes

There are a few caching processes going on with AntTP.

At the server side, AntTP caches archive files indefinitely. They are immutable, so we know they cannot change. They are small and just contain the map from filename to the XOR address of each file, so subsequent requests for files in an archive need only one lookup.

At the browser side, AntTP uses aggressive caching response headers to instruct the browser to cache data. Each response contains an ETag header, which is the XOR address of the file stored on the network. As this is immutable, it never changes, so the browser just asks the server ‘has this changed?’ and the answer will always be ‘no’ if the browser has retrieved the file before.

On top of that, the Cache-Control header instructs the browser to cache immutable data forever. This can avoid the browser even asking the server whether the cached file has changed. However, it depends on the browser and its state whether this is ignored. When the header is respected, the browser doesn’t even need to talk to AntTP to know it can use the same data again.
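
To make that concrete, here’s a minimal actix-web sketch of the header behaviour described above; the handler and its arguments are illustrative, not AntTP’s actual code:

use actix_web::{http::header, HttpRequest, HttpResponse};

// The XOR address doubles as a strong ETag, and Cache-Control marks the
// content immutable so the browser can skip revalidation when it honours it.
fn serve_immutable(req: &HttpRequest, xor_address: &str, body: Vec<u8>) -> HttpResponse {
    // If the browser already holds this XOR address, answer 'no change'.
    if let Some(tag) = req.headers().get(header::IF_NONE_MATCH) {
        if tag.to_str().map(|v| v.contains(xor_address)).unwrap_or(false) {
            return HttpResponse::NotModified().finish();
        }
    }
    HttpResponse::Ok()
        .insert_header(header::ETag(header::EntityTag::new_strong(
            xor_address.to_owned(),
        )))
        .insert_header(header::CacheControl(vec![
            header::CacheDirective::MaxAge(31536000), // one year
            header::CacheDirective::Extension("immutable".to_owned(), None),
        ]))
        .body(body)
}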

Essentially, immutable data is awesome for caching. You can cache it to death. This includes JS, CSS, HTML, fonts, etc., so a web app will essentially load instantly after the first read.

IMO, this is a great reason for using web apps. Browsers and HTTP servers have excellent caching models, which work really well with immutable data. While this could be duplicated at the individual app level, we get it out of the box with browser-based apps. Likewise, the client libs could do more of this in the future, but we have a head start with web apps.

3 Likes

Thanks - I’ll take a look.

I want to keep name lookup out of AntTP, but allow other solutions to be integrated (either through libraries or other app dependencies). I experimented with doing both in AntTP in the past (mostly out of curiosity), but it feels like a separate problem space.

EDIT: Btw, I did ponder a DNS resolver which uses Autonomi as a hostname/zone store. It could either talk DNS (port 53, etc.) or do the lookups directly against Autonomi.

However, I don’t think the challenge is the technology to store/resolve names; it’s how names are picked/owned on a distributed system. Petnames sidestep all that, which is convenient.

3 Likes

Agreed. I do something similar, with Archives including each version of a site, but I know this could be improved. I’ve also not bothered with browser-based caching yet, but it’s easy to do because of content addressing, as you say.

Feels like the network is wriggling out of a straitjacket!

8 Likes

I’ve pushed a new release v0.4.4, which mostly includes improvements to downloads.

  • Adds channels for downloading files, with more control over threading.
  • Allows the number of chunk_download_threads to be specified from the command line (default = 32).
  • Allows large files to be downloaded without consuming large amounts of memory and causing networking issues.
  • Reduces latency from Autonomi API downloads to client streaming.
  • Adds data streaming to both range and regular requests, reducing latency between Autonomi requests and sending data to the client/browser.
  • Improves content-type/MIME support by extracting the type from the file extension.
  • Fixes a bug in chunk_service.rs where the last chunk was erroring. This was especially noticeable on smaller files that needed 3 chunks.
  • Fixes an issue where the default content type (text/plain) caused rendering issues on some browsers.

Github Release: Release v0.4.4 · traktion/AntTP · GitHub
Dockerhub Release: https://hub.docker.com/r/traktion/anttp

The default of 32 download threads is similar to the old CLI chunk-size setting. Making it bigger allows more parallelism when retrieving chunks, but it is tailored for piping straight through to the web response: as each chunk comes down from the Autonomi network, it is shunted directly to the client/browser, as sketched below.
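
As a rough sketch of that pattern (the fetch and send functions below are stand-ins, not the Autonomi API), bounded parallelism with ordered, pass-through streaming might look like this:

use futures::stream::{self, StreamExt};

type ChunkAddress = [u8; 32];

// Stand-in for a real Autonomi network fetch.
async fn fetch_chunk(addr: ChunkAddress) -> Vec<u8> {
    addr.to_vec()
}

// Stand-in for writing a chunk into the HTTP response body.
async fn send_to_client(chunk: Vec<u8>) {
    println!("forwarded {} bytes", chunk.len());
}

async fn stream_chunks(addresses: Vec<ChunkAddress>, download_threads: usize) {
    // Keep at most `download_threads` downloads in flight, yielding chunks
    // in order so they can be piped straight to the client.
    let mut chunks = stream::iter(addresses)
        .map(fetch_chunk)
        .buffered(download_threads);

    while let Some(chunk) = chunks.next().await {
        // Forward each chunk as soon as it arrives; memory stays bounded
        // regardless of file size.
        send_to_client(chunk).await;
    }
}

#[tokio::main]
async fn main() {
    stream_chunks(vec![[0u8; 32]; 4], 32).await;
}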

So, this version unlocks large downloads (e.g. ISOs) and huge videos. The data for both is streamed as directly as possible to the browser, without consuming huge amounts of memory or unnecessary bandwidth.

Tuning the chunk download threads will be interesting to experiment with. Going with something more aggressive (e.g. 64) gives a substantial boost for larger files, such as ISOs or 4K movies.

Here is an example single-client request to our old friend, the Ubuntu ISO, with 64 chunk download threads on my home wifi:

$ cat src/localhost-vlarge-autonomi-http.js; k6 run -u 1 -i 1000 src/localhost-vlarge-autonomi-http.js 
import http from 'k6/http';

export default function () {
  http.get('http://localhost:8080/e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d/ubuntu-16.04.6-desktop-i386.iso', { timeout: '600s' });
}

         /\      Grafana   /‾‾/  
    /\  /  \     |\  __   /  /   
   /  \/    \    | |/ /  /   ‾‾\ 
  /          \   |   (  |  (‾)  |
 / __________ \  |_|\_\  \_____/ 

     execution: local
        script: src/localhost-vlarge-autonomi-http.js
        output: -

     scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
              * default: 1000 iterations shared among 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)


     data_received..................: 2.0 GB 3.2 MB/s
     data_sent......................: 352 B  0.5586673524791285 B/s
     dropped_iterations.............: 998    1.583949/s
     http_req_blocked...............: avg=193.92µs min=193.92µs med=193.92µs max=193.92µs p(90)=193.92µs p(95)=193.92µs
     http_req_connecting............: avg=110.18µs min=110.18µs med=110.18µs max=110.18µs p(90)=110.18µs p(95)=110.18µs
     http_req_duration..............: avg=8m32s    min=8m32s    med=8m32s    max=8m32s    p(90)=8m32s    p(95)=8m32s   
       { expected_response:true }...: avg=8m32s    min=8m32s    med=8m32s    max=8m32s    p(90)=8m32s    p(95)=8m32s   
     http_req_failed................: 0.00%  0 out of 1
     http_req_receiving.............: avg=8m32s    min=8m32s    med=8m32s    max=8m32s    p(90)=8m32s    p(95)=8m32s   
     http_req_sending...............: avg=66.98µs  min=66.98µs  med=66.98µs  max=66.98µs  p(90)=66.98µs  p(95)=66.98µs 
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=413.91ms min=413.91ms med=413.91ms max=413.91ms p(90)=413.91ms p(95)=413.91ms
     http_reqs......................: 1      0.001587/s
     iteration_duration.............: avg=8m32s    min=8m32s    med=8m32s    max=8m32s    p(90)=8m32s    p(95)=8m32s   
     iterations.....................: 1      0.001587/s
     vus............................: 1      min=1                  max=1
     vus_max........................: 1      min=1                  max=1


running (10m30.1s), 0/1 VUs, 1 complete and 1 interrupted iterations
default ✗ [--------------------------------------] 1 VUs  10m30.0s/10m0s  0001/1000 shared iters

That’s about 25 Mbit/s (3.2 MB/s × 8), which is pretty decent for a single client. Enough for Netflix 4K at least! :sweat_smile:

12 Likes

@Traktion amazing to see what’s in your update today. Sounds like you’re getting the backend into shape.

Meanwhile, I was also thinking that, while we’re trying to make things work better in apps, some of these ‘fixes’ should really be in the Autonomi Rust APIs rather than in third-party apps and libs. What do you think about that? How feasible is it to push some of the putting/getting improvements into Autonomi?

So far I’ve only added the retries feature for all mutable API calls (as a configurable option in my wrapper for the Client). This is also selectable in the REST API as a URL parameter, but I believe it would be better in the Autonomi API (say, in the Client config). @loziniak, thoughts?
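
For anyone curious, here’s a minimal sketch of that kind of retry wrapper; the function name, backoff policy and usage are illustrative, not dweb’s actual implementation:

use std::future::Future;
use std::time::Duration;

// Retry a mutable API call up to `max_retries` times with linear backoff.
async fn with_retries<T, E, F, Fut>(mut op: F, max_retries: u32) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    let mut attempt = 0;
    loop {
        match op().await {
            Ok(value) => return Ok(value),
            Err(e) if attempt >= max_retries => return Err(e),
            Err(_) => {
                attempt += 1;
                // Back off a little longer after each failed attempt.
                tokio::time::sleep(Duration::from_millis(500 * u64::from(attempt))).await;
            }
        }
    }
}

// Hypothetical usage: let result = with_retries(|| client.put_chunk(data.clone()), 3).await;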

Even if the Autonomi network becomes more reliable, recovery from errors remains important, because we want apps to work over any connection, and non-wired connections will never be 100% reliable. It’s kind of next-level for the Autonomi APIs, but worth considering now, I think.

6 Likes

Great idea. I think we are returning to the idea of a community API from a year ago :slight_smile: I could add the streaming capability to this. Maybe safeapi would be OK as a basis?


Check out the Dev Forum

5 Likes

Yes, that would be great! It was on my API wish list, but the team is flat out elsewhere, so I figured I’d have a go first, refine it, then create a PR to add it to the Autonomi API when ready.

It should be feasible, but I need to get my head around how the stream! macro and the associated sender/receiver can be refactored to make it all fit. It’s all just a bit tightly coupled atm, but it’s still early days… I’ll get it figured out in due course! :sweat_smile:

6 Likes

If you’re talking about Rust, I think a Read/Write API is much easier to implement/grasp/use. I’d rather go for that, as it seems it can be used as a basis for a Stream API, but can also be used without it.

3 Likes

Thanks, I’ll take a look. It has to be compatible with Actix responses too, so I’m not sure how all that fits together.

It feels like I have a bunch of things that fit together, but understanding each in detail and refining them accordingly is still a WIP. There’s also a tension between adding more features vs refining existing ones.

It’s all good fun though and I’m looking forward to having dedicated time on this from the end of the month.

2 Likes

I came across streaming in Actix for the multipart form upload of files, and I think it uses read/write. Look at their examples, or here in dweb.
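
Roughly, a streaming multipart handler in Actix looks like this; a sketch only, with the forwarding step standing in for the real upload logic:

use actix_multipart::Multipart;
use actix_web::HttpResponse;
use futures::StreamExt;

// Each multipart field is itself a stream of Bytes chunks, so large files
// can be consumed without buffering them whole in memory.
async fn upload(mut payload: Multipart) -> HttpResponse {
    while let Some(Ok(mut field)) = payload.next().await {
        while let Some(Ok(chunk)) = field.next().await {
            // A real handler would forward each chunk to Autonomi here.
            println!("received {} bytes", chunk.len());
        }
    }
    HttpResponse::Ok().finish()
}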

3 Likes

Looks like the futures::Stream trait needs implementing. I think the stream! macro does some clever stuff to make this ‘easier’, but it feels like it couples the code more closely.

I may have a go at implementing the trait and seeing if that plays more nicely with the rest of the code. I should then be able to refactor it into something more reusable.
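
Something along these lines might work, for anyone following along. A minimal sketch, assuming the chunks have already been fetched; a real version would poll in-flight downloads inside poll_next:

use std::pin::Pin;
use std::task::{Context, Poll};

use actix_web::web::Bytes;
use futures::Stream;

// A hand-rolled replacement for the stream! macro: a struct that yields
// chunks one by one via the futures::Stream trait.
struct ChunkStream {
    chunks: std::vec::IntoIter<Vec<u8>>,
}

impl Stream for ChunkStream {
    type Item = Result<Bytes, std::io::Error>;

    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // Yield the next chunk, or end the stream when none remain.
        match self.get_mut().chunks.next() {
            Some(chunk) => Poll::Ready(Some(Ok(Bytes::from(chunk)))),
            None => Poll::Ready(None),
        }
    }
}

// Actix can then consume it directly:
// HttpResponse::Ok().streaming(ChunkStream { chunks: data.into_iter() })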

1 Like

Hmm, maybe it’s not so simple… I didn’t find any library that would turn Read/Write into a Stream. So perhaps that’s a dead end :slight_smile:

1 Like

I’ve done some refactoring and implemented the Stream trait, and it’s looking much better now. I want to improve it a bit more before a formal release, but the code is looking far less coupled and easier to share now.

10 Likes

Pushed the v0.4.5 release, which includes the latest v0.4.4 autonomi libs and refactoring to replace the stream! macro with an implementation of the futures::Stream trait instead. This improves the readability and maintainability of the code (more to come).

Github: Release v0.4.5 · traktion/AntTP · GitHub
Dockerhub: https://hub.docker.com/r/traktion/anttp

6 Likes