AntTP - Serving Autonomi data over HTTP

Take a look here: Uploads to test with - #20 by aatonnomicc

Hope that helps!

2 Likes

I got it going. So I uploaded a directory with two files, one just a bare bones “Hello World” html file that I named index.html to test when you finish that piece. The second is an .mp4 file. When I click on it in the directory at the link below, instead of playing it in the browser, it simply downloads to my computer.

http://localhost:8080/7cff950957b97d1f9a76fdba707a77375421ce833b7a2b8d52802621a190a01c/

4 Likes

That’s actually normal browser behaviour for some media types. If you want the video to play inside a page, you need to embed it in an HTML file.
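A minimal sketch of such a page (the archive address is the one from earlier in the thread; the filename is a placeholder):

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- A video tag plays in-page; linking the .mp4 directly (or via iframe)
         typically triggers a download instead. Filename is a placeholder. -->
    <video controls width="640">
      <source src="/7cff950957b97d1f9a76fdba707a77375421ce833b7a2b8d52802621a190a01c/video.mp4"
              type="video/mp4">
      Your browser does not support the video tag.
    </video>
  </body>
</html>
```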

I have a streaming.html page here to demonstrate that I’ve been trying to upload, but my router is on strike! :melting_face:

I can see your hello world though! Nice job!

3 Likes

Embedding with iframe kept resulting in the video being downloaded automatically, but embedding with video works. I’ll keep playing with it - would love to help you test it!

First embed in this page is with iframe, second with video:
http://localhost:8080/ed2ba777909dc5977bdaf380b4279e619772ab4199a8c28b1256eb02890e8ed9/new/index.html

4 Likes

To give a heads-up: there have been a few new releases going out, so I thought I’d give this thread a bump!

We’re now on 0.3.10, which brings a bunch of changes:

  • Adds directory listings when only the archive address is provided (without a filename).
  • Adds ETag caching for files to boost performance. If a browser/client already has a file cached, it just checks the ETag to confirm it is unchanged, then renders it.
  • Temporarily updates the archive lookup logic to do a brute-force search on the file name only. The archive key currently includes the original upload directory, which makes it difficult to resolve directly.
  • Re-enables support for app-conf.json for routingMaps.
  • Adds caching for app-conf.json, as it is loaded with every request to the archive.
  • Uses the latest ant libraries (with PublicArchive etc.).
  • Adds JS file MIME type support (prevents browser JavaScript errors).
  • Adds JSON directory/archive listings.
  • Adds ETag caching for directories/archives.
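Not the actual AntTP code, but the ETag round trip described above can be sketched like this (names are illustrative):

```python
import hashlib

def make_etag(content: bytes) -> str:
    # Derive a strong validator from the file bytes; on an immutable network
    # like Autonomi, the XOR address itself could serve equally well.
    return '"' + hashlib.sha256(content).hexdigest()[:16] + '"'

def respond(request_headers: dict, content: bytes) -> tuple[int, bytes]:
    # If the client revalidates with a matching If-None-Match, skip the
    # body entirely and return 304 Not Modified.
    etag = make_etag(content)
    if request_headers.get("If-None-Match") == etag:
        return 304, b""
    return 200, content

body = b"<h1>Hello World</h1>"
first_status, _ = respond({}, body)                 # fresh request: full 200
revalidated = respond({"If-None-Match": make_etag(body)}, body)  # cached: 304
```

The win is that a 304 carries no body, so the browser can render its cached copy immediately.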

The code is still in a state of massive flux, with various features being hacked in with zero unit testing and limited (manual!) regression testing, but it works. Once the API stabilises, I’ll move towards refactoring it all and properly testing it in a sustainable way. For now, it’s more like rapid prototyping (or at least as much as Rust lets you do that sort of thing! ha!).

Binaries can be found here or you can build from source: Release 0.3.10 · traktion/sn_httpd · GitHub

The docker image remains the easiest way to get up and running, though, and it also points to the default DNS-like naming registers. Although these are currently broken (I can’t edit them from the CLI anymore), the setup lets me try out IMIM in an Angular-friendly way (watch this space!):
https://hub.docker.com/r/traktion/sn_httpd

There is still no streaming support in the current API version and I haven’t dug into that area again yet. So, large audio/video files will be slow to load and will use lots of memory (sn_httpd downloads the whole file into memory, then serves it to the client afterwards… yuck!).

A customary screenshot…


(JSON directory listings, when accept header includes ‘json’ - handy for browser integration with Javascript)
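To illustrate the Accept-header behaviour (a sketch, not the real AntTP routing):

```python
import json

def render_listing(accept_header: str, entries: list[str]) -> tuple[str, str]:
    # If the client's Accept header mentions json, return a JSON array;
    # otherwise fall back to a simple HTML directory listing.
    if "json" in accept_header.lower():
        return "application/json", json.dumps(entries)
    items = "".join(f"<li>{name}</li>" for name in entries)
    return "text/html", f"<ul>{items}</ul>"

content_type, body = render_listing("application/json", ["index.html", "video.mp4"])
```

A JavaScript client just sets Accept: application/json and gets a machine-readable listing from the same URL.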

9 Likes

There is a new release of sn_httpd: 0.3.13

Binaries: Release v0.3.13 · traktion/sn_httpd · GitHub
Docker: https://hub.docker.com/r/traktion/sn_httpd

This version primarily improves caching performance, fixing a bug which was causing huge latency for cached data items. In short, live network lookups were being done for non-existent data items, which (inevitably) returned no data but choked up the web server.

Before the fix, some requests were taking 5-15s to return a 304 (which tells the browser to use its cached version). With this latest version, that is reduced to a handful of milliseconds! It makes loading cached content pretty much instant.

Obviously, fresh content still needs to be downloaded, but the JavaScript, fonts, CSS, archive indexes (which list blog files), etc. seldom or never change. It depends on the application, of course, but it makes a huge difference for IMIM!

9 Likes

@Traktion you may already have addressed this at some point, so apologies if it is a repeat question: Are there any metadata vulnerabilities using sn_httpd, and if so, are you planning something to make it more secure? I’m thinking along the lines of something similar to onion (TOR) or Lokinet (I’m not sure what the differences are between those two protocols, but I am fairly sure they are safe wrt metadata).

I know at one point you had said you had only implemented it so far as http and not https, but even as https, would that rise to the level of security of Autonomi proper, or the other browsers mentioned?

I am not well-versed enough in these areas to know…

3 Likes

I’m not aware of vulnerabilities, but security hasn’t been a focus, tbh.

The core HTTP stuff is all done by https://actix.rs/, so we’re building on solid foundations there.

As we can only get data and even that is limited to what the autonomi library will return, there shouldn’t be a huge risk there either.

Obviously, running locally is low risk too, as it’s only you hitting it.

Public hosting will obviously expose that system to potential hackers though. My advice would be to use the docker container to limit potential attacks.

For HTTPS, there is no direct support yet. Actix does include it, iirc, but it hasn’t been prioritised. However, I’d actually suggest adding HTTPS via another service layer, which is generally good security practice anyway - hand-rolled TLS has dangers. Using an HTTPS proxy/gateway should be ideal.

Note that @aatonnomicc experimented with an HTTPS proxy that @riddim had provided, but some sn_httpd assets stopped rendering. I’ve not had a chance to diagnose why, but in theory this shouldn’t be an issue - most modern production web stacks terminate HTTPS at the edge of the system and talk to the underlying web service over HTTP only.

If public hosting is a goal, I would be happy to help. Maybe others will be able to assist too, as it feels like HTTPS in front of a sn_httpd docker container would be a decent starting point here.

2 Likes

For adding HTTPS, one can simply use a Traefik reverse proxy (ideally you own a domain, but a simple, free DDNS domain might be enough).

Save this into a file called docker-compose.yml

version: '3.8'

services:
  sn_httpd:
    # Define the service for the SN HTTPD container
    image: traktion/sn_httpd
    container_name: sn_httpd
    ports:
      - "8081:8080" # Map port 8081 on the host to port 8080 in the container
    labels:
      # Traefik labels for routing and Let's Encrypt configuration
      - "traefik.enable=true"
      - "traefik.http.routers.sn_httpd.rule=Host(`my-domain.ddns.net`)"
      - "traefik.http.routers.sn_httpd.entrypoints=websecure"
      - "traefik.http.routers.sn_httpd.tls=true"
      - "traefik.http.routers.sn_httpd.tls.certresolver=letsEncrypt"
      - "traefik.http.services.sn_httpd.loadbalancer.server.port=8080"
    restart: unless-stopped

  traefik:
    # Define the Traefik reverse proxy service
    image: traefik:v2.10
    container_name: traefik
    command:
      - "--api.insecure=false" # Keep Traefik's insecure API/dashboard disabled (only enable for debugging, never in production)
      - "--providers.docker=true" # Use Docker as the provider for dynamic configurations
      - "--entrypoints.websecure.address=:443" # Define the secure HTTPS entry point
      - "--entrypoints.web.address=:80" # Define the HTTP entry point
      - "--certificatesresolvers.letsEncrypt.acme.httpchallenge=true" # Use HTTP challenge for Let's Encrypt
      - "--certificatesresolvers.letsEncrypt.acme.httpchallenge.entrypoint=web" # Use HTTP entry point for HTTP challenge
      - "--certificatesresolvers.letsEncrypt.acme.email=your-email@example.com" # Replace with your email for Let's Encrypt
      - "--certificatesresolvers.letsEncrypt.acme.storage=/letsencrypt/acme.json" # Store certificates in a file
    ports:
      - "80:80" # Expose HTTP port
      - "443:443" # Expose HTTPS port
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro" # Provide access to the Docker socket
      - "./letsencrypt:/letsencrypt" # Persist Let's Encrypt data
    restart: unless-stopped

And after that you can run docker compose up -d and have it running on your domain with HTTPS enabled

Obviously you need to use your email address and your domain in the docker-compose file… These here are just placeholders (and of course the domain name needs to be configured to point to this server)

5 Likes

@riddim have you tried/had success with IMIM behind that proxy or do the assets (css only, iirc) go missing?

2 Likes

Hmmm - tbh didn’t get a chance to play a lot (meaning at all) recently… Very packed schedule atm…

2 Likes

I’ll see if I can have a plan today/tomorrow. The above config should be a big help!

3 Likes

I’ve released a new version, which bumps ant libs to support the latest archive format.

Available through usual sources, with docker being easiest.

Also uploaded IMIM and my blog, which needed to use the new archive format too.

Note, this was due to a breaking change in the ant libs for persisting archives (directories).

6 Likes

P.s. thanks @riddim for the tokens to upload with! :muscle:

1 Like

I’ve released another version of sn_httpd (v0.3.15).

This includes better support for XOR URLs and loosens CORS requirements. Both help to support using sn_httpd as a proxy, with a dedicated browser.

Using a dedicated browser is easier to support with regular UI frameworks, allows a common format for links, and removes (all?) CORS security worries (no logins, only ant sites addressable, etc.).

To use Firefox as your Autonomi web browser:

  • Go to browser settings
  • Go to Network Settings
  • Update the HTTP Proxy and Port (usually localhost and 8080 with Docker)
  • Check Proxy DNS when using SOCKS v5

Example:

Once set up, you can just browse straight to XOR addresses over http://, e.g. for the current IMIM: http://15e9865d8246f2e3084f55869f2b79d8b1862f5b6d6049f9e1a2b4d74fce1a0e/

While the URLs are a little long/clumsy, you can bookmark them as normal. In the future, sharing these bookmarks could be considered ‘pet names’.

You can also get a QR code plugin to easily convert URLs into QR codes to allow others to scan. I tried ‘Offline QR Code’, which worked well and I could load the page in Firefox mobile with my proxy set too.

URL for above blog is: http://15e9865d8246f2e3084f55869f2b79d8b1862f5b6d6049f9e1a2b4d74fce1a0e/blog/72fd74ade78542395b9fc3e2be83db3a0219ab3eb35fc0a8a119fb5598a7b06b#home

Or QR code:

These changes don’t stop you accessing sites via localhost URLs. It’s just a nicer experience with a dedicated browser! :slight_smile:

9 Likes

Interesting! A different approach that I’d like to understand when I have time. We should contrast the approach you have with what I’ve come up with for dweb when it’s released. Good to see!

FYI I have dweb working but am holding release back until after the reset.

8 Likes

With the HTTP + DNS proxy approach, you get a lot of freedom. Whatever you put in the host field of the browser just gets forwarded to the web server being proxied to; you can then extract it and decide how it resolves. The web server itself doesn’t have to do anything special to act as a web proxy either.

You can also use a common TLD instead (e.g. xor.dom.tld), then get a wildcard TLS cert for *.dom.tld, but then you lose the simple XOR addresses.

Nothing is set in stone though, and it’s easy to pivot/add different resolvers.

Right now, I’m just trying to re-use as many existing protocols, libraries, apps and frameworks as possible. This probably makes it more accessible and certainly cuts down on the code needed.

I suspect once Autonomi is a raging success, browser devs will be queuing up to solve any remaining problems too!

4 Likes

Cool. I wasn’t aware of that option so came up with a different solution.

I think both approaches are compatible, so at some point I’ll look at acting as a proxy. I’m using a local DNS which has a similar effect.

Ideally sn_httpd and dweb will be able to view each others’ websites, and I think when they are both using Archive that should be the case. dweb doesn’t use Archive yet but it’s high on my to-do list now the new format has been merged (today).

5 Likes

AntTP is the new name for sn_httpd - Ant Transport Protocol.

It’s obviously a mashup of our mascot / token name and HTTP, which is pretty much what AntTP now is.

It’s pronounced Ant-Tee-Pee, so expect to see ants and tepees (or tipis) as part of the graphics/logos.

I’ve updated:

github: GitHub - traktion/AntTP: Safe Network httpd gateway server
dockerhub: https://hub.docker.com/repository/docker/traktion/anttp/general

I’ve also bumped a new version (0.3.16), with a few bits and bobs updated for the new naming (including the binary).

7 Likes

I like the new name - much improved!

Though, it did make me think of this:

TP for my anthole!

Sorry… says more about me than the name I’m sure :laughing:

4 Likes