AntTP - Serving Autonomi data over HTTP

I am trying to build a simple GUI app that launches already-installed browsers with a customized profile, so they can be configured to use a proxy. The app will support multiple proxies, including a Tor proxy for the clearnet and your AntTP, and it will allow custom proxies too. The goal is a one-click solution: the user downloads a single exe, runs the app, picks his browser from a combobox (installed browsers are auto-detected), checks what he wants (Tor, Autonomi, even preinstalled Chrome plugins), and clicks launch. The browser starts with a preset HTTP proxy that will not interfere with the browsers he already uses. This allows a single-click, plug-and-play Autonomi browsing experience for everyone. The app will also give him a list of starting websites, so he can launch them in that browser directly from the app.

I don’t think people are skilled enough to set their proxy, and they are not skilled enough to isolate it from their existing browser. A Chrome plugin can’t run a proxy itself… So a standalone app that starts any browser they want is a one-click solution…

My goal is to make the app simple, with all the hard technical stuff done, so anyone can fork it and build their own Autonomi app. They would just need to edit the HTML/JS for the GUI if they want to customize it, plus some simple Rust calls via Tauri from that GUI… Easy to use, easy to modify…

6 Likes

Nice - sounds like a good way to make Autonomi more accessible to the masses.

I’ll look at separating AntTP into an executable and a lib tomorrow too, to unblock you (and @safemedia, if it helps).

6 Likes

Pushed a new release, v0.6.3, to improve public archive uploads. Now new public archives can be created and existing public archives can be updated.

Release notes:

  • Adds put public archive to allow files to be added to existing public archives
  • Fixes issue with get upload status erroring
  • Adds list to record prior successful/unsuccessful uploads, to retain state between requests
  • Refactors to let insert and update public archive share the same code

Github: Release v0.6.3 · traktion/AntTP · GitHub
Dockerhub: Dependency offline issue - will upload once it resolves itself
Cargo: cargo install anttp

I will now turn my attention to the download improvements and the library creation. I wanted to get the above done ASAP, as it’s a dependency for IMIM that I’ve been finessing in parallel.

7 Likes

Just to update on this, I tried running the /anttp/test/performance/src/localhost-large-autonomi-http.js perf test script and on my wifi with 16 threads, it seemed to be very even with the existing algorithm.

I’ll do some more tests with Ubuntu ISO sized downloads to see if there is a more pronounced difference. It may be the really large files where the thread pool utilisation makes the biggest difference.

I’ll also run some tests tomorrow when I’m wired to the network, to see if that makes much difference.

4 Likes

Hi,

I did another test today on that 1600 MB ISO file. One run is your original implementation, the other is with my changes.

Test Environment: Windows 11, 64GB RAM, 8 download threads for both tests.

Original, no changes:

curl --header "Cache-Control: no-cache" http://localhost:8080/e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d/ubuntu-16.04.6-desktop-i386.iso -o to-autonomi.mp4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1600M    0 1600M    0     0  1327k      0 --:--:--  0:20:34 --:--:-- 3210k

Changed implementation:

curl --header "Cache-Control: no-cache" http://localhost:8080/e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d/ubuntu-16.04.6-desktop-i386.iso -o to-autonomi.mp4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1600M    0 1600M    0     0  2129k      0 --:--:--  0:12:49 --:--:--  460k

As you can see, it dropped from 20:34 minutes to 12:49 minutes.

When I do the test on smaller files, there are no measurable differences, simply because one thread downloads a 4 MB chunk, so there is minimal download thread reuse.
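As a rough sketch of that arithmetic (assuming, per the thread, one 4 MiB chunk per download thread; the helper names are made up for illustration):

```rust
// Hypothetical helpers illustrating the 4 MiB chunk maths from the thread:
// a file only exercises thread reuse when it has more chunks than threads.
const CHUNK_SIZE: u64 = 4 * 1024 * 1024; // 4 MiB per chunk, per the discussion

fn chunk_count(file_size: u64) -> u64 {
    file_size.div_ceil(CHUNK_SIZE)
}

fn saturates_pool(file_size: u64, threads: u64) -> bool {
    chunk_count(file_size) > threads
}

fn main() {
    // A 20 MB file is only 5 chunks: with 8 threads, no thread is ever reused.
    println!("{}", saturates_pool(20 * 1024 * 1024, 8)); // false
    // A 1600 MB ISO is 400 chunks: every thread is reused ~50 times.
    println!("{}", saturates_pool(1600 * 1024 * 1024, 8)); // true
}
```

So the improvement only shows up once the file is large enough that finished threads get handed new chunks.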

If you are not able to measure any difference, it means the improvement I measure is OS specific: you are testing on Linux, I am testing on Windows.

I have played with your AntTP in such a way that I configured the browser to use my implementation of a SOCKS5 server, which then connects to your AntTP if the domain matches some pattern (my implementation of domains for Autonomi); otherwise it routes the traffic to the clearnet. The goal is to have a browser that supports both normal browsing and Autonomi at once. Unfortunately, I see significant delays when the traffic is routed via SOCKS5 to AntTP, and no delays if it routes to the clearnet. It works much faster if the browser is directly configured with AntTP as http_proxy. After investigation, it looks like it depends on various Windows-specific low-level nuances, like buffer flush logic, etc. So based on this experience, I have a suspicion that AntTP is experiencing similar issues on Windows. It works well on Linux, but on Windows there are some bottlenecks… My AntTP code refactor does not solve the Windows issues; it just lets the delays happen in parallel with working downloads.
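The domain-based split described here could be sketched roughly like this (illustrative names only, not the actual SOCKS5 server code; `example.ant` is a made-up hostname):

```rust
// Illustrative sketch of the routing rule described above: hostnames matching
// the Autonomi pattern go to the local AntTP proxy, everything else goes to
// the clearnet (or Tor). A real SOCKS5 server would apply this decision to
// the CONNECT target host before opening the upstream connection.
#[derive(Debug, PartialEq)]
enum Route {
    AntTP,    // forward to the local AntTP HTTP proxy
    Clearnet, // pass through to the normal internet / Tor
}

fn route_for_host(host: &str) -> Route {
    // The ".ant" suffix rule is the pattern mentioned later in the thread.
    let host = host.to_ascii_lowercase();
    if host.ends_with(".ant") {
        Route::AntTP
    } else {
        Route::Clearnet
    }
}

fn main() {
    println!("{:?}", route_for_host("example.ant")); // AntTP
    println!("{:?}", route_for_host("example.com")); // Clearnet
}
```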

4 Likes

The tests I ran last night were a mixture of files between about 20 and 100 MB, IIRC, so they may fall into the category of the smaller files, i.e. they don’t exercise the thread pool enough.

I’ll try today with the same curl commands above on the big ISO and see how that goes. I’m confident your changes should improve Linux too. There were no performance regressions with the smaller files I could see, at least.

Interesting! I wasn’t really sure how much Windows and Linux would differ and sort of hoped they would be similar. It sounds like there are certainly some nuances to consider.

Btw, I did have a prototype AntTP which included a proxy module for forwarding to the clear net. However, I had 2 realisations: 1) I may be playing whack-a-mole with issues that I didn’t want to have to worry about yet, and 2) security concerns.

When all traffic goes to Autonomi, a whole gamut of security issues go away with it. We can do things like disabling CORS (or rather, allowing anything), as the host is always the same - the proxy - which can be controlled.

Ofc, you’re welcome to go with a hybrid approach, and having it as an external app to complement AntTP seems like the way to go for that. I just wanted to share my security anxieties with you! :slight_smile:

1 Like

I already have a prototype GUI that can set up your AntTP as http_proxy and let clearnet traffic die. The user can also pick to run my SOCKS5; in that case the browser is preset with a SOCKS5 proxy, and the SOCKS5 server can be configured to use AntTP + clearnet or AntTP + Tor. This all works already for Chrome, Firefox, Edge and Brave. If a domain has the format *.ant, it is routed to AntTP; the rest is routed to the internet. The only problem is the speed of AntTP behind SOCKS5. Speed is fine if AntTP is set directly. Unfortunately I will not have time for the next 2 weeks to look at it.

Edit: I would like to clarify that the main reason I want SOCKS5 is that I do not agree with the decision that Arbitrum wallet manipulations are on the clearnet. I think everything using public API servers should be hidden behind Tor. So the goal is to run a browser with SOCKS5 that does Autonomi + Tor, so any call from it is hidden, and all the browser metadata tracking is hidden.

Browsers track a lot. And we will have hybrid solutions; we will have plugins in those browsers that use public servers. All the crypto wallets are using public RPC servers…
This is also the reason why I created a Tor router that can be used to wrap any public URL via a locally running Tor instance. It can be configured for the Autonomi client API/client libs, to hide the IP for wallet operations from the operator running the Arbitrum API server.

2 Likes

I tried the same tests on a wired connection with 16 download threads on Linux, and I’m seeing similar things. So, it’s definitely faster with larger files, where the threads can really fill up:

New streamer:

$ curl --header "Cache-Control: no-cache" http://localhost:8080/e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d/ubuntu-16.04.6-desktop-i386.iso -o /tmp/ubuntu-16.04.6-desktop-i386.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1600M    0 1600M    0     0  5142k      0 --:--:--  0:05:18 --:--:-- 8216k

Existing streaming:

$ curl --header "Cache-Control: no-cache" http://localhost:8080/e7bb1b87c1f0e07cdb76ba5e82a425a8da712940c2d3553aa6791494e92aa54d/ubuntu-16.04.6-desktop-i386.iso -o /tmp/ubuntu-16.04.6-desktop-i386-with-channels.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1600M    0 1600M    0     0  2323k      0 --:--:--  0:11:45 --:--:-- 4925k

So, twice the speed, and I suspect that as the number of threads increases, the bigger the impact will be (as more of them will be stalled).

Great job! I’ll integrate this code today.

4 Likes

I’ve just done some more testing, including some video playing, and I noticed an important/curious difference.

When I stream a video through VLC with the existing streamer, it starts the video, fills a buffer, then provides back pressure to AntTP. Watching the system monitor, the bandwidth used plummets to the odd spike. So, it looks like it is just providing data as needed.

However, with the new streamer, the bandwidth remains pinned, then after a minute or so, it errors out. I suspect this may be because the client can’t cope with the quantity of data.

[2025-06-13T10:24:46Z INFO  autonomi::client::data_types::chunk] Getting chunk: f43765ae5ec33b1e8d20de3c1eab993a12ba8f9d7d216da44f679d465322383c
[2025-06-13T10:24:46Z INFO  anttp::service::chunk_stream] Start chunk download 15 off=0 len=4194304
[2025-06-13T10:24:46Z INFO  autonomi::client::data_types::chunk] Getting chunk: 97496ecfb94ff8e5c19c28edce0a58bf3176cfd247462b0f77f16e99e0d72b62
[2025-06-13T10:24:46Z INFO  anttp::service::chunk_stream] Start chunk download 16 off=0 len=4194304
[2025-06-13T10:24:46Z INFO  autonomi::client::data_types::chunk] Getting chunk: 4299a926e9323e0579180ea434002a921af5b80e09d6163f285e91e0bfbb4658
[2025-06-13T10:24:48Z ERROR autonomi::client::data_types::chunk] Error fetching chunk: InternalMsgChannelDropped
[2025-06-13T10:24:48Z ERROR anttp::service::chunk_stream] chunk_get error for idx 8: Network(InternalMsgChannelDropped)
[2025-06-13T10:24:48Z ERROR actix_http::h1::dispatcher] Response payload stream error: "decrypt"

I tried dropping from 16 to 8 threads, but the same thing happens, maybe after a longer period of time though.

It also seems to take 2-4x the time to start the video. VLC does tend to grab chunks at the beginning and the end before starting the stream. There is also a large delay when skipping to another point in the video. Likewise, it sometimes seems to get stuck on a frame, despite the chunk downloads happening.

So, it looks like there is a difference between the way the two routines respond to back pressure.

It looks like the existing implementation pivots from one request to another with more agility, i.e. it stops and restarts more quickly and is keener to back off when the client doesn’t want more data.

So, the new implementation is great for big files - about 2x as fast with 16 threads, at least - but for streaming, where the client doesn’t want all the data at max speed, there seem to be some issues.

My first thoughts are that it relates to the implementation of the futures::Stream. The existing streamer implements the poller, which then returns some data. If the connection slows/drops, the poller just isn’t called and data fetching ceases. However, it doesn’t look like this was happening with the new streamer.
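A toy way to picture that poll-driven behaviour, using a plain std Iterator in place of a real futures::Stream (all names and structure here are illustrative, not AntTP's actual code):

```rust
// Toy illustration of the difference described above: a pull-based (lazy)
// reader only "downloads" a chunk when the consumer asks for the next item,
// so back pressure is automatic. An eager prefetcher keeps fetching
// regardless of whether anyone is still reading.
struct LazyChunkReader {
    next_chunk: usize,
    total_chunks: usize,
    fetches: usize, // counts how many chunk fetches actually happened
}

impl LazyChunkReader {
    fn new(total_chunks: usize) -> Self {
        Self { next_chunk: 0, total_chunks, fetches: 0 }
    }
}

impl Iterator for LazyChunkReader {
    type Item = usize;

    // Analogous to Stream::poll_next: work happens only when polled.
    fn next(&mut self) -> Option<usize> {
        if self.next_chunk >= self.total_chunks {
            return None;
        }
        self.fetches += 1; // the "chunk download" is triggered here, on demand
        let chunk = self.next_chunk;
        self.next_chunk += 1;
        Some(chunk)
    }
}

fn main() {
    let mut reader = LazyChunkReader::new(400);
    // A slow client that only consumes 3 chunks before stalling:
    for _ in 0..3 {
        reader.next();
    }
    // Only 3 fetches occurred; an eager prefetcher would have kept
    // downloading all 400 regardless of the consumer.
    println!("fetches = {}", reader.fetches); // fetches = 3
}
```

If the new streamer spawns its downloads up front instead of inside the poll path, the back pressure from VLC never reaches the fetchers, which would match the pinned bandwidth observed above.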

Maybe a hybrid of the two would give the best of both worlds? Until then, I’ll stick with the existing one for stability, but this is definitely a big area of performance that could be tapped, if we can refine it.

4 Likes

You are right, it crashes. I looked at it, did some tests, and I think I know what needs to be done.

  1. I think there is a bug that I introduced in calculating chunks for streaming.
  2. I need to kill chunk downloads once the client stops receiving them.
  3. I need to introduce a shared thread pool for the whole server. Streaming triggers many requests, and that overwhelms the Autonomi client, since right now it is N threads per request. The Autonomi client is likely rejecting that many parallel downloads.

And a few other ideas. It is not a simple fix. I will look at it over the weekend; I think I can handle it.
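Point 3 could be sketched with a counting semaphore shared across all requests. This is a std-only illustration (a real async server would more likely reach for something like tokio's semaphore); every name here is made up:

```rust
// Sketch of idea (3): one server-wide cap on concurrent chunk downloads,
// instead of N threads per request. Minimal counting semaphore on std only.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct DownloadSlots {
    free: Mutex<usize>,
    cv: Condvar,
}

impl DownloadSlots {
    fn new(max_parallel: usize) -> Self {
        Self { free: Mutex::new(max_parallel), cv: Condvar::new() }
    }

    // Block until a download slot is available, then take it.
    fn acquire(&self) {
        let mut free = self.free.lock().unwrap();
        while *free == 0 {
            free = self.cv.wait(free).unwrap();
        }
        *free -= 1;
    }

    // Return the slot so another request's chunk download can start.
    fn release(&self) {
        *self.free.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    // Server-wide cap of 8 concurrent chunk downloads, shared by ALL requests.
    let slots = Arc::new(DownloadSlots::new(8));
    let handles: Vec<_> = (0..32)
        .map(|chunk_idx| {
            let slots = Arc::clone(&slots);
            thread::spawn(move || {
                slots.acquire();
                // ... fetch chunk `chunk_idx` from the network here ...
                let _ = chunk_idx;
                slots.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("all 32 chunk downloads completed under the shared cap");
}
```

Because the cap is on the semaphore rather than per request, 10 simultaneous streams still generate at most 8 in-flight chunk downloads in total, rather than 10 × N.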

4 Likes

I think if we do this, we’ve got to be clever about how we queue/interleave the requests, or we could end up with one big download blocking the whole pipe for a while.

It wouldn’t be good to block other elements loading on a page, while a video is loaded, for example. It would be even worse in a multi-user environment, ofc.

I’m sure you’ve given that some thought, but just to highlight it. On the whole, I agree that a server-wide limit is ideal though; we just need to be smart about how it is implemented.

I want to try a shared pool such that, let’s say, I start downloading a 1 GB file with a limit of 8 threads per server. It allocates 8 threads, and if a new request then comes in, each thread that finishes picks the next chunk in the row - basically circling between requests and serving them all in parallel.

There could be additional logic, like limiting streaming to 1-2 chunks in parallel, since there is a lot of dropping. For example, Firefox starts streaming by requesting the whole file, then it receives the first bytes and drops the request, and instantly asks for the end of the file, since there is important metadata about the file there. Maybe it is because I am testing on a video that is not prepared for streaming… But still, streaming creates tons of junk traffic.
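To illustrate why that end-of-file probe is cheap in chunk terms (assuming 4 MiB chunks as discussed in the thread; the helper names are hypothetical):

```rust
// Illustrative sketch (assumed names, not AntTP's API) of why a browser's
// "give me the end of the file" probe maps to only the last chunk or two.
const CHUNK_SIZE: u64 = 4 * 1024 * 1024; // 4 MiB, as discussed in the thread

// Map an inclusive byte range to the inclusive range of chunk indices
// that must be downloaded to serve it.
fn chunks_for_range(start: u64, end: u64) -> (u64, u64) {
    (start / CHUNK_SIZE, end / CHUNK_SIZE)
}

fn main() {
    let file_size: u64 = 1600 * 1024 * 1024; // the 1600 MB ISO from the tests
    // A Firefox-style probe for the last 64 KiB of the file:
    let (first, last) = chunks_for_range(file_size - 65536, file_size - 1);
    // Both ends fall in the final chunk (index 399 of 400), so only one
    // chunk download is needed to answer the probe - the waste comes from
    // the dropped whole-file request, not from the tail probe itself.
    println!("chunks {first}..={last}");
}
```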

1 Like

Shout out to AntTP: 60 seconds from downloading the exe to streaming a movie in Windows.

I read that Dawn of the Dead is open source, so I’ll upload that tomorrow so people can have a play with movie streaming.

----edit

I meant “night of the living dead”

7 Likes

That’s fantastic!

Would you be willing to add a short step-by-step guide, like you did for dweb, to help others give it a shot?

5 Likes

That’s the plan. I’ll make a 1-2-3 guide, same as for dweb, to help get people up and running.

It’s been a surprise logging into Windows and finding that things are pretty easy with AntTP and dweb. Maybe I’ll spend more time over here in Windows.

6 Likes

Looking forward to trying out your instructions in due course.

Streaming video can be a fantastic way of giving people a taste of Autonomi & getting the word out there, so I’m looking forward to seeing how it works just now :slight_smile:

4 Likes

http://127.0.0.1:8080/385247a88c16db8277e649be549562cf17cbfd1e6fa36c6bddd42f277c527383/Reefer_madness1938.webm

Sorry @aatonnomicc for being a copycat here - but the idea of uploading public domain classic movies is brilliant :smiley: here we go - a 1.4 GB cinematic masterpiece that’s also deeply educational :innocent: xD

ps: uploaded with the upcoming new light client, with gas limit + retry on errors, via wifi at my parents’

7 Likes

Is there any way to open such a video in my Chrome without installing anything? :slightly_smiling_face: Or what is the easiest possible way for my great-grandmother to watch this movie?

2 Likes

I guess starting your own instance of the AntTP Docker container :smiley: For the grandma edge case, I would recommend using the publicly available instance at:

https://va.worldwidenodes.cyou/385247a88c16db8277e649be549562cf17cbfd1e6fa36c6bddd42f277c527383/Reefer_madness1938.webm

hosted by our very generous community member @wydileie

2 Likes

wow, so even if I don’t know anything about Docker containers, other people can make them so they can serve us Autonomi content directly in the browser?? Seems like we need more of these containers and voila, another YouTube is here??

3 Likes