HTTP/2, proxy, FFI - the future of the SAFE browser interface

@Viv, @dirvine primarily.

What are the long-term plans for interfacing between the browser and the SAFE network?

Obviously, we currently have the proxy, which is HTTP/1.x based. Considering all requests will go through the same proxy/launcher, HTTP/2 would be a big win, as everything could be multiplexed through one connection.

However, I wonder what the long-term goal is here. With all the talk of schemes, temporary .safenet URLs and so forth, what is the vision beyond this? Should the browser access the files directly via FFI, hit the launcher’s REST API, etc.?

I suppose I am asking whether safe:// should actually be HTTP/2 based or whether it should be something completely native. Any thoughts?

11 Likes

Nice question.

Personally I hope we will be able to support HTTP/2 from Launcher for its REST API, considering the benefits (multiplexing and compression) it might provide even when the other side isn’t a browser. While TLS doesn’t seem to be a mandatory requirement for HTTP/2, I don’t know if/how the de facto requirement of TLS in practice impacts this. Just my opinion though; I’m probably missing something here. @Krishna_Kumar probably has more info on how we could go about supporting this.

In terms of the proxy, I’m really just waiting for the SAFE Browser to get to its initial stable version so we can hopefully get rid of the proxy from the launcher. I’m probably just biased in that view :slight_smile: as it keeps the launcher decoupled from something that not all users might even want.

I’d also hope the Launcher, with the API it exposes to apps, can cater to the general requirements and doesn’t force apps such as the browser to interface with the FFI layer directly. Whether it’s called Launcher/Gateway/…, its main purpose was to help apps interface with the network easily and to give users control over applications and how they represent the user in the network. I wouldn’t consider it to be doing its job properly if apps need to go around it just to function adequately.

8 Likes

Since the API is RESTful, just like an Internet app’s, apps should look like familiar Internet apps, with the same addressing scheme as much as possible. Use the API as the dividing line that hides the machinery of the network from the user. On the “userland” side it should be familiar Internet protocols. That makes the dissemination of this technology much easier by leveraging what people are already familiar with - they already have a vocabulary of conventions and uses.

Are there any security concerns around HTTP/2’s use of padding with respect to the SAFE Network? This is a conclusion from httpwatch.com: “HTTP/2 is likely to provide significant performance advantages compared to raw HTTPS and even SPDY. However the use of padding in response messages is an area of potential concern where there could be a trade-off between performance and security.”
A Simple Performance Comparison of HTTPS, SPDY and HTTP/2 | HttpWatch Blog

As always: it’s complicated.

Yes, the standard does not require H2 to use TLS; however, all existing (browser) implementations do. As a result, in practice, if you want to use H2 from any browser it must be through TLS – even for a local server – otherwise every browser will fall back to HTTP 1.1. I am pretty sure that is also true for HTTP requests made from JavaScript. Taking into consideration that we just removed unneeded encryption of the connection, I am not sure that adding a complete TLS auth cycle (including the problems of needing a CA for it and us not reliably being able to ship a server with an embedded cert…) is really worth it.

Especially considering that we are talking about ultra-close connections – on the same machine via loopback, or at least within the local network. Remember that a main reason websites load faster with H2 is the push part of the protocol, which isn’t applicable in our case. While the multiplexing is nice, the overhead of keeping a few connections open with pipelining isn’t actually that bad. Considered end to end, I am pretty sure the XOR routing and the speed of the SAFE network itself are going to matter far more than whatever multiplexing gains over HTTP/1.1 pipelining on the final loopback hop.
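To illustrate the first point: cleartext HTTP/2 (“h2c”) does exist server-side. Here is a minimal sketch using Node’s built-in http2 module (the port is just the launcher’s usual 8100; the handler is a placeholder) – but since no browser will negotiate h2c, pages could never reach it, which is exactly why TLS would be forced on us:

```typescript
// Sketch only: Node's http2 module can serve cleartext HTTP/2 ("h2c"),
// but no browser will negotiate h2c, so web pages can't reach this endpoint.
import * as http2 from 'http2';

const server = http2.createServer(); // cleartext; createSecureServer({ key, cert }) would add TLS
server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'application/json' });
  stream.end(JSON.stringify({ path: headers[':path'] }));
});
server.listen(8100); // assumption: the launcher's usual local port
// Reaching this from a browser would need createSecureServer plus a locally
// trusted certificate – exactly the CA/cert-shipping problem described above.
```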

7 Likes

A bit off topic, but is that the reason why, when I make multiple POST requests at the same time, only the last one works and the others return an error?

I am not sure why that would be, but I think it is something else! :slight_smile:

1 Like

I think I must be missing something here. Without the proxy, how do you request files from remote URLs? Are you saying that the proxy should go, but the HTTP server should remain, to request data?

In short, without the proxy, how does data get from the network to the browser and will this protocol be documented and accessible to other apps in addition to the REST API? I assume there is some sort of web server listening locally?

Since the Launcher API is running locally on port 8100, a dedicated browser could easily do the rewrites necessary to request the content from there using the DNS endpoints.

Conceptually I’m pretty certain the proxy does exactly that: it takes “.safenet” requests, simply reformats them to fit the Launcher API DNS endpoint, and tunnels the result back with some extra CSP headers.

So proxy-less websites can be created today; the addressing would just not look as intuitive.
i.e. given a link to: http://drunkcod.safenet/index.html you could equally well today write that as: http://localhost:8100/dns/www/drunkcod/index.html

(if you squint hard enough http://localhost:8100/dns/ looks just like safe:// ;))
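For the curious, the rewrite itself is trivial – a rough sketch of what a dedicated browser could do (the function name is mine, and this is a guess at the mapping described above, not the actual proxy source):

```typescript
// Hypothetical sketch of the ".safenet" -> Launcher DNS endpoint rewrite.
const LAUNCHER = 'http://localhost:8100';

// http://drunkcod.safenet/index.html -> http://localhost:8100/dns/www/drunkcod/index.html
function rewriteSafenetUrl(url: URL): string {
  // "blog.drunkcod.safenet" -> service "blog", longName "drunkcod";
  // a bare "drunkcod.safenet" is assumed to default to the "www" service.
  const labels = url.hostname.replace(/\.safenet$/, '').split('.');
  const longName = labels.pop();
  const service = labels.pop() ?? 'www';
  return `${LAUNCHER}/dns/${service}/${longName}${url.pathname}`;
}
```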

4 Likes

Almost. The API (unfortunately) requires some URI encoding at the moment; the URL you would need to ‘query’ is therefore http://localhost:8100/dns/www%2Fdrunkcod%2Findex.html, which doesn’t map super-nicely. But moreover, this only tells you the place the file can be found, and you still need to fetch it afterwards. Resolution currently needs at least two requests – which is something the proxy abstracts away for you into one request.

1 Like

According to the spec, and my testing, the “index.html” part is the only thing that requires encoding - that’s why I slyly picked one without subdirectories :wink:

If you want to verify this… click on the link.

I think you might have had the service information endpoint in mind, /dns/:serviceName/:longName - that one would give you a directory listing and you’d need to go on from there to get an actual file.

So for direct linking of files it works very well. What it doesn’t provide that the proxy does is the concept of a default document (hence why I linked to index.html and not just to :servicename.:longname).

1 Like

Sorry, you are right, I stand corrected: the servicename and longname aren’t URI-decoded, only the path is – see the docs. However, your link works because it doesn’t have any subdirectories in it that would require URI encoding – and thus this works for all top-level files. But if you wanted to fetch the file img/logo.png, it would end up at the URL:

http://localhost:8100/dns/www/drunkcod/img%2Flogo.png

Which doesn’t map nicely.
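To make the rule concrete, a tiny helper for building these URLs might look like this (the helper name is mine; the endpoint shape is as documented above):

```typescript
// Builds a Launcher DNS file URL: service and longName go in as-is,
// but the whole file path is a single URI-encoded component.
function dnsFileUrl(service: string, longName: string, filePath: string): string {
  return `http://localhost:8100/dns/${service}/${longName}/${encodeURIComponent(filePath)}`;
}

dnsFileUrl('www', 'drunkcod', 'index.html');   // .../dns/www/drunkcod/index.html
dnsFileUrl('www', 'drunkcod', 'img/logo.png'); // .../dns/www/drunkcod/img%2Flogo.png
```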

Another problem is that this concept also breaks absolute linking: everything that links to / would break, since subdomain.domain are part of the path. That is very inconvenient for absolute-path linking.

That is exactly what I was after! I didn’t realise you could get files from services registered by others through the REST API.

Dots connected! Thanks!

2 Likes

It seems @Traktion got his answer, but this thread got me looking into things a little further. Here is a site I have up with more complexity.

It appears two files are being fetched from the clearnet? So when Beaker (as you can see being used here) goes proxy-less, will my site page break? I guess I will have to look at my HTML code and see where to make the change.
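A quick way to spot those would be to scan the HTML for absolute clearnet URLs – a rough sketch (the regex is simplistic and will miss edge cases):

```typescript
// Rough sketch: list absolute clearnet URLs referenced in a page's HTML.
function findClearnetRefs(html: string): string[] {
  const refs = html.match(/https?:\/\/[^"'\s>]+/g) ?? [];
  return refs.filter(url => !url.includes('.safenet') && !url.includes('localhost'));
}
```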

So, essentially the SAFE scheme is the HTTP (1.x) scheme over localhost on port 8100, combined with a reverse proxy to parse/route the requests?

That is enlightening. So, HTTP/2 could be beneficial to allow multiplexing of many requests over a single connection, but encryption could be an issue. From the HTTP/2 Frequently Asked Questions:

"Does HTTP/2 require encryption?

No. After extensive discussion, the Working Group did not have consensus to require the use of encryption (e.g., TLS) for the new protocol.

However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection, and currently no browser supports HTTP/2 unencrypted."

However, it does make me ponder whether SAFE is genuinely a new scheme or just HTTP with different name resolution and routing. Hmm…

Edit: To answer my own question, I think it is. The HTTP scheme does not have exclusive use of the HTTP protocol - it just happens to have the same name. The SAFE scheme can likewise use the HTTP protocol, but with a different mechanism for resolving/routing the requests.

@Traktion I think your “isn’t this just HTTP” line of reasoning is spot on. And it’s why I’m not really convinced that safe:// in its current state is such a glorious thing: it’s still HTTP, and currently content creators and users expect it to behave just like HTTP.

This is obvious from all the SAFE + clearnet mixing that goes on, and unless the browser also explicitly blocks other schemes (i.e. http) or warns before proceeding, all the tracking and privacy concerns are still there.
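If the proxy (or browser) wanted to enforce that, one blunt instrument would be a restrictive Content-Security-Policy on every response – a sketch of the idea, not what the current proxy actually sends:

```typescript
// Sketch: a CSP permitting resources only from .safenet hosts and the local
// launcher, so any clearnet <script>/<img>/<link> is refused by the browser.
const safeOnlyCsp = "default-src 'self' http://*.safenet http://localhost:8100";

// e.g. in the proxy's response handler (handler shape is hypothetical):
// res.setHeader('Content-Security-Policy', safeOnlyCsp);
```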

Anyhow, that leads to your second point: would HTTP/2 help? To be honest, under the current scheme it’s actually rather moot. Connection multiplexing makes a lot of sense if you’re talking to a remote machine; if you’re talking to yourself over loopback, not so much.

One would need to measure the difference, but I’m fairly certain it would be negligible, since we’re operating in a near-zero-latency environment and never going very deep into the network stack.
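If anyone actually wants to measure it, a crude loopback benchmark is easy enough – a sketch, assuming the launcher is listening on 8100 and reusing one of the sample URLs from above:

```typescript
// Crude sketch: time N concurrent GETs against the local launcher to see
// whether loopback connection overhead is even measurable.
async function benchLoopback(n = 100): Promise<void> {
  const url = 'http://localhost:8100/dns/www/drunkcod/index.html'; // sample endpoint
  const start = Date.now();
  await Promise.all(Array.from({ length: n }, () => fetch(url)));
  console.log(`${n} concurrent requests in ${Date.now() - start} ms`);
}
```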

1 Like

I think this is exactly what is hoped to be done with the initial SAFE Beaker Browser @joshuef is going to develop.

Here is another site of equal JS/CSS complexity, just without the email form on the page that is trying to evoke PHP/dynamic form data.

I think this site proves it is perfectly okay to build a site that doesn’t need clearnet data:

Maybe it is no bad thing though? There are a lot of tools, libraries and technologies which have blossomed around it after all. Maybe just fixing capabilities to a useful/secure subset is all that is needed?

You are probably right. I suppose I was thinking about the extra effort both the client and server have to go to in order to manage the multiple connections, but perhaps it would be lost in the noise - even with multiplexing, we still need to manage the data streams, after all.

I’m not disputing the viability of building SAFE-only sites.
I’m simply pointing out that currently most people seem to assume interop, and for a great many things it would actually be a very convenient thing to have. CDNs exist for a reason, and one might well dream of a SAFEer future where common libraries are linked via secure hashes and served from rapid nodes close to each one of us.

That future might be a bit down the road :slight_smile:

I’m very much looking forward to when additional features start coming online, but for now we do have a useful thing - though for most of us, only as a content delivery and storage solution.

Since the Launcher can freely multiplex requests towards the network, the only potential benefit would be between the browser and the launcher.
And in that scenario I’m not even sure multiplexing would be a definite win over just having a horde of concurrent connections.

Simply put, when it comes to latency and performance work it’s probably much better to invest in the Launcher → Network bits than the Browser → Launcher bits: with the Browser and Launcher both being local, they essentially just pass bits of memory to each other - that’s bazillions of potential requests per second.

1 Like