What are the long-term plans for interfacing between the browser and the SAFE network?
Obviously, we currently have the proxy, which is HTTP/1.x based. Considering all requests will go through the same proxy/launcher, HTTP/2 would be a big win, as it could all be multiplexed through one connection.
However, I wonder what the long-term goal is here. With all the talk of schemes, temporary .safenet URLs and so forth, what is the vision beyond this? Should the browser be accessing the files directly via FFI, hitting the launcher's REST API, etc?
I suppose I am asking whether safe:// should actually be HTTP/2 based or whether it should be something completely native. Any thoughts?
Personally I hope we will be able to support HTTP/2 from Launcher for its REST API, considering the benefits (multiplexing and compression) it might provide even when the other side is not a browser. While TLS doesn't seem to be a mandatory requirement of HTTP/2, I don't know if/how the de facto requirement of TLS impacts this. Just my opinion though; I'm probably missing something here. @Krishna_Kumar probably has more info on how we could go about supporting the same.
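To make the multiplexing benefit concrete, here is a minimal sketch of what a client could do if the Launcher ever spoke cleartext HTTP/2 (h2c) on its current port. The h2c support is hypothetical, not a real feature; the /dns/... paths follow the Launcher's DNS endpoint format discussed further down this thread.

```ts
// Hypothetical sketch: assumes the Launcher supported h2c on port 8100.
// Uses Node's built-in http2 module.
import * as http2 from "http2";

const session = http2.connect("http://localhost:8100");

const paths = [
  "/dns/www/drunkcod/index.html",
  "/dns/www/drunkcod/style.css",
];

let pending = paths.length;
for (const path of paths) {
  // Each request becomes an independent stream on the same session,
  // so neither blocks the other the way HTTP/1.1 pipelining would.
  const req = session.request({ ":path": path });
  const chunks: Buffer[] = [];
  req.on("data", (chunk) => chunks.push(chunk));
  req.on("end", () => {
    console.log(`${path}: ${Buffer.concat(chunks).length} bytes`);
    if (--pending === 0) session.close(); // done, release the one connection
  });
  req.end();
}
```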
In terms of the proxy, I'm really just waiting for the SAFE Browser to get to its initial stable version so we can hopefully get rid of the proxy from the launcher. I'm probably just biased in that view, as it keeps the launcher decoupled from something that not all users might even want.
I'd also hope the Launcher, with the API it exposes to apps, can cater to the general requirements and doesn't force apps such as the browser to interface with the FFI layer directly. Whether it's called Launcher/Gateway/…, its main purpose was to help apps interface with the network easily and to give users control over applications and how they represent the user on the network. It wouldn't be doing its job properly if it needed apps to go around it just to function adequately.
Since the API is RESTful, just like an Internet app's, apps should look like familiar Internet apps, with the same addressing scheme as much as possible. Use the API as the dividing line that hides the machinery of the network from the user. On the 'userland' side it should be familiar Internet protocols. That makes the dissemination of this technology much easier by leveraging what people are already familiar with - they already have a vocabulary of conventions and uses.
Are there any security concerns with the padding in HTTP/2 with respect to the SAFE network? This is a conclusion from httpwatch.com: "HTTP/2 is likely to provide significant performance advantages compared to raw HTTPS and even SPDY. However the use of padding in response messages is an area of potential concern where there could be a trade-off between performance and security." (A Simple Performance Comparison of HTTPS, SPDY and HTTP/2 | HttpWatch Blog)
Yes, the standard does not require H2 to use TLS; however, all existing (browser) implementations do. As a result, in practice, if you want to use H2 from any browser, it must be through TLS - even for a local server - otherwise all browsers will fall back to HTTP 1.1. I am pretty sure that is also true for the HTTP requests that JavaScript makes. Taking into consideration that we just removed unneeded encryption of the connection, I am not sure that adding a complete TLS auth cycle (including the problems of having to have a CA for it, and us not reliably being able to ship a server with an embedded cert…) is really worth it.
Especially considering that we are talking about ultra-close connections - on the same machine via loopback, or at least in the local network. Remember that a main reason websites load faster with H2 is the push part of the protocol, which isn't applicable in our case. While the multiplexing is nice, the overhead of having a few connections open with pipelining isn't actually that bad. Considering things end to end, I am pretty sure that the XOR routing and the speed of the SAFE network itself are going to matter much more than whatever benefit multiplexing would offer over HTTP/1.1 pipelining on the last loopback hop.
A bit off-topic, but is that the reason why, when I make multiple POST requests at the same time, only the last one works and the others return an error?
I think I must be missing something here. Without the proxy, how do you request files from remote URLs? Are you saying that the proxy should go, but the HTTP server should remain, to request data?
In short, without the proxy, how does data get from the network to the browser and will this protocol be documented and accessible to other apps in addition to the REST API? I assume there is some sort of web server listening locally?
Since the Launcher API is running locally on 8100, a dedicated browser could easily do the rewrites necessary to request the files from there using the DNS endpoints.
Conceptually I'm pretty certain the proxy does exactly that: it takes '.safenet' requests, simply reformats them to fit the Launcher API DNS endpoint, and tunnels the result back with some extra CSP headers.
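For illustration, a conceptual sketch of that rewrite. The function name is invented and this is not the proxy's actual code; the URI-encoding detail that comes up just below is deliberately ignored here.

```ts
// Conceptual sketch only, not the proxy's real implementation.
function safenetToLauncherUrl(requestUrl: string): string {
  // e.g. "http://www.drunkcod.safenet/index.html"
  const { hostname, pathname } = new URL(requestUrl);
  // "www.drunkcod.safenet" -> service "www", long name "drunkcod"
  const [service, longName] = hostname.replace(/\.safenet$/, "").split(".");
  return `http://localhost:8100/dns/${service}/${longName}${pathname}`;
}

// safenetToLauncherUrl("http://www.drunkcod.safenet/index.html")
//   -> "http://localhost:8100/dns/www/drunkcod/index.html"
```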
Almost. The API (unfortunately) requires some URI encoding at the moment; the URL you need to 'query' therefore would be http://localhost:8100/dns/www%2Fdrunkcod%2Findex.html, which doesn't map super-nicely. But moreover, this only tells you the place the file can be found, and you still need to fetch it afterwards. Resolution currently needs at least two requests - which the proxy abstracts away for you into a single request.
According to the spec, and my testing, the 'index.html' part is the only thing that requires encoding; that's why I slyly picked an example without subdirectories.
If you want to verify this… click on the link.
I think you might have had the service information endpoint in mind, /dns/:serviceName/:longName - that one would give you a directory listing, and you'd need to go on from there to get an actual file.
So for direct linkage of files it works very well. What it doesn't provide, and the proxy does, is the concept of a default document (hence why I linked to index.html and not only to :servicename.:longname).
Sorry, you are right, I stand corrected: the servicename and longname aren't encoded, only the path is - see the docs. However, your link works because it doesn't have any subdirectories in it, which would require URI encoding - and thus this works for all top-level files. But if you wanted to fetch the file img/logo.png, that would make it end up at the URL http://localhost:8100/dns/www/drunkcod/img%2Flogo.png.
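In code the rule is basically a one-liner, reusing the same example service/long name from above:

```ts
// Only the file path is URI-encoded; service name and long name stay as-is.
const filePath = encodeURIComponent("img/logo.png"); // "img%2Flogo.png"
const url = `http://localhost:8100/dns/www/drunkcod/${filePath}`;
// -> "http://localhost:8100/dns/www/drunkcod/img%2Flogo.png"
```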
Another problem is that this concept also breaks absolute linking: everything that links to / would break, since subdomain.domain becomes part of the path. That is very inconvenient for absolute-path linking.
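To illustrate the breakage (the file name is just an example):

```ts
// Resolving a root-relative link against the Launcher URL silently drops
// the /dns/<service>/<longName> prefix, so the request misses the site.
const page = "http://localhost:8100/dns/www/drunkcod/index.html";
const resolved = new URL("/img/logo.png", page).href;
// -> "http://localhost:8100/img/logo.png" (the site prefix is gone)
```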
It appears two files are being fetched from the clearnet. So when Beaker (as being used here) drops the proxy, will my site page break? I guess I will have to look at my HTML code and see where to make the change.
So, essentially the SAFE scheme is the HTTP (1.X) scheme over localhost on port 8100, combined with a reverse proxy to parse/route the request?
That is enlightening. So, HTTP/2 could be beneficial to allow multiplexing of many requests via a single connection, but encryption could be an issue - from the HTTP/2 Frequently Asked Questions:
"Does HTTP/2 require encryption?
No. After extensive discussion, the Working Group did not have consensus to require the use of encryption (e.g., TLS) for the new protocol.
However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection, and currently no browser supports HTTP/2 unencrypted."
However, it does make me ponder whether SAFE is genuinely a new scheme or just HTTP with different name resolution and routing. Hmm…
Edit: To answer my own question, I think it is a new scheme. The HTTP scheme does not have exclusive use of the HTTP protocol - it just happens to share its name. The SAFE scheme can likewise use the HTTP protocol, but with a different mechanism for resolving/routing the requests.
@Traktion I think your 'isn't this just HTTP' line of reasoning is spot on. And it's why I'm not really convinced that safe:// in its current state is such a glorious thing. Because it's still HTTP, and currently content creators and users expect it to behave just like HTTP.
This is obvious from all the SAFE + clearnet mixing that goes on, and unless the browser explicitly blocks other schemes (i.e. http) or warns before proceeding, all the tracking and privacy concerns are still there.
Anyhow, that leads to your second point: would HTTP/2 help? To be honest, under the current scheme it's actually rather moot. Connection multiplexing makes a lot of sense if you're talking to a remote machine; if you're talking to yourself over loopback, not so much.
One would need to measure the difference, but I'm fairly certain it would be negligible, since we're operating in a near-zero-latency environment and never going very deep into the network stack.
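If anyone wants to actually put a number on it, this is roughly the kind of measurement I mean: time a burst of keep-alive HTTP/1.1 requests over loopback. The endpoint path is just an example.

```ts
// Rough loopback timing sketch using Node's built-in http module.
import * as http from "http";

const agent = new http.Agent({ keepAlive: true }); // reuse one connection

function get(path: string): Promise<void> {
  return new Promise((resolve, reject) => {
    http
      .get({ host: "localhost", port: 8100, path, agent }, (res) => {
        res.resume(); // drain the body
        res.on("end", resolve);
      })
      .on("error", reject);
  });
}

async function main() {
  const n = 1000;
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) await get("/dns/www/drunkcod/index.html");
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${n} sequential loopback requests in ${elapsedMs.toFixed(1)} ms`);
}

main().catch(console.error);
```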
Maybe that is no bad thing, though? There are a lot of tools, libraries and technologies that have blossomed around HTTP, after all. Maybe restricting it to a useful/secure subset of capabilities is all that is needed?
You are probably right. I suppose I was thinking about the extra effort both the client and server have to expend to manage the multiple connections, but perhaps it would be lost in the noise - even with multiplexing, we still need to manage the data streams, after all.
I'm not disputing the viability of building SAFE-only sites.
I'm simply pointing out that currently most people seem to assume interop, and for a great many things it would actually be a very convenient thing to have. CDNs exist for a reason, and while one might dream of a SAFEer future where common libraries are linked via secure hashes and served from fast nodes close to each of us, that future might be a bit down the road.
I'm very much looking forward to when additional features start coming online, but as of now we do have a useful thing - though for most of us, only as a content delivery and storage solution.
Since the Launcher can freely multiplex requests towards the network, the only potential benefit would be between the browser and the Launcher.
And in that scenario I'm not even sure multiplexing would be a definite win over just having a horde of concurrent connections.
Simply put, when it comes to latency and performance work, it's probably much better to invest in the Launcher ↔ Network bits than the Browser ↔ Launcher bits; since the Browser and Launcher are both local, they essentially just pass bits of memory to each other - that's bazillions of potential requests per second.