Safe Launcher Security

Another solution that could alleviate @cretz’s worries fairly quickly is to integrate the already-existing nativefier tool with the SAFE launcher. The primary modification it would need is to block anything that’s not a safenet link. After downloading the initial Electron package for my OS (this could be bundled with the SAFE launcher), it only took 5-6 seconds to turn a web app into a sandboxed Electron application. We could have a URL address bar in the launcher that would automatically create the binary (this would only be done once per app) and then launch it.

One could create a standalone application, separate from the launcher, that makes it easier for people to launch these webapps. But since the launcher should be in charge of launching these webapps anyway, it’d be a natural fit there.

One downside to this approach is the storage cost of keeping a binary for every webapp. But you’d have this issue with any natively built applications anyway (with Electron, at least).

3 Likes

I tried to wire up a sample that seems to work for now. I thought I’d share it with you guys to see whether it helps.

Plan to tighten the screws on the issues highlighted:

  1. Stringent validation of CORS based on the Origin header. Allow XHR requests only if the origin ends with .safenet.
  2. Update the proxy to handle only .safenet HTTP requests; other requests will be forbidden. This doesn’t make a significant difference, because the PAC file redirects only .safenet requests, but just in case that rule is ever skipped, the proxy filters the requests.
  3. The PAC file remains the same.
  4. Encourage the usage of the uBlock Origin addon, as suggested by @lightyear in this post.
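
Since the launcher’s proxy is node-based, steps 1 and 2 could boil down to something like the sketch below – a plain (req, res, next)-style middleware, so it could slot into an express-style stack. The function name and exact behaviour are my own assumptions for illustration, not the actual launcher code:

```javascript
// Hypothetical sketch of the origin check: allow the request through
// only if the Origin header's host is "safenet" or ends in ".safenet".
// Anything else gets a 403, regardless of what the PAC file did.
function safenetOriginOnly(req, res, next) {
  const origin = req.headers['origin'] || '';
  let host = '';
  try {
    host = new URL(origin).hostname; // throws on missing/malformed Origin
  } catch (e) {
    host = '';
  }
  if (host === 'safenet' || host.endsWith('.safenet')) {
    next(); // safenet origin: let the request through
  } else {
    res.statusCode = 403;
    res.end('Forbidden: non-safenet origin');
  }
}
```

The same shape of check would apply to the proxy’s request filtering (step 2), just keyed on the requested host rather than the Origin header.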

The problem here is that the proxy and the addon have to be configured manually (more configuration).

If we could create a simple addon that configures the proxy and also adds the filtering rules as needed, it would make things easier for the user.

Thanks @lightyear for sharing the configurations.

Please share your thoughts on this approach.

8 Likes

If I understand correctly, this means a browser extension won’t work when used on a public web address, is that correct?

My understanding of web security isn’t as deep as you guys’, so could you give a concrete example of the type of attack this will prevent?

I understand that with #2 a safenet website won’t be allowed to embed URLs from the public web, and I understand why that would be bad. But why is #1 necessary?

Honestly just trying to understand it.

1 Like

This isn’t enough for me personally. I do not ever ever want to use my normal browser or encourage others to. I do not want a web proxy (which I believe encourages normal browser use) and at least definitely don’t want it on by default and still without CSP headers. I do not want browsers accessing the API (which continues to encourage normal browser use) and at least definitely don’t want it on by default.

Sorry, my opinion is that encouraging normal browser use is terrible for a project that prides itself on anonymity. But I’ve made this point a lot now, so I’ll quit beating the dead horse. Do what y’all think is best.

4 Likes

@cretz keep beating! This horse isn’t dead, it’s just owned by a crowd of folk with different levels of understanding (as well as different goals). For myself at least, further discussion and explanation of the specific risks (even though you’ve already done this) is helping my understanding. I still don’t understand the issues well enough, so I hope you will keep beating.

2 Likes

CSP conditions can be added in the HTML using meta tags, can’t they? Or do you mean to enforce and send a set of CSP headers from the launcher for each request, even if the dev doesn’t add them (more like a best-practice enforcement)?

Please don’t get me wrong, I am just trying to correct my understanding of this.

For every request, anyway. We can’t be asking the devs to do something if we can do it ourselves; the same goes for asking the users to do something (e.g. updating uBlock). Also, there is an obligation to protect users from bad devs where reasonable… asking the devs not to mix content but to add their own headers is like asking the devs not to write huge files while not even asking the user for app approval.

But it’s all moot to me. I completely disagree with browser-based use of SAFE, and I shouldn’t even be discussing compromises to my own principles.

1 Like

Imo, removing browser-based SAFE apps would castrate SAFE net, sending it off on a path to obscurity. Love it or loathe it, web apps are extremely popular, rapid to develop and simple to deploy over a multitude of platforms. Cutting off this avenue seems like throwing out the baby with the bath water to me.

Having a baseline of reasonable security and privacy far higher than the clear net will be a boon to a great many users. For those who want to go a level further, other options will remain - they can create dedicated, stripped out, browsers or standalone apps.

Ofc, many may want a standalone app for dealing with their Safecoins (or other data sensitive to themselves), but that option will always remain. For me, it is critical that these high risk apps can remain clearly separated from others, but there would seem to be many ways this could be achieved (2 factor auth, multisig, etc).

3 Likes

These seem like good improvements. However, I would still suggest that web browser api access should have a toggle to disable. Some will simply not be happy exposing this under any circumstances.

2 Likes

Just to point out an opposing issue here.

My understanding is that people will have websites on the clear web with content from SAFE also on the web page. There would be an “install SAFE” button as well.

This would let non-users of SAFE use SAFE and ease the introduction of SAFE to the wider public. People may also want to have websites on the clear web with a lot of their content stored on SAFE. Maybe they have a backend they don’t wish to convert to a SAFE app (yet).

OR is this simply stopping SAFE sites from having content on the clearweb? But then what if I want to link to the clear web for some content that I don’t wish to store on SAFE (either for legal reasons or otherwise)?

1 Like

This sounds good to me. I suppose nothing prevents us from building both approaches, and natural selection will do the rest. There should be a clear warning of the risks one is exposed to when using a regular browser, though. I tried to do that with the disclaimer I put up when one opens ‘safeshare’. Then again, I know we never read disclaimers…

Maybe it would be interesting to try to analyse and summarise what the risks are when you use a regular browser, since, as @happybeing says, we all have different levels of understanding on that point. I will try as soon as I find time.

2 Likes

You can still link to anything. The idea is to prevent the mixing of content on the same page, as that would make the website (and with that your safenet) vulnerable to the privacy problems of the clear net – which in turn means we can’t guarantee safety anymore. So you wouldn’t be able to include clearnet content on safenet sites, but you can still link to it.

However, you are pointing towards another threat vector we should be aware of: clearnet sites including safenet content. In the current system, you could still add an iframe of a *.safenet domain within your clearnet website and through that communicate with the safenet API. Now, I have already pointed out that this would not leak any information by itself until the user has given permission, but I don’t think there is any value in taking a risk for that either, thus I am proposing:

@Krishna_Kumar, can we change the proxy (in the launcher) and the API endpoint to add an X-Frame-Options: SAMEORIGIN header to all responses? Yes, on both. This would prevent this kind of attack.
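
In express terms the proposal is small enough to sketch here – a hypothetical one-line middleware (the name is mine, not the launcher’s actual code):

```javascript
// Hypothetical sketch: stamp X-Frame-Options: SAMEORIGIN on every
// response from the proxy and the API endpoint, so a clearnet page
// cannot embed a *.safenet site (or the API) in an iframe.
function denyCrossOriginFraming(req, res, next) {
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');
  next();
}
```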


As I pointed out before, those risks aren’t any higher with a web app (with our new safeguards in place) than they are with a locally installed app. Adding such a message without also adding a “This app might send any of your private data anywhere it likes” warning to all locally run apps doesn’t make any sense. As I’ve pointed out many times: you are much more vulnerable with locally run apps (which can also read all the apps you’ve installed and the entire content of your home directory, are probably able to really identify you with those, and can send it all to who-knows-whom) than with any sandboxed web app. Just because some people are scared of webapps and keep claiming they are more hazardous doesn’t make it so!

Gimme solid evidence (that I can’t debunk in seconds) and I am happy to change my mind, but until then: adding warning signs doesn’t make anything more secure – it just scares people away from using it (which is often what you actually want in reality, though). Fixing actual (not perceived) security problems, however, does. Let’s work on that instead.


If anyone wants to start discussing how we can actually address the security and safety of locally run apps (which would include a dedicated browser), please feel free to take any of my examples and start a topic about that. I think it is a discussion we need to have sooner or later.

5 Likes

Yes. I am on the hunt for dependencies that can help cater for these better. The helmet node dependency seems to cover a few of the standards; I am just starting to explore this module. Please share your inputs on it, or suggest any alternative based on your previous experience. I am also planning to compare a few similar modules before I spend time on the helmet dependency.

2 Likes

I suppose you are looking at that primarily for the CSP module, right? Because frameguard would really just be a one-line header addition that isn’t worth another dependency, and many of the other features don’t really concern us in the current setup (public-key pinning/HTTPS etc…). I’m not sure I’d add the entire helmet suite; rather only those two modules directly.
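
For reference, hand-rolling the CSP header itself is also only a couple of lines; a hypothetical sketch (the directive value below is my placeholder, not a decided policy):

```javascript
// Hypothetical sketch: send a Content-Security-Policy header on every
// response, so pages get a baseline policy even when the dev adds none.
// "default-src 'self'" restricts all resources to the page's own origin;
// the actual directives would need to be decided for the launcher.
function contentSecurityPolicy(req, res, next) {
  res.setHeader('Content-Security-Policy', "default-src 'self'");
  next();
}
```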

I don’t have much experience with the express ecosystem, nor with any specific modules in it (well, three years ago doesn’t count), but I’d really suggest looking into adding a rate limiter so the launcher doesn’t fall prey to malicious apps DoSing it. express-limiter seems like a good option (but I am looking from the outside here – take with twenty grains of salt!). Strike that, it requires a Redis server. express-rate-limiter doesn’t, but it also offers far fewer features – it only rate-limits by remote IP, which is totally useless for our case. Hmm… one that allows rate limits per API key (which makes sense) and/or User-Agent (for the proxy), but without a dependency on Redis, seems to be… rare…
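
If nothing off the shelf fits, a minimal in-memory limiter keyed by API key (falling back to User-Agent for proxy traffic) isn’t much code either. A hypothetical sketch under those assumptions – fixed-window counting, no Redis, all names mine:

```javascript
// Hypothetical sketch: fixed-window rate limiter with no external
// store. Keys on the Authorization header (API key) when present,
// else on User-Agent (proxy traffic). Not the real launcher code.
function makeRateLimiter({ windowMs = 60000, max = 100 } = {}) {
  const hits = new Map(); // key -> { count, windowStart }
  return function rateLimit(req, res, next) {
    const key = req.headers['authorization'] ||
                req.headers['user-agent'] || 'anonymous';
    const now = Date.now();
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request in a fresh window for this key.
      hits.set(key, { count: 1, windowStart: now });
      return next();
    }
    if (++entry.count > max) {
      res.statusCode = 429;
      return res.end('Too Many Requests');
    }
    next();
  };
}
```

A real version would also need to evict stale keys so the Map doesn’t grow forever, but the shape is the point: per-key limits don’t strictly require Redis.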

3 Likes

Thanks for all the inputs @lightyear.

3 Likes

I think my position comes from the idea that the code of an autonomous application can, and must, be entirely reviewed. I assume the code is open. The author knows exactly what they put inside, and the community can check what happens under the hood. People can get the source, verify its integrity with a checksum, be sure of what they run, and have solid confidence.
In the case of an application run inside a browser, things are quite different. Even if the application source is well known and reviewed, the code for the browser is so huge and complex that I have serious doubts anyone still has an idea of what happens behind the curtains. The browser has a variety of ways to leak things, make unwanted connections, and store tracks and logs of every kind; scripts can access stored evidence afterwards and send history or cached content wherever they like, even long after the web app is closed. The code evolves so fast that even the most trained and competent expert doesn’t really know what happens inside – especially among the latest generation of developers, who don’t seem to even get what privacy can mean.
I would love to be pointed to some Firefox or Chrome documentation or analysis that shows where and how many the weaknesses are, and that would give me evidence that once you duct-tape, say, 3 holes in Firefox, there are no other holes left to tape. How can you trust such a pile of self-updating scripts and plugins, when Chrome, for instance, is written by the world champion of data harvesting? Show me that we have a precise and exhaustive knowledge of the behaviour of this browser, and I may think differently.

My other concern comes from the fact that almost everyone uses their browser without any sort of safeguards. You said you use a dedicated profile, uBlock Origin, and a VPN tunnel. I do, too. I also use RequestPolicy to block external requests, I have Firefox erase cookies and history each time it shuts down, I use a custom hosts file to block everything that tries to track me, I fake the referer and user agent whenever possible, my Macromedia folder is symlinked to /dev/null, I completely disabled the “smartbar” and search autocompletion – and even then I am not sure I didn’t forget something.
Almost nobody does that, or would want to do even half of it, for convenience reasons. At some point, it appears to me that just for convenience, having an autonomous application is much simpler than checking and activating all those filters.

Let’s just look at a simple scenario.
People have their ‘smartbar’ in Firefox with the default behaviour: search autocompletion on.
They have the uBlock Origin rule, a new version of the launcher that proxies only .safenet links and prevents cross-origin requests. They even have plenty of other clever filters that I haven’t thought of.
Now they want to visit “http://politics.safepage.safenet”. They type “http://politics.safepage.safeneY” in the URL bar (← notice the typo).
What happens now in their smartly safeguarded browser? The browser gets a DNS lookup failure and triggers a Google search. As our user has a pile of Google trackers in their browser’s drawers, Google Inc. correlates this search with their identity and sells this shiny piece of political information to whoever is willing to pay the price.
See what I mean?
I do agree that the browser way is the only way to create mass adoption. But it cannot be trusted, and it puts all the beauty of the SAFE concept back to square one in 99% of users’ cases.
On the other hand, a simple, well-documented, reviewed open-source program does what it is asked, and does only that.
This is where my concerns come from.

2 Likes

Another simple scenario.
Our user visits a SAFE webpage. Everything is under control, no external requests. Then they click on a link that points to a clearnet server. As they have never heard of the referer before, their browser is set to send the clearnet server the referrer of their origin, and the distant server learns that our user came from the safenet webpage. They mix that with whatever cookie or Macromedia evil trick they have, correlate the visit with an identity, and privacy is broken to pieces again.

1 Like