Safe Launcher Security

Continuing the discussion from [WIP] Introducing: Ghost in the Safe – publishing/blogging webapp for safenetwork experiment:[quote=“happybeing, post:11, topic:9477”]
@cretz how about creating a little warning JavaScript so we can encourage all SAFE Web App Devs to include it? It could operate like the Cookie Warnings we now see everywhere - a little banner saying “Warning blah blah” and linking to a security how-to on the SAFE network. This would explain the need to use SAFE Browser (or whatever) and what the risks are if you don’t, when using web apps.
[/quote]

Creating new topic as requested, but I don’t know that this needs to be rehashed any more than it already has on the forum. I try not to become a part of what I often see as aimless bloviation on the forum.

As for safe launcher, I do have opinions. I’ll just jot things down here as bullet points:

  • Previously I opened a JIRA issue about making the API not accessible via webpages by default, but in retrospect I should have said that the ability to access it from the user’s browser (i.e. CORS support) should be removed entirely. Why - Because any site on the internet can now access a local Node.js web server…bad, bad, bad. You should not even be able to opt in to this.
  • The web proxy should be a separate app. Why - There is no benefit to it being in the launcher; it is a perfect example of what an app should be. Also, many users won’t need or want it, I presume.
  • The concept of the PAC file and system proxy should stop being encouraged. Why - Besides the huge XSS and deanonymization issues, it also encourages apps to be developed side by side with the public internet.
  • People wanting to leverage HTML/JS as their frontend should do so with an app. This can either be some kind of Electron/nw.js/sciter/whatever thing or a local webserver if they really want to expose their things over HTTP. Otherwise, they can wait for some kind of safe browser or something.
  • I acknowledge that we are in testing and some of these things are there to make tinkering easier. But it has gone beyond that, to people putting considerable effort into these bad practices. Tinkering and playing is always encouraged, so long as developers understand that making safe-based user-browser webapps will require them to rewrite some of what they are doing, or to accept that their users must [hopefully explicitly and not easily] opt in to exposing themselves.
  • Most software developers realize, especially when it comes to APIs and other base programmatic abstractions, that the first version often sticks and future [post-MVP] alterations cannot break backwards compatibility, at least not for a really long time.
  • As a community we should acknowledge that safe apps may not be as easy as developing webapps at first until foundations are built to make that the case. We should also acknowledge that the absence of that foundation now (be it a safe-specific browser or libs or whatever) should not excuse shortcuts.
  • Do not confuse these bad practices with something that simply hasn’t been built yet. These are intentionally developed insecure features. Some JS warning or whatever is not needed and sets a really bad precedent; the insecure features need to be removed or, at worst, made opt-in via a separate app or a dangerous-looking setting.
  • Upon actual MVP release, all of the effort put into the low-level libs can be for naught if the security is abandoned at the higher levels as you get closer to the user. This has negative PR effects.
  • I respect Maidsafe’s decisions whatever they choose and these are just my opinions.
  • I don’t want to discourage any app development or any tinkering, I just want to caution devs.

That’s my two cents. Please don’t perceive it as a negative towards any of the hard work done.


I know these issues must be apparent to MaidSafe but I’m not clear on their intentions. I agree with @cretz that we should try to look ahead to where we want to be, and try to set the trend both in hard-coding things like the Launcher API and in guidance for developers. If all Chad’s concerns stand up, and they seem to with my level of understanding, we risk a false start in this area. I think this matters because web apps are going to be very important: they leverage existing skills and so will be attractive to many developers, and they have other advantages over standalone app options (including things like Electron/Node etc.). So lots of pure web apps will be built unless we make it easier to do in other ways, even if that’s just by not supporting web apps in Launcher (except by some future mechanism such as a SAFE browser).

@Viv What is MaidSafe’s position in this? Where do you want us to get to? :slight_smile:


Being a framework with security as one of its fundamental goals, we’d certainly not want to suggest options that would lead to compromised security at the client end, while at the same time not sacrificing UX completely. Personally that’s why I’m interested in such discussions, and in how they bring end users’ UX into scope.

For one, even if the proxy server didn’t exist in the launcher and web access to the launcher was disabled on a first-party basis, being a framework means that’s just a hurdle until an app chooses to do the same. At that point, that app could essentially be authenticating the user too and become a different launcher. Now, if the general opinion is that such an interface should be blocked, it would be worth confirming how we plan to achieve that. Otherwise it becomes a simple task: the launcher doesn’t expose a proxy server or the like itself, and that becomes a standalone app that users choose to install.

In the early days, when a SAFE-network-only browser isn’t available, that might be a popular app, but once such a browser is available it may well deprecate the proxy server approach.

One thing I’m still a bit unsure about in this is things like:

Are we saying browsers should just not be able to communicate with the launcher directly at all, whether for public or authenticated requests? What if the same happened via browser addons or similar, which again become third-party plugins?

If this is very much just a question of having these features bundled with the launcher, the solution can be as simple as extracting them and letting the app devs and their users decide how they want to use the network, as UX is going to drive a lot of these client app decisions, I’d think.

I think building bridges to link the clearnet with Safe can be a great way to introduce a lot of people to Safe. Browser apps may not be as secure as standalone apps, but they still improve the security of users’ data tremendously.

I think education is key. For the users and for the developers. Let’s not throw out the baby with the bathwater.



It’s quite clear to me: with the PAC file they provide a (quick and easy) entry point to *.safenet for browsing. I mean, without it there wouldn’t currently be any way to browse the network at all (the Firefox plugin essentially loads that PAC file too, so…). So, considering time constraints, this is a very feasible solution to provide access to a broader audience.
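For context, a PAC file of this sort is tiny. Here is a hedged sketch of roughly what it does – the proxy port and the matching logic are my assumptions for illustration, not the actual shipped file:

```javascript
// Illustrative PAC file for SAFE-style browsing (NOT the launcher's actual
// file; the proxy port 8101 and the matching rules are assumptions).
function FindProxyForURL(url, host) {
  // Route *.safenet hosts through the local launcher proxy...
  if (host === "safenet" || /\.safenet$/.test(host)) {
    return "PROXY localhost:8101";
  }
  // ...and send everything else directly to the normal internet,
  // which is exactly the traffic mixing being debated in this thread.
  return "DIRECT";
}
```

Since FindProxyForURL is evaluated for every request, this is also the place where a stricter file could refuse clearnet URLs outright instead of returning DIRECT.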

And sure, there should be a dedicated SAFE browser – no one questions that. But it is also a big piece of work and the network isn’t nearly in a state to support that yet. If you want that, just go ahead and configure your browser as @Powersign explained. But be aware that this will make the browser completely unusable for any other internet activity. Considering the limited size of the network at the moment, and that most people use one browser, this is a considerable problem and would slow adoption.

Either way, the described “problems” caused by that – like still having Facebook or Google Analytics tracking – are privacy issues, not security issues. I want to make that clear because the mixing of these terms isn’t helping. And these issues are caused by the way browsers operate nowadays, not by safenet. Sure, a safenet browser should prevent that from happening, but since anyone can access the network from anywhere at any time (even with browsers that aren’t the official one), and anyone can still put this content into the pages they host, there is no way for the network itself to prevent it from sometimes happening – only the tooling around it can.

And to answer @cretz questions from before:

No, you did not. I am using a dedicated browser (a separate Firefox profile instance) exclusively for the safenetwork worker and browser, where I never log in to Facebook, GitHub and the other social thingies. It further blocks such requests using the uBlock Origin addon, and even if you did some simple IP tracking, my VPN tunnel would probably make that useless to you.

And none of that has to do with webapps or my app in particular (where this discussion originated – why, I am not sure). Sure, any (web)app can publish any content on the network, and if your browser (environment) acts as browsers classically do, then you may fall into these privacy traps – not security traps, but I’ll get to that later.

I want to respond inline to some other broad claims made here:

Why? Browsing isn’t a problem; they can read any public content from anywhere without any problems. Heck, you could throw ‘crust’ through asm.js and run it inside your browser, and wouldn’t even need the launcher to access the network at all.

The only thing the launcher provides beyond access to the public content is access to private content – but only through an OAuth access loop that the user has to authorize. The same loop, may I remind you, that any local app would also have to go through. This isn’t, and shouldn’t be, any different between web and local apps. Of course this doesn’t protect the user from bad actors – any app that has been authorized can then abuse the data as it pleases.

Which brings up the actual point of security, which I believe to be the reverse of what you describe, as it is much worse with locally running apps than with webapps. While a browser sandboxes the app and – in a hardened environment where neither non-*.safenet requests nor websockets are allowed – ensures your data stays in that app/session, any locally running app has all system resources at its disposal to do with the data as it pleases: store it locally, send it via the network to any other system, print it out, encrypt it on the filesystem for later use, whatever it likes.

A webapp does not have these capabilities, or at least they are all under the browser’s control, and the user can simply clear their session to prevent anything from staying on the system.

While I agree there should also be a separate web-proxy-app, which doesn’t require a UI, there is no harm in having it in the launcher, too. It makes distributing the setup much easier, as people only have to start one app. And as you can use the web-proxy without ever having to sign in or up, there is literally no harm in bundling them. What is your presumption based on? I’d rather argue that it is clever to put them together because it allows for an easy transition from being a mere observer (using only the proxy feature) to become an active participant in the system. All you have to do is sign up in the launcher and then you can use that ghost in the safe webapp you just started to host your own blog. Awesome – publishing hasn’t ever been easier!

What XSS issue? Cross-domain scripts should be prevented by your browser (a standard feature; even Opera Mobile does it). And that is really not a problem that providing a proxy causes…

What? Where? How does it encourage that? It makes it possible for the moment, agreed. But the few people who are actually building things now are distribution and privacy hardliners. Have we even had one case of someone adding Facebook tracking or Google Analytics to their maidsafe site? While I agree that the PAC file should – in due time – prevent non-*.safenet URLs from working – and a statement from the core team that this will happen might be a good signal here – there is certainly no encouraging happening by having this open at the moment.

You are not making any point why the system should be stopped. It can easily be updated to prevent said problems.

However, I now finally see where your idea came from to bring this up on the webapp thread. Many Ghost themes come with these things built in – although you can’t actually configure them within my app at the moment.

Again: why?

You make these broad claims that, somehow, local apps are supposed to be more secure and better than webapps loaded from said web and executed in your browser’s sandbox. I already explained why I believe the opposite to be the case, so there is little point in repeating it, but there is another great case:

If you had to install a (hypothetical) document-leaking programme on your system, then when police confiscate it, that would be enough in many (not so democratic – including the US) countries to keep you in custody for a very long time. While on the other hand, if you were running it solely from that maidsafe web, in a private-mode Firefox window, all traces would be gone after closing your launcher and Firefox. And it could be run from any internet café, school, library or workplace, where you don’t have the ability to install programmes locally.

If anything, I am excited about the possibility to provide all apps from within the network and give anyone the ability to run them anywhere and without a trace (which will greatly improve once WebAssembly arrives). This protects their privacy more than anything we’ve ever done as humankind.

What? Why? If anything, safenet can and should leverage the ease of software development that webapps offer. You’ve still not given one actual reason or security concern against them. I also don’t see that “absence of a foundation”. I built Ghost in the Safe in 10 days, part-time. And it doesn’t do any of the sketchy things you claim webapps do. I agree that in the future all this should be hardened for privacy, but that has nothing to do with webapps. If they rely on Google Analytics, then they will break soon – seriously, if just enough people browse safenet with uBlock Origin for now, there is little point in even trying to use these sketchy techniques.

Which “bad practices”? Doing web development? Or doing the sketchy things within that? Again, I’m still waiting for actual proof of those being implemented – and being more abused on the web than locally. Otherwise calling them “practices” is quite a stretch of meaning, and “hypothetical possibilities” is a better one…

Who says that? And what proof/data is that statement based on? As a software developer on and off the web, I can attest that you can do way more harm on the system itself than from within the web browser sandbox.

Idea: While writing this, I came up with an idea. Maybe for the time being a fork of uBlock Origin which blocks any non-*.safenet-URLs on all *.safenet-websites would be sufficient. You could still browse the web as before but can be sure to not leak any safenet info outside of it. And secondly, maybe offer an official global proxy (network) that people could use for browsing without a local launcher. Hmm, the first might be a thing I could investigate …


I agree. My statement was too broad. It was regarding the threat of XSS attacks and deanonymization. It’s obvious that if a native app goes rogue, it can do much more harm.


It is a bad solution, subjecting people to a default platform of cookie tracking, cross-site tracking, and deanonymization. Please take a look at TOR deanonymization approaches to see why… I’ll go through the rest of this post.

Slowing adoption is better than insecurity. I have even provided a simple proof of concept at GitHub - cretz/safe-poc-browser: Simple Proof-of-Concept Browser for SAFE Network and have vowed to make it a full-blown app (GitHub - cretz/shrewd-old) once the next backwards-incompatible API revision comes through. Why can’t you stay out of my regular browser?

Speaking for myself, I will never run the launcher as it is right now (with the CORS headers on by default), and when I do run it I will never use the web proxy. If I have an app that needs to proxy safe stuff via HTTP, easily consumable by a browser, then I’ll make that (which I personally never will).

Not only is this wrong, it is incredibly dangerous dogma. Deanonymization, such as tracking, is a big security issue. Privacy issues are security issues in a system that offers anonymity. The TOR project and many others consider these real, dangerous security threats; why can’t we? And that doesn’t even touch the surface area of the local Node.js API that anyone can hit – an unnecessarily exposed surface area.

Yup, exactly. Just don’t expose a localhost web server to the API and don’t encourage use of a proxy with people’s browser, that’s how you prevent it.

Others may not. This is about protection for all including non savvy users. We should not promote the vector even if you are wise enough to protect yourself. Just because you have a browser plugin that blocks that and a VPN does not mean others do. It is very important as developers that we understand our responsibility to the masses here.

Browsing is a problem. Mixing HTTP and safe traffic is a problem, even if it isn’t a problem for you and your protected setup.

That’s like saying the TOR browser should allow onion hidden services access to HTTP because you can’t protect from bad actors. You have to mitigate risk, it’s not a well-bad-things-can-happen-oh-well issue. You educate the user and you harden your ecosystem.

Yup, I promote this. Are you going to harden all users systems? Again, if we can protect the user we should.

Definitely. Local apps are not more secure than webapps in general. It’s the mixing of HTTP and safe network content so frivolously that can make it not as obvious. Sure if you let me install an app I can install a keylogger. But there is a clearer line of demarcation.

Can I have my app bundled too? In general, I believe it should not be bundled because its use should not be encouraged for the many other reasons I have stated.

Browsers don’t block <img src="" /> natively. In this case, XSS would mean that a less-than-well-coded safenet site could allow HTML injection that could hurt the user. This risk is increased by including HTTP content (especially without CSP headers, which I have called for too). With a properly sandboxed setup as I propose (instead of asking the user to harden their system), you cannot do anything nefarious with injected content without my explicit approval given to the app.

And I didn’t even speak to the CSRF issue. Wait until the first popular in-the-browser safe app that leverages the URL to do something on behalf of the user is linked to… ugh, I can see a nefarious HTTP site linking to http://mysafeaccount.safenet/deleteUser?id=123. Granted, CSRF is mitigated by smart web devs, but it can also be helped by not sharing with HTTP. Granted, the vector is still open from safe to safe, but normal CSRF prevention rules should apply (i.e. not doing mutable things based on a GET). Then again, you can’t handle a POST vs a GET via your browser-only app, so you have to be more careful than you otherwise might be were there a server-side component here.
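To make the mitigation concrete, here is a sketch of the guard a browser-only SAFE app could apply before acting on behalf of the user – the helper and its names are hypothetical, not anything the launcher provides:

```javascript
// Hypothetical CSRF guard for a browser-only SAFE app: reject state-changing
// actions that arrive via GET/HEAD or without a matching session token.
function allowMutation(method, sessionToken, submittedToken) {
  // Never mutate on GET/HEAD, so a link like .../deleteUser?id=123 is harmless.
  if (method === "GET" || method === "HEAD") {
    return false;
  }
  // Require a token a cross-site attacker cannot read.
  return typeof sessionToken === "string" &&
         sessionToken.length > 0 &&
         sessionToken === submittedToken;
}
```

The point being: even without a server-side component, the app itself has to enforce the no-mutation-on-GET rule, which is exactly why careless webdevs are a risk here.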

Yes. I have seen safenet sites with it on the first round. You cannot expect webdevs to know any better and you can’t expect users to either. The system must offer all the protections it reasonably can.

Yes I am. I look forward to MaidSafe’s first audit. “We are an anonymous system”… um, no, you are not, because you chose to actively expose users to the public internet (yes, actively: they put the web proxy in place and put CORS headers in place).

That is my point. It can be easily updated to prevent said problems; that’s why the current approach should be stopped.

I have brought this up everywhere I can for many months now. Your thread was not unique. I am trying to discourage devs from expecting that users of the system are going to run the web proxy and the launcher in every-website-can-see-me mode. I personally didn’t look that deeply into your app, and admit that my hypothetical probably wouldn’t work too well on it since I assume you escape your output (but there are still problems there too).

They aren’t inherently more secure. The argument is akin to why competent auditors say client-side JS crypto is bad (there is a commonly referenced article on this). The browser that you share your other activities with is not safe. You do not have it sandboxed. If you did have it sandboxed, fine, but most people don’t. You are basically telling the TOR team that they should use the user’s regular browser. It is actually very poignant to bring up TOR, since this project makes the same guarantees. The same reason that people have a TOR browser applies here, with all of its justification. If my app wants to use TOR, you can download it and run it and it’ll connect to the SOCKS proxy… does that mean that my app is safer than your browser? No. Does it mean the TOR browser is safer than your app and mine and my browser? Yup.

If you keep it all within the network, I agree. If you mix traffic and reuse a user’s browser (whether or not you, as a tech-savvy user, avoid using the same browser), it is bad.

Sigh. If you ignore deanonymization because you don’t consider it security (while the rest of the anonymity-preserving projects do), and if you ignore that you have an exposed local web server accepting requests from any website in the world on your local machine via cross-origin, then yes, I have not given any reasons. Ease of development is important. HTML/JS is a very easy platform and should be used, but that doesn’t mean you have to reuse the browser, or bind APIs to localhost ports that anyone can access, or mix traffic. You can have ease of development and security.

So when I install my own Piwik server and have it track you, is that in your uBlock filter list? There is so much history here of TOR deanonymization attacks that correlate user activity that I don’t think it has to be repeated. Maidsafe has a chance to be better because it is immutable by default and requires users to give permission before apps can do anything that could store something about them. Once you allow HTTP on your safe sites, you’ve destroyed that.

The bad practices, enumerated:

  • Bad practice 1 - using cross-origin headers to open up a locally listening web server to all websites I browse.
  • Bad practice 2 - turning it on by default with no way to disable it.
  • Bad practice 3 - turning the proxy on by default.
  • Bad practice 4 - asking users to alter their proxy settings and encouraging a setup with mixed HTTP and safe traffic.
  • Bad practice 5 - not providing CSP headers on the proxy to at least prevent mixed HTTP and safe traffic.
  • Bad practice 6 (which you disagree with) - encouraging users to use their everyday browser with safe.

There are many, many more were I to sit and enumerate them all. Though I have a feeling the first audit will do it for me.

Right, but what if the HTML/JS platform were given to you without all of the mixed content and all of that? It’s not that difficult. If I say that my browser, which blocks HTTP and access to the launcher web server, is safer, then isn’t all you’re really saying here “well, we could be safer, but it’ll take too long”?

I promise you, common users don’t know how to open browsers with multiple profiles and whatnot. They will use their everyday browser. If the proxy used Content Security Policy headers, at the very least, this would already be accomplished.
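As a concrete illustration of what CSP on the proxy could mean – the directive values below are my sketch, not anything the launcher actually ships:

```javascript
// Hypothetical Content-Security-Policy the proxy could attach to every
// *.safenet response so pages cannot pull in clearnet resources.
function safenetCspHeader() {
  const policy = [
    "default-src 'self' http://*.safenet",  // only safenet-hosted resources
    "img-src 'self' http://*.safenet",      // blocks <img src="http://tracker/...">
    "connect-src 'self' http://api.safenet" // XHR/fetch only to the API endpoint
  ].join("; ");
  return { "Content-Security-Policy": policy };
}
```

A browser enforcing such a header would refuse clearnet includes (fonts, analytics scripts, tracking pixels) with no user configuration at all.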




That is correct. This is just the difference between maidsafe promoting it and not, that’s all.

I am 1 bazillion % saying that. I consider a plugin no different from an app in those cases… local machine access gives lots of rights. This could be the only project I have ever heard of in my life that automatically opens up an API on localhost for access from the browser. Even if you don’t auth, I can beat it down with plain requests… I hope the launcher API entry-point code is thoroughly sanity checked, because as soon as this becomes known, every nefarious HTTP site out there will include, in its hidden box of JS goodies that it runs, attempted calls to see whether the localhost port is open. Big bad vector… evil.
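To make the vector concrete, here is a sketch of the probe such hidden JS could run – the port is an assumption for illustration, and fetch requires a modern browser (or Node 18+):

```javascript
// Sketch of the probe a nefarious clearnet page could run: does a
// launcher-like API answer on localhost? (The port is an assumption.)
async function probeLauncher(port = 8100) {
  try {
    // With permissive CORS on the launcher, this succeeds from any origin.
    await fetch(`http://localhost:${port}/`);
    return true;  // any HTTP answer at all reveals the launcher is running
  } catch (e) {
    return false; // connection refused or blocked: nothing detected
  }
}
```

The mere yes/no answer is already a fingerprinting signal: a clearnet site learns you are a SAFE user before any API call is even attempted.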

I am going to make this as clear as I can: I will not run the launcher in its current insecure state, and if y’all launch an MVP with a localhost opened API and encourage users to change their network proxy settings (or do it for them) and have them use their browser which can contain safenet sites with mixed HTTP and safe content, you will get lambasted by the community.

It kinda is. But they are going to choose the easier route. This is akin to saying “well, we’ll leave the authentication up to the user, if he wants only a single factor auth instead of the MFA that is provided now, that will drive the decision”…no no no. Sometimes you have to secure the user at the expense of their convenience, ESPECIALLY if you are developing a platform to build upon where others can make app development easier and easier. This is one of those times. Convenience should not even be a question here…go tell the TOR team that having to download a separate browser is inconvenient and that you want to use your current browser with your plugins.

Sorry I am writing so much on this. I apologize if I am coming off harsh. I am just very worried as I see more and more in-the-user’s-everyday-browser use of SAFE which I was hoping would not become a pattern. It is such a huge issue to me that I will not develop on SAFE if SAFE sites are promoted for viewing in the user’s browser OR if the launcher API web server is open to public websites.


As a web developer, being able to access a user’s content stored on safenet on their behalf is a crucially important thing. I am very happy having this provided through the api.safenet endpoint rather than connecting to localhost (it also makes things much cleaner, imho), but I am not sure what the actual difference is between the two at the moment (is there any?).

Whether that is provided by the launcher or any other part of the ecosystem is not really that important to me. The launcher just feels like a “natural” place to have these authentications handled.

I am not concerned about many of the issues raised here about the launcher providing it. I explained how the supposed CORS issue is largely circumvented by requesting user authorization beforehand, and that most issues brought up against web apps (as a general thing) are bogus and have little to do with the setup in question in the first place. Providing any kind of programmable API (and the launcher offering the HTTP API with proxying support is really just that) to local and web apps needs to handle rate limits and such anyway, to make sure bad actors don’t bring it down easily. That is a separate issue though, and one that any API will face.

Similarly, I feel that bringing up “XSS” as the all-encompassing evil of the web is hugely oversimplifying the worldview here: any programme that executes code entered by the user will be prone to this problem. Remote code execution is a problem in all development environments, but on the web it will at least be contained to the app in question, rather than taking over your entire system (as it could for any locally run app) – the keylogger example mentioned elsewhere is simply not possible via the web. Mixing and matching the problems mentioned here as supposed “problems” that only the web, and the launcher providing to the web, face is hugely misappropriating them. Considering that these supposed problems are claimed against an API change that doesn’t help to protect from them but will cause huge harm to the web app development ecosystem is not constructive. I will abstain from continuing this discussion.

I am however deeply concerned by the mixed-protocol-problem and think that needs to be addressed indeed. While I understand the choice made for now (as stated before), safenet can not accept a system that allows such easy data-leakage.

And I was unaware that I unwillingly contributed to that with the Ghost in the Safe app: even in its default theme (‘decent’) it currently pulls in a font from Google Fonts via HTTP :frowning: . This allows Google to track (and we know they do) the source of the include and the font, and even if it is only the IP address, if they can link it, they know the article you’ve just read. That is indeed unacceptable.

This is a big issue indeed, and one that I as a web developer am very concerned about getting right from early on. Providing sane defaults from the core team and promoting good citizenship is a good starting point. Scrapping web access to the content is not an appropriate solution, however.

Though I like the proxy idea, I am afraid that in its current state it is indeed too generous. We really need to strictly restrict *.safenet pages to *.safenet access to ensure the privacy claims hold true. I understand that isn’t easily possible in the current setup (at least if you want to allow users to still use the normal web with the same browser). I, for one, will enforce it from now on with the following uBlock Origin rule (which should work in any browser): blocking any non-*.safenet URL on any given *.safenet URL (you can add that here, if you run uBlock Origin too).
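For illustration, such a rule pair might look like this in uBlock’s static filter syntax – hedged, as I haven’t battle-tested it; check the uBlock Origin filter documentation before relying on it:

```
! Block every request made from a page on the safenet domain...
*$domain=safenet
! ...then re-allow requests whose target is itself under safenet
@@||safenet^$domain=safenet
```

The effect would be that a *.safenet page can load other *.safenet resources but nothing from the clearnet, while normal browsing elsewhere stays untouched.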


Maybe instead of providing a general proxy-file, the team should provide a uBlock Origin Configuration for now? I am happy to continue the discussion on how to harden the privacy concerns of the browser users without having to sacrifice browser based applications (that is an unacceptable and unnecessary trade-off).


Also, a proxy switcher like SwitchySharp or similar can ease the use of safe in this way. So a safe-only proxy for safe sites that will not allow clearnet is a start, I suppose. The ultimate answer here will be a safe browser (a hacked Brave, perhaps), but until then we can compromise between hassle and ease of use – but not safety?

I am sure there is a lot more to be said here, but just throwing in the proxy switcher as an additional tool to perhaps consider.

Then a safe plugin for browsers that handles all this. Then move to a SAFE browser itself?


The idea behind the api.safenet endpoint was to make it a standard endpoint for consuming APIs from browser-based clients, to make it work across platforms. When we move to mobile (say Android), the launcher can be a different implementation using IPC services (just for discussion’s sake). If we have a standard endpoint for invoking APIs, then it would be easy to get the same client code to work cross-platform.

I think @cretz is also not against web apps, but mixing clearnet with safenet.
We had the proxy in place just to support the browsers and to avoid the pain of having extensions (has its own set of limitations) for every browser platform.

I believe @cretz wants the proxy removed and web apps supported only via a SAFE browser. Please correct me if my understanding is wrong.

I do agree with this.

IMO, a custom browser would also follow the same standards, like the endpoints. The proxy right now is an emulation of the logic that the custom browser would actually have in the future, so it shouldn’t break the UX of how sites are accessed whenever the user makes the switch to a SAFE browser.

  1. Build a SAFE Browser (hacked brave perhaps :wink: )
  2. At the same time, don’t remove the proxy. Leave the proxy off by default.
  3. If the user wants to access the web using their standard browser, then they would have to turn on the proxy.
  4. [quote=“cretz, post:1, topic:9488”]
     dangerous looking setting
     [/quote]
     When the user chooses to opt in to starting the proxy, at that point we can prompt the security warnings and leave the decision to the user.

I do take the CORS handling on the launcher as a point to improve, @cretz.


Correct for those that want webapps


Sounds like we have a candidate for consensus! As in a potential target and the steps towards it.

Anyone feel willing to summarise? Then we can see if people still have issues with it or not. I could have a go but think it would be better from someone who actually understands this better.

I already feel this has been very helpful because if I’m right we do have a solid solution on all sides. But as I say, I’m not sure I understand it well enough to be sure! :slight_smile: Edit: and my beer is getting warm in the sun on the front of my boat.


Thanks for explaining. That makes a lot of sense and sounds like an excellent approach. I think the Launcher team should encourage/enforce the usage of that endpoint (rather than ‘localhost’) more. In accordance with that:

Maybe responding to API requests with a CORS header only on that endpoint, and only if the request originated from a *.safenet domain (looking at the Referer and Origin headers), while blocking all other browser-based requests with a 401, would already prevent the other-website-snooping problem mentioned earlier, but still allow local apps and within-session web apps to run. Actually, if you’d accept a PR for this on the launcher, I’d be happy to implement it!
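A minimal sketch of that origin check, as a pure function that could sit in front of the launcher's request handling. The function name and return shape are mine, not the launcher's actual API – this just shows the decision table: no Origin/Referer means a non-browser (local app) request and is allowed without a CORS header; a *.safenet origin gets echoed back as the CORS origin; anything else gets a 401:

```javascript
// Hypothetical gatekeeper for browser-based requests to the launcher.
// headers: a plain object of lower-cased request headers.
function checkBrowserOrigin(headers) {
  const source = headers["origin"] || headers["referer"];
  if (!source) {
    // No Origin/Referer: not a browser cross-origin request
    // (e.g. a local native app) -- allow, no CORS header needed.
    return { allow: true, corsOrigin: null };
  }
  let host;
  try {
    host = new URL(source).hostname;
  } catch (e) {
    // Malformed Origin/Referer: reject.
    return { allow: false, status: 401 };
  }
  if (host === "safenet" || host.endsWith(".safenet")) {
    // Echo the origin back so the browser accepts the response.
    return { allow: true, corsOrigin: source };
  }
  // Any clearnet page trying to reach the launcher gets a 401.
  return { allow: false, status: 401 };
}
```

Note this only stops well-behaved browsers; it is a mitigation for the cross-site case, not a substitute for authentication.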

I know there is a strong sentiment for having control over everything, especially the browser. However, asking the user to install a different browser, disrupting their behaviours and patterns like that, comes at a cost. The Tor browser simply isn’t as comfortable as Firefox or Chrome – and neither will this fork be. Or simply put: most people won’t do it. Installing a browser plugin or extension, however (one that allows access to safenet URLs, implements the API endpoint, and blocks all clearnet traffic for websites hosted from safenet), is much easier to do (see the rise of adblockers) and less invasive.

Asking a normal person to switch browsers, or to switch on a red-flagged don’t-switch-this-on feature in the app just to surf that web – and then not even be fully protected – is sketchy. I’d rather drop the proxy as it exists in that case and focus on a way – aside from having to install a special browser – to allow surfing that web with good defaults on.


Yes I agree and somewhat question this as well. I suspect a packaged browser may well help many folks easily “get it” but also a solid implementation of a browser plugin for safe that does what we want (as in the ublock + proxy switcher etc.) would also be good.

Perhaps the answer initially will be what is quickest, or perhaps we have contenders for a decent bounty program here ?


Now it looks like the experts are in agreement with the points you’ve made here, but one statement stands out for me as an outsider. I struggle with the idea that there could be daylight between privacy and security.

Now maybe for an expert that is the case, but I am concerned these become almost identical in the more important general use case. I see that one might lead to so-called “identity theft,” or worse, loss of data, liability, and even loss of life, but in aggregate the hard line is that loss of privacy, where it's needed, leads to the same on a grand scale. It's not just a lack of privacy; it gets rolled into persistent surveillance for control. I am not trying to put words in anyone’s mouth, sorry, but how do the SAFE people make any kind of trade-off on privacy, even early on? It could work out like Tesla having too many crashes in its Autopilot beta.

Let me answer this, as I am the one who brought up there being a distinction. In software/app development there is, in general, a distinction in the understanding of these terms, which helps us develop appropriate solutions. The difference I mean goes roughly along these lines: security describes the system's integrity – whether a system can be manipulated to do something it isn’t supposed to do or, in the sense of data security, whether someone can gain unauthorized access to functionality or data. In contrast, data privacy concerns itself with the (mis-)conduct when handling data that you have authorized access to. A simple example: someone hacking LinkedIn and stealing the emails is a security breach, whereas a company asking for your email and then selling it illegitimately to a third party is a breach of your privacy.

This distinction is helpful as it shows the limits of each and where each one can be solved. While security is most of the time a coding/code-quality and proper-processes issue, data privacy is much more about a proper code of conduct with the data you gained access to – which is much harder to prove and enforce from a systematic perspective. (Which is why I am happy that the safenetwork takes on that challenge.)

This particular instance – the allowing of mixed-protocol content – is clearly an edge case where security breaches and privacy data leaks overlap. The system we’re designing here isn’t supposed to leak this information (this easily/without a bad actor), and thus such a problem breaches the integrity of the system itself. However, there isn’t any specific problem in the launcher that causes the launcher's own integrity to be compromised (and thus the title is misleading); rather, the way it interacts with other parts of the system easily leads to a breach of the conduct expected for privacy-relevant data within the system – it’s complicated.

Moving on from that, one important conversation I think we should start having – in a separate, more appropriate topic – is how we want to grant “access” to apps within the SAFE network. Clearly the system of feature-based access control, as iOS or Android do it, does not protect the privacy of the user, as it completely disregards what the apps actually do with that feature and when. I am dreaming of a system where an app might ask for permission to send messages and uses an API to do that, but where the user has complete control over when and to whom such messages are sent (as an example), and the app itself will never know whether the message was sent or whether that was just faked to it. Thus every app must be able to continue its work without expecting any of those features per se, but, in the spirit of progressive web apps, provide them as further enhancements IF the user allows them to be used (which can be revoked entirely or even on a case-by-case basis).

But again, discussion for another day!


I would say there is no need for the PAC file; I don’t use it.
Just tell Firefox or any browser to use the launcher as its proxy for any protocol, and you can access the .safenet network until you change that setting.

I wonder why the launcher DOES proxy http:// requests for anything that isn't .safenet, though. It blocks https, but not http. Does anyone know why this is?

Thank you, I really like your proposed solution.


I’d like to dive a bit more in this subject as I see a lot of potential for layering Safe on top of the public internet in ways that could make a real difference, right now, for a great amount of people.

I talked previously about the concept of Sidetalk, an app that lets anyone have a conversation linked to any URL or XOR address. While it is possible to build this kind of app as a native app, it would be much more practical to have it as a browser extension. You navigate the public internet, or SAFE; the extension’s icon lights up when it detects an ongoing conversation for the current address; you press it and a new tab opens on a safenet site where you can take part in the conversation. Sanitize all user input to render ineffective any link to the public internet inside Sidetalk and, as far as I understand, users' privacy and security are maintained.

Another example: SafeGraffiti, a browser extension that allows you to paint over any website (public or SAFE) and share it with the world. Again, the extension checks as you browse whether such graffiti exist; if they do, the icon lights up, you press it, and the extension overlays the graffiti on the website. Again, privacy and security are maintained, as the data from the user are just drawings with coordinates.

These kinds of apps are almost impossible to build right now; Safe opens up a whole new domain of possibilities at the fingertips of every developer on any kind of budget. This has the potential to make real changes.

Two more: allow any shop on the public internet to accept Safecoins as payment, or even allow any website to authenticate its users with their Safe id, etc.

From an adoption point of view this also allows us to leverage the immense quantity of content available on the public web to create experiences that are unique to Safe. It also helps a lot with the chicken and egg problem we’ll have without it. Attracting more people to the network increases security for everyone and makes the network that much more robust.

I understand this subject will be polarizing and I’d like to hear other thoughts on it. Worth it or not?

Full disclaimer: I’m biased, as I work on a simple extension that lets you write personal notes linked to a URL. It’s nothing fancy and pretty harmless; it won’t overthrow any government. But compared to other similar extensions, you own your data, which is a big step in the right direction. It could work only for Safe, but allowing it to work on the public internet makes it just that much more useful.

I can think of other simple examples, like an extension to store your bookmarks. Again, it’s pretty harmless, but it allows you to free yourself from all the other commercial cloud services.

Anyway, thoughts?