I think my position comes from the idea that the code of an autonomous application can, and must, be entirely reviewed. I assume the code is open. The author knows exactly what they put inside, and the community can check what happens under the hood. People can get the source, verify its integrity with a checksum, be sure of what they run, and have solid confidence in it.
In the case of an application run inside a browser, things are quite different: even if the application's source is well known and reviewed, the browser's code is so huge and complex that I seriously doubt anyone still has an idea of what happens behind the curtains. The browser has a variety of ways to leak things, to make unwanted connections, to store tracks and logs of every kind; scripts can access that stored evidence afterwards and send history or cached content wherever they like, even long after the web app is closed. The code evolves so fast that even the most trained and competent expert doesn't really know what happens inside, especially among the latest generation of developers, who don't seem to even get what privacy can mean.
I would love to be pointed to some Firefox or Chrome documentation or analysis that shows where the weaknesses are and how many there are, and that would give me evidence that once you duct-tape, say, 3 holes in Firefox, there are no other holes left to tape. How can you trust such a pile of self-updating scripts and plugins, when Chrome for instance is written by the world champion of data harvesting? Show me that we have precise and exhaustive knowledge of this browser's behaviour, and I may think differently.
My other concern comes from the fact that almost everyone uses their browser without any sort of safeguards. You said you use a dedicated profile, uBlock Origin, and a VPN tunnel. I do, too. I also use RequestPolicy to block external requests. I have Firefox erase cookies and history each time it shuts down, I use a custom hosts file to block everything that tries to track me, I fake the referer and user agent whenever possible, my macromedia folder is symlinked to /dev/null, I completely disabled the “smartbar” and search autocompletion, and even then I am not sure I haven't forgotten something.
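For concreteness, here is roughly what the relevant part of my user.js looks like. The pref names are from memory and tend to move around between Firefox versions, so take it as a sketch of the kind of duct tape involved, not a recipe:

```
// user.js – dropped into the Firefox profile directory, read at startup.
// Pref names are from memory and may differ between Firefox versions.

// Wipe cookies, history and cache every time Firefox shuts down
user_pref("privacy.sanitize.sanitizeOnShutdown", true);
user_pref("privacy.clearOnShutdown.cookies", true);
user_pref("privacy.clearOnShutdown.history", true);
user_pref("privacy.clearOnShutdown.cache", true);

// Never send a Referer header
user_pref("network.http.sendRefererHeader", 0);

// Present a bland, generic user agent
user_pref("general.useragent.override", "Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Firefox/31.0");

// No search autocompletion, no suggestions from the location bar
user_pref("browser.search.suggest.enabled", false);
user_pref("browser.urlbar.suggest.searches", false);
```

And that still only covers part of the list above; the hosts file, the symlinked macromedia folder and the extensions are separate layers on top of it.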
Almost nobody does that, or would want to do even half of it, for convenience reasons. At some point it appears to me that, just for convenience, an autonomous application is much simpler than checking and activating all those filters.
Let's just look at a simple scenario.
People have the “smartbar” in Firefox with its default behaviour: search autocompletion on.
They have a uBlock Origin rule, a new version of the launcher that proxies only .safenet links and prevents cross-origin requests. They even have plenty of other clever filters that I haven't thought of.
Now they want to visit "http://politics.safepage.safenet". They type "http://politics.safepage.safeneY" in the URL bar (← notice the typo).
What happens now in their smartly safeguarded browser? The browser gets a DNS lookup failure and falls back to a Google search. As our user has a pile of Google trackers in their browser's drawers, Google Inc. correlates this search with their identity and sells this shiny piece of political information to whoever is willing to pay the price.
See what I mean?
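For what it's worth, I believe the switch that governs this particular fallback is a single pref (again, the name is from memory and may have changed): when it is false, the location bar does not hand failed input over to the default search engine.

```
// One more line for user.js – stop the location bar from turning
// typos and failed lookups into search queries (pref name from memory).
user_pref("keyword.enabled", false);
```

Which rather proves the point: yet another obscure switch that almost nobody knows exists, let alone flips.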
I do agree that the browser way is the only way to create mass adoption. But it cannot be trusted, and it sends all the beauty of the Safe concept back to square one for 99% of users.
On the other hand, a simple, well-documented, reviewed open-source program does what it is asked, and only that.
This is where my concerns come from.