The focus here is a secure OS for the SAFE internet, without which SAFE can never be truly safe.
Computer security (especially mobile security) is abysmal. SAFE is an important part of the solution, but the NSA can still just hack into your phone and steal your password as you type it. We need to really own the box, and Genode could complement the SAFE network by serving as the foundation for that.
Think along these lines:
- your webcam, microphone, screen, keyboard, storage, GPS, wifi, network in general, etc. are literally nonexistent to apps (and even "system services," whatever that means) that were not explicitly granted access to them; see the toy sketch right after this list
- you can set up an app so that access to e.g. the webcam or the microphone has to be confirmed every time (via an unforgeable popup); see a more detailed example near the end
- passwords are always entered through a very simple and very secure custom keyboard app, which is verified and trusted; you can use Swype or whatever you want for everything else
- there's no way for an app to sit between others and the display module without explicit authorization
- resource limits can be enforced at every level (e.g. the outgoing data bandwidth used by a messenger app)
- hard real-time scheduling
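To make the first point a bit more tangible, here is a toy C++ sketch (all names are made up, and it has nothing to do with Genode's actual API) of what "nonexistent unless granted" means: an app only ever holds the handles its parent explicitly handed to it, so there is nothing there to hack around.

```cpp
// Toy sketch only: NOT Genode's real API, just a model of the idea that a
// capability is an unforgeable handle handed down by a parent, and a device
// you were never granted simply does not exist from your point of view.
#include <iostream>
#include <memory>
#include <optional>
#include <string>

struct Webcam {
    std::string snap() const { return "<jpeg bytes>"; }
};
using WebcamCap = std::shared_ptr<const Webcam>;   // "capability" = handle only the parent can mint

// An app is constructed with exactly the capabilities its parent granted;
// there is no global device registry it could enumerate or probe.
struct App {
    std::string name;
    std::optional<WebcamCap> webcam;   // granted, or simply absent

    void try_snapshot() const {
        if (!webcam) {
            std::cout << name << ": as far as I can tell, no webcam exists\n";
            return;
        }
        std::cout << name << ": got frame " << (*webcam)->snap() << "\n";
    }
};

int main()
{
    auto webcam = std::make_shared<const Webcam>();

    App camera_app {"camera-app", webcam};        // explicitly granted
    App messenger  {"messenger",  std::nullopt};  // never granted

    camera_app.try_snapshot();   // works
    messenger.try_snapshot();    // cannot even name the device
}
```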
Enter Genode
Genode is an insanely awesome project I've been (very) loosely following for a few years.
NOTE: I had to edit the following paragraph to correct a confusion I had about the structure of Genode. For more detailed info, check out the book (from the bottom of page 11: "Clean slate approach.")
It is an "operating system framework" that sits on top of a secure microkernel (e.g. seL4, another personal favorite) and orchestrates a hierarchical network of tiny modules. Each module provides a single service to those above it while using the smallest possible set of modules from below, and each is responsible for a certain amount of resources, which it can multiplex between its children or trade with other modules as part of acquiring or providing services. Modules can communicate only with the other modules they are explicitly allowed to (the concept of "least privilege"). This can dramatically reduce the trusted computing base (the "attack surface") for a given piece of functionality, e.g. an encryption module or the different stacks (networking, VFS, etc.)
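To sketch what that hierarchy might look like in code (again a toy model with invented names, not Genode's real interfaces), a parent hands each child a slice of its own RAM quota plus a fixed routing table of services; anything not routed is simply unreachable:

```cpp
// Toy model of the hierarchical idea: a parent splits its resource quota
// among its children and routes service requests only along explicitly
// configured edges. The names ("Nic", "File_system", "Gui") are just labels here.
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

struct Service { std::string name; };

struct Component {
    std::string name;
    std::size_t ram_quota;                    // bytes handed down by the parent
    std::map<std::string, Service*> routes;   // the only services this child can reach

    Service *request(const std::string &service) {
        auto it = routes.find(service);
        if (it == routes.end())
            throw std::runtime_error(name + ": no route to '" + service + "'");
        return it->second;
    }
};

int main()
{
    Service nic {"Nic"}, fs {"File_system"}, gui {"Gui"};

    // The parent decides, per child, which services are reachable and how
    // much of its own RAM quota each child receives.
    Component browser {"browser", 64u << 20,
                       {{"Nic", &nic}, {"File_system", &fs}, {"Gui", &gui}}};
    Component signer  {"signer",   4u << 20, {{"Gui", &gui}}};   // no network, no disk

    browser.request("Nic");                  // fine, the route exists
    try { signer.request("Nic"); }           // the signer cannot even ask for it
    catch (const std::exception &e) { std::cout << e.what() << "\n"; }
}
```

The `signer` child above is essentially the email-signing example quoted below: its trusted computing base is only what it was explicitly routed, nothing more.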
(Here's some more explanation and an example from the Genode site.)
> On Genode, the amount of security-critical code can largely differ for each application depending on the position of the application within Genode's process tree and the used services. To illustrate the difference, an email-signing application executed on Linux has to rely on a TCB complexity of millions of lines of code (LOC). Most of the code, however, does not provide functionality required to perform the actual cryptographic function of the signing application. Still, the credentials of the user are exposed to an overly complex TCB including the network stack, device drivers, and file systems. In contrast, Genode allows the cryptographic function to be executed with a specific TCB that consists only of components that are needed to perform the signing function. For the signing application, the TCB would contain the microkernel (20 KLOC), the Genode OS framework (10 KLOC), a minimally-complex GUI (2 KLOC), and the signing application (15 KLOC). These components stack up to a complexity of less than 50,000 LOC.
Access is defined by capabilities, so if a module is not authorized to access another, it can't even address (that is: see) it. This is one of the many reasons why capability-based authorization is so cool (another is that it's not subject to the confused deputy problem, unlike ACL-based access control.)
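For the confused-deputy point, here is a deliberately simplified sketch (nothing Genode-specific, everything invented for illustration): a capability-style "deputy" can only act through the handle its client passed in, so it has no ambient, path-based authority of its own that a malicious client could trick it into misusing.

```cpp
// Toy illustration of why capabilities dodge the confused-deputy problem:
// the deputy (a compile service) writes only through the file handle its
// client supplied; it never resolves a path with its own privileges.
#include <iostream>
#include <string>

struct File {
    std::string path;
    void write(const std::string &data) {
        std::cout << "writing to " << path << ": " << data << "\n";
    }
};

// The deputy receives the authority itself (the File capability), not a
// name it would have to resolve using its own, possibly broader, rights.
void compile(File &output) { output.write("<object code>"); }

int main()
{
    File user_file {"/home/user/prog.o"};   // the only file this client can name
    compile(user_file);                     // the deputy acts with the client's authority

    // In an ACL/path-based design the client could ask the deputy to write
    // to a privileged log file and the deputy's own rights might allow it;
    // here the client simply holds no capability to such a file to pass along.
}
```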
Incidentally, this structure would also be perfect for running the SAFE vault and the apps, each in its own little sandbox, but that's just a small part of the deal: we'd get all of the benefits I outlined at the top of this post as well.
Basically, we could have a single module, small enough to be virtually (or even verifiably) bug-free, to handle our authorization settings, and, as long as this module is intact, we could be sure we are practically invulnerable: nothing could access stuff we didn't authorize.
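Just to give a feel for how small such a module could be, here is a hypothetical sketch (structure and names invented for illustration) of a policy component that every sensitive access would be funneled through; something this short is realistic to audit line by line.

```cpp
// Hypothetical sketch of a tiny, central authorization module. Every access
// to a sensitive resource goes through check(), so this is the only code
// that has to be correct for the policy to hold.
#include <iostream>
#include <set>
#include <string>
#include <utility>

enum class Decision { Grant, Deny, AskUser };

const char *to_string(Decision d)
{
    switch (d) {
    case Decision::Grant:   return "grant";
    case Decision::Deny:    return "deny";
    case Decision::AskUser: return "ask user";
    }
    return "?";
}

struct Policy {
    std::set<std::pair<std::string, std::string>> granted;   // (app, resource) pairs
    std::set<std::string> always_ask;                         // "paranoid mode" resources

    Decision check(const std::string &app, const std::string &resource) const {
        if (always_ask.count(resource))     return Decision::AskUser;
        if (granted.count({app, resource})) return Decision::Grant;
        return Decision::Deny;              // default: the resource stays invisible
    }
};

int main()
{
    Policy p;
    p.granted    = {{"camera-app", "webcam"}};
    p.always_ask = {"microphone"};

    std::cout << to_string(p.check("camera-app", "webcam"))     << "\n";  // grant
    std::cout << to_string(p.check("messenger",  "webcam"))     << "\n";  // deny
    std::cout << to_string(p.check("camera-app", "microphone")) << "\n";  // ask user
}
```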
So, if the NSA wants to spy on your camera, they would have to target an app with access to the webcam, and even then you could just default to paranoia and tell the camera module to prompt for authorization every time something wants to snap a shot. Even if they stole such a shot, it's unlikely the camera app would have access to the network stack, right? In fact, it couldn't even see there's a network! You want to share something on Instagram? There would be a tiny and verified IG API client module to handle that, with internet access only to the IG servers; no luck for the NSA again.
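The IG-client idea could look something like this in the same toy style (the hostname and the allowlist mechanism are purely illustrative): the network capability handed to the module is pre-filtered, so every other destination is unreachable by construction rather than by a firewall rule someone has to maintain.

```cpp
// Toy sketch: the IG client module never sees a general-purpose network,
// only a pre-filtered one that can reach an allowlisted host and nothing else.
#include <iostream>
#include <set>
#include <string>

struct FilteredNet {
    std::set<std::string> allowed_hosts;

    bool connect(const std::string &host) const {
        if (!allowed_hosts.count(host)) {
            std::cout << "unreachable: " << host << "\n";
            return false;
        }
        std::cout << "connected: " << host << "\n";
        return true;
    }
};

int main()
{
    FilteredNet ig_only {{"api.instagram.com"}};   // hostname purely illustrative
    ig_only.connect("api.instagram.com");          // the only thing the module can reach
    ig_only.connect("exfil.example.com");          // blocked by construction
}
```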
As a comment, let me note that this idea is a generalization of the "virtual machine," and as such it can be used (as in: "it's already implemented") as a virtualization platform to give access to existing software, running on a variety of operating systems: