Redox OS + SAFE OS

Redox OS is written in Rust and follows the Unix philosophy. It has built-in URI support, which would be very useful for SAFE. There is a thread about Redox OS on /r/rust; I responded to the community as follows.

btrfs should be added second, imo.

Reason: the SAFE Network (MaidSafe).
If you do this, it will greatly help both SAFE and Redox OS. Btrfs is a suitable environment for providing resources to the network: it allows hot-swapping drives without having to drop the vaults.

I have been thinking about having a basic Rust kernel that performs the core duties required to establish the ecosystem: more secure, with fewer leaks. The SAFE Network would then provide the rest of the kernel scripts, or apps. It's like having a hypervisor inside the SAFE Network, with all of the apps running from that hypervisor, anonymously.

SAFE has a really cool concept: the unified data structure. It is an encrypted file that holds key/value pairs, and scripts can run when a user accesses the data. This means you could easily run an additional kernel inside SAFE. With your proposed URI scheme, it would look like this:


Have some daemon run it in the background.

Key/value pairs fit really well with URIs and other schemes.
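The key/value idea maps naturally onto a URI scheme. A minimal sketch of how such a lookup might work, assuming a hypothetical `safe://` scheme and an in-memory map standing in for the encrypted store (the real SAFE and Redox URL formats may differ):

```rust
use std::collections::HashMap;

// Hypothetical sketch: resolve a "safe://"-style URI against a key/value
// store. The scheme name and layout are illustrative, not the actual
// SAFE Network or Redox URL format.
fn resolve<'a>(store: &'a HashMap<String, String>, uri: &str) -> Option<&'a String> {
    // Strip the scheme, then use the remainder as the lookup key.
    let key = uri.strip_prefix("safe://")?;
    store.get(key)
}

fn main() {
    let mut store = HashMap::new();
    store.insert("apps/editor".to_string(), "chunk-id-1234".to_string());

    // A background daemon could service these lookups for the whole OS.
    assert_eq!(
        resolve(&store, "safe://apps/editor").map(String::as_str),
        Some("chunk-id-1234")
    );
    assert_eq!(resolve(&store, "http://apps/editor"), None);
    println!("lookups ok");
}
```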

None of the scripts run in SAFE affect your base OS, so your OS remains intact. If somebody managed to get your laptop, they wouldn't know what apps you're running, because everything you do is in SAFE. It requires account information plus a PIN, which can be accessed from anywhere in the world.

Also, account creation and login happen only through the Launcher, i.e. it is the SPOC (Single Point of Contact) for the SAFE Network. This means that no app gets to know user credentials; hence no app will ever have any knowledge of a user's MAID, MPID, and other keys, and none can tamper with a user's login session packet. That would otherwise be disastrous, as the session packet contains all the information needed to know everything about a user on the SAFE Network: signing keys, encryption/decryption keys, data, etc.

Think about a business with 1,000 computers/users, each taking about 8 GB to run Windows. That's 8,000 GB. In SAFE, only one copy is needed, and millions of users could access it. We could save tremendously on storage footprint.

Anyway, I didn't realize I'd turned this into a lengthy essay, but I hope it gives you a lot more creative ideas about what you can do by using SAFE for the additional OS pieces that shouldn't run on local machines for security reasons.

I figured I'd post it here to hear from the experts. It would be awesome to run a virtual operating system; every business or community could have its own OS.


I am personally still hopeful for a Coyotos revival. To me, SAFE is about privacy, and privacy is only achievable if you are confident your OS cannot be hacked. The idea behind Coyotos was threefold:

  • transition from monolithic kernel to microkernel
  • transition from access control lists (like file permissions) to capability-based security
  • write the OS in a safe low level language amenable to automated or semi-automated code verification

Being smaller, microkernels are likely to be a lot less amenable to hacking. For example, your drivers would run outside of the kernel and wouldn't have access to all the data on the system.

Capability-based security means that you can ring-fence parts of your system and actually verify that “none of these applications have any access to these files at all” or “none of these applications have access to the internet at all”.
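One way to picture capability-based security is with unforgeable tokens: an operation is only reachable by code that holds the right token, so "no file access at all" can be verified from an app's signature alone. A minimal sketch, with hypothetical type and function names (not from any real capability OS):

```rust
// Illustrative capability sketch: tokens cannot be constructed outside
// this module because their field is private, so only a trusted
// supervisor can grant them.
mod caps {
    pub struct FileCap(());
    pub struct NetCap(());

    pub fn grant_file() -> FileCap { FileCap(()) }
    pub fn grant_net() -> NetCap { NetCap(()) }
}

// Reading a file *requires* a FileCap; there is no other entry point.
fn read_file(_cap: &caps::FileCap, path: &str) -> String {
    format!("contents of {}", path)
}

// An app that is only handed a NetCap is verifiably unable to touch
// files: calling read_file here would not even compile.
fn sandboxed_app(_net: &caps::NetCap) -> &'static str {
    "ran without touching any files"
}

fn main() {
    let file_cap = caps::grant_file();
    let net_cap = caps::grant_net();
    println!("{}", read_file(&file_cap, "/etc/motd"));
    println!("{}", sandboxed_app(&net_cap));
}
```

Real capability kernels enforce this at the process boundary rather than the type system, but the ring-fencing property is the same.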

By automatic verification I mean that it should be possible to use theorem provers to verify certain properties about the code written. E.g. to an extent it should be possible to prove that your OS code does what you want it to do. Or in other words you both write code and describe its properties in another language and then you have an automated process which verifies that the code actually has the properties described.
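As a rough illustration of writing code and its properties separately: the sketch below states a property as its own predicate and then checks it exhaustively over a small input range. The exhaustive check stands in for what a theorem prover would establish for all inputs:

```rust
// The code under verification.
fn clamp_to_byte(x: i32) -> i32 {
    x.max(0).min(255)
}

// The property, stated separately from the implementation:
// the result always lies within [0, 255].
fn property_holds(x: i32) -> bool {
    let y = clamp_to_byte(x);
    (0..=255).contains(&y)
}

fn main() {
    // An exhaustive check over a sample range stands in for a proof.
    assert!((-1000..=1000).all(property_holds));
    println!("property verified on sample range");
}
```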

Rust is a safe low-level language, and one available today; however, I'm not sure how amenable it will be to these proving techniques. Besides, Rust is still under active development, which may present a difficulty of its own for building verification tools. Coyotos was to be written in BitC, a language similar to Rust but more amenable to automatic verification.

Unfortunately, both the BitC and Coyotos projects are on hiatus right now. However, I'd very much hope for them to wake up at some point. Until that happens, I don't have much hope of getting a hacking-resistant OS. And even then you'd probably want to run on open-source hardware to ensure you cannot be hacked from a level below your OS.

A lot of performance would need to be sacrificed for hack-resistance, as I'd expect both microkernel OSes and open-source hardware to be slower than their traditional counterparts, but I believe there will be people to whom it will be worth it.


You don’t really save anything.
You use a fairly huge amount of network capacity (think about a business where 1,000 computers/users each read about 8 GB to run Windows!), and once you download it, any network disconnection puts you at risk of data loss.
Then, in order to fix that, one gets the great idea to install a small (e.g. 8 GB) flash disk in the system and save state locally, which returns you to square one. Almost, that is, because you still spend time waiting for stuff to sync up.


Did you guys follow the Ubuntu Snappy Core project? It's a read-only core system separated from the app layer through the use of Docker/LXD containers.
I know Canonical shook things up a little from a security perspective with their Dash, but this wouldn't be a problem if the system only used the SAFE protocol.

It's still a young project and has yet to mature, I know, just as Docker still had some security issues a few months ago.


You do make a good point. I think it will encourage users to run lighter apps on a network that is likely to drop connections. Nobody would want to run an 8 GB monolithic kernel.

The Arch Linux repository system, along with its user repository, is amazing; it's a really nice build. What if we had repositories in SAFE? Each package would have a URI link, and one could cherry-pick packages and run them only in a specific time frame, or automatically. One could have three different types of OS: a traveling OS (selected jobs), a heavy-duty OS (lots of jobs / resource consumption), and a work OS (selected jobs). If you were in some random location, you wouldn't want to run the heavy-duty or work OS; the traveling OS would be suitable for the task. After all, packages are just tools, as is the virtual OS, and every tool has its own purpose. For highly sensitive data that should not be shared with anyone, it would be wise to run it in SAFE rather than on a local machine.

Okay, so your OS and apps are 1 GB.
With local storage, I have my 8 GB on disk.
With your network download, you have to download 1 GB every time you start your computer. Imagine the cost of bandwidth (or wasted time) for a 50-employee company that has to download its stuff every Monday morning (or keep the systems on over the weekend to avoid having to do that).
Add to that the cost of lost time for employees whose systems lose connectivity and crash.


I’ve seen big numbers given for OS loading on this thread, but even after you’ve shrunk this to 1 GB it seems much more than I’d expect.

Can you give some links to actual in-memory code sizes by OS/distro? Because I don't believe I'm loading a 1 GB Linux kernel when I boot. Maybe I am?

I often check that on the Pi that I use.
Obviously you can use a Pi (v1, which has 512 MB of RAM) as a desktop, but it won't be a pleasant experience, especially if you attempt to use heavy apps like Chrome or Excel; then you'll quickly need 2 GB.
In text mode the Pi runs okay in 200 MB or so, while a light desktop with a few apps breaks through 1 GB (and you need a disk drive for a swap file!).
You could use a server (VDI) to lessen the load, but in that case it'd be even more important to have protected storage attached to the server.

I’m not sure that really answers the question.

I mean, you're including non-program storage in the desktop figure, and adding large applications (again including non-code storage) to get over 1 GB.

I’m interested to know how much and how little needs to be downloaded to boot different OS. I don’t have time to investigate this, but without that information we can’t assess this idea.

This also dropped into my lap via Twitter just now, which is what brought me here. I’ve only skimmed it, but it seems it might be relevant:

A few MB is enough to bring up the kernel and network stack.
For a lightweight GUI plus Firefox you'd need to download at least 300 MB, I believe.
But unless you boot from an on-board chip (e.g. using PXE), even for 300 MB you need some sort of media, and for CD-ROM 700 MB is the smallest you can get (although it'd be better to buy the cheapest USB key; then you have a place to save data and eliminate the hassle of booting over the Internet).

Booting from the Internet means better security (assuming the boot image is kept up to date), but as long as any media is present, I think most people will prefer the convenience of booting from local media. Or even a local boot server (like a Raspberry Pi), if there are more of them and they have an IT guy who can support it. But how much value is there in that? Today enterprises buy VDI (Xen, VMware, etc.) because the cost of people is so much higher than the cost of software, and most CTOs understand that.

OT: USB stick that can destroy your PC: This USB stick will fry your computer within seconds • Graham Cluley

OK, so the OP's idea is feasible and takes relatively little to boot, and nowhere near the figures you used to undermine it, even after revising them down by a factor of 12!

I’d still like to see actual figures rather than hand waving, but I think this shows that the idea has merit, at least for some cases.

But when applications begin to be designed for this environment, and to be loaded directly from the SAFE Network because of the obvious advantages for users (such as security, and freeing up disk space they can use for farming), this could really take off.

I don’t understand why you, when you obviously do understand the technology, responded so negatively to the OP using such wildly inaccurate claims.

Well, I didn't “undermine” the sizing; I left some “buffer”, because you can't use the most optimistic case, then deploy, and only then realize you need to download an extra 200 MB, which doubles your wait time.

A simple example: the Firefox DEB package is 72 MB, and on disk (I just checked on Windows) it's 100 MB. If you need the JRE (which an organization with “1,000 users” (quoting Grizmoblust) almost certainly does), add 150 MB. As you keep adding, you quickly hit 1 GB.

The bottom line is that it doesn't matter how large or small the “core” OS is. An enterprise browser (Firefox ESR) and Java are examples of the bare minimum, and once a few other essential enterprise apps (like messaging and groupware) are added, it would be suicidal to size for less than a 1 GB download.

Edit: a casual check of the Raspbian boot image for the Pi shows 1.3 GB, and uncompressed it's three times as much (4 GB). The pathetic NOOBS Lite image is 760 MB (so probably around 2 GB uncompressed). The engineering cost of trimming down whatever can be trimmed would be high, and after Firefox, Java, and a few other basic enterprise packages are added, it would still be way over 1 GB (more likely 2 GB).
At the same time, a Kingston 8 GB microSDHC Class 4 flash memory card (which can run everything locally without the hassle, heavy bandwidth use, and risk of data loss and downtime) sells for $3.92.

The reason your figures are so inflated is that you are answering a different question to the one I asked, without pointing that out, or why.

I’m well aware that applications will add to the amount needed to download, but you again ignore the point I made about apps built to run off SAFE, and continue to write figures for applications that were built with the “local disk is cheap and plentiful” ecosystem in mind. It’s easy to shoot ideas down if you make assumptions like this, but much harder and far more useful for the project, to make constructive contributions.

You consistently choose to reply making assumptions that either attack the proposition someone wants to discuss, or attack some aspect of SAFE network itself. It would be nice if your posts were constructive - offering encouragement and possible ways a suggestion might be made viable - or post something positive about SAFE network.


very fast access to recent data, and in some cases, I believe, faster than a local HDD (ref: Popular Get Request - #4 by dirvine).

So loading popular apps and OS from SAFE may be comparable, or even faster than loading from local disk.

That makes the OP entirely feasible, and is a definite incentive for people to ditch their local OS/program storage and load stuff from the network.

Locally stored code is also vulnerable to virus infection, whereas reference copies of code on the network can't be infected, so there are also security benefits.


Yes, and on the network you are retrieving immutable chunks which are free of any changes (viruses etc.), which is a nice security property. You can see where this will end up over time :wink: It's why I keep playing with microkernels and loading everything else (including drivers) from a SAFE network. It seems to make sense even for non-persistent vaults. Oh, time, I hate time :smile:


Perhaps you forgot what the topic was and what I was commenting on.

So what I said was that the approach is cumbersome and impractical, and that savings compared to other available approaches would be none.

Then you steer away from that and start selling the vision. I get the vision thing, but once again, the idea of savings mentioned by the topic owner is invalid, that's all. If you plan to “get there” in 2020, why bother with Redox OS, which may not even survive until then?

If we're considering possible savings that SAFE can realize for enterprises, mid-size companies, or universities, then what one should really consider is practical solutions for the Windows and Apple platforms.

There is certifiably zero saving in using a 1 GB (if you can still buy one) USB stick or other form of flash storage to boot from the SAFE Network, and then dealing with all the little issues that won't be solved until 2020, versus using a 16 GB USB stick and 6U worth of converged architecture that holds the entire enterprise stack (open or closed source) and storage and makes backups to the cloud.

Has anyone in this whole topic wondered just how an enterprise that uses SAFE can audit data storage and protect its customers' and its own information? What about backup and DR? What happens if they lose their internet connection? How does file sharing work in this system?
Of course, all that and more is neglected, because you're talking about vision while at the same time complaining that I don't offer concrete details. Funny how that works.

  • No one said organizations don't or shouldn't use thin clients and terminal services. Perhaps you're not in this space, but Citrix, Microsoft, Wyse, and even various open-source projects (Linux Terminal Server) solved this problem a decade ago. You boot from a small device or PXE, and the rest is code and apps signed by the vendor.
  • There is no security benefit, not only because it's all already available, but also because there is no way (and there won't be for years) to perform all of the minimally required IT security duties such as backup, recovery, DR, auditing, per-file encryption, log processing and analysis, etc.

Yes, the only question is when and in which OS.
In the meantime, it would be great if someone could explain how file sharing (such as NFSv3) and access control (ACLs tied to single sign-on) could work in such an environment.

I’m confused, what are we discussing?

Distributed processing is a far-future plan for SAFE.

I’m not sure what you guys think a distributed OS is, but you’re going to need a client either way to connect to the network.

Are you guys talking about distributed calculations?

Or are you thinking that the network will process everything up to the bitmap display of your desktop, which it then sends back to you?

I don’t know either for sure…

But there is no reason you couldn't load a VM from SAFE, and, as David says, you can be sure it is not corrupted (because the files wouldn't match their hashes if they were).
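The content-addressing argument sketches out in a few lines: a chunk is identified by the hash of its contents, so any modification changes the identifier and the chunk is rejected. Here `DefaultHasher` is a non-cryptographic stand-in for the hash a real network would use:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative content-addressing check. DefaultHasher is deterministic
// across calls within a build, which is enough for this sketch; a real
// network would use a cryptographic hash.
fn chunk_id(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// A chunk is accepted only if its contents still hash to the id it was
// requested by, so tampering (e.g. virus injection) is detected.
fn verify_chunk(expected: u64, data: &[u8]) -> bool {
    chunk_id(data) == expected
}

fn main() {
    let original = b"kernel image chunk";
    let id = chunk_id(original);

    assert!(verify_chunk(id, original));
    // A single changed byte is detected.
    assert!(!verify_chunk(id, b"kernel image chunk!"));
    println!("chunk integrity checks passed");
}
```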

Cross-posted from Reddit:

Redox is a Rust-based operating system. The authors have built the kernel in Rust and somehow manage to run an actual GUI.

I think that this could very well be a perfect harmony with the SAFE network. The SAFE network core is completely in rust and should mesh quite nicely into Redox.

Unfortunately, I don't know anything about developing operating systems, so perhaps my thoughts on the matter are a bit unfounded, but what I am imagining is vault integration at the kernel level. The OS could be designed so that whatever computing device is being used simply acts as a “terminal”, with the SAFE Network supercomputer acting as the “mainframe”.

Interestingly, all user permissions could be tied to the user's SAFE identity. All applications and settings could be associated with each user and called up dynamically from the SAFE Network the same way files are. If done correctly, there should be no need at all for each person to have a computer that only they use: any user of the SAFE OS should be able to sign in to any SAFE OS machine and have all of their settings immediately.
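The "settings follow the identity" idea can be sketched like this, with a hypothetical `Profile` type and an in-memory map standing in for the network (none of these names come from the actual SAFE API):

```rust
use std::collections::HashMap;

// Hypothetical profile record: a user's apps and settings, stored under
// their network identity so any machine can restore them on sign-in.
#[derive(Debug, PartialEq, Clone)]
struct Profile {
    apps: Vec<String>,
    wallpaper: String,
}

// Stand-in for a network fetch, keyed by identity.
fn fetch_profile(network: &HashMap<String, Profile>, identity: &str) -> Option<Profile> {
    network.get(identity).cloned()
}

fn main() {
    let mut network = HashMap::new();
    network.insert(
        "user-maid-0001".to_string(),
        Profile { apps: vec!["editor".into(), "browser".into()], wallpaper: "dunes".into() },
    );

    // The same call works from any machine: the profile follows the identity.
    let p = fetch_profile(&network, "user-maid-0001").expect("known identity");
    assert_eq!(p.apps.len(), 2);
    println!("restored settings: {:?}", p);
}
```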

Can't wait until we can start testing some applications.