The future of operating systems

Strong typing and accessibility

The POSIX standard is mainly composed of two parts: C APIs, and the shell.

Configuring a system is the same as programming

Imagine a program named my_server. When it is launched, my_server reads the configuration file /etc/my_server.conf, parses it as some sort of JSON-like syntax, and looks for the values of the fields named ip_address and port. If the file is missing, the parsing fails, the fields are absent, the values are malformed, or the port is already occupied, the program stops with an error code. my_server isn't particularly atypical: many programs in the Unix world work like that.
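The startup logic described above can be sketched as follows. This is an illustrative mock-up, not real my_server code: the function name load_config and the exact validation rules are assumptions for the example.

```python
import json
import socket
import sys

def load_config(path):
    """Read and validate a my_server-style configuration file.

    Raises an exception if the file is missing, the content is not
    valid JSON, or the ip_address/port fields are absent or malformed.
    """
    with open(path) as f:          # fails if the file is missing
        config = json.load(f)      # fails if the syntax is invalid
    ip = config["ip_address"]      # fails if the field is absent
    port = config["port"]
    if not isinstance(port, int) or not 0 < port < 65536:
        raise ValueError(f"malformed port: {port!r}")
    return ip, port

def main():
    try:
        ip, port = load_config("/etc/my_server.conf")
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind((ip, port))      # fails if the port is occupied
    except (OSError, ValueError, KeyError, json.JSONDecodeError) as e:
        print(f"my_server: {e}", file=sys.stderr)
        sys.exit(1)                # stop with an error code
```

Every failure path funnels into the same outcome, an error code at startup, which is exactly the fragility the rest of this section is about.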

Content addressing and public-key infrastructure

One of the major ongoing trends in computer science today is the shift from trusted to untrusted content and infrastructure. I invite you to look at the corresponding Wikipedia article if you aren’t familiar with content-based addressing.

The solution

The principle of content-based addressing is to identify a program by the hash of its content. The idea is as follows: the user asks the operating system to execute the program AQG7hwuCBBtHUkaHerKqWTGTmUeYreZMrQCBtq9YV34Z (in this example, the base58 encoding of a 32-byte hash), and the system is then responsible for finding the program whose hash matches this value, most likely through a DHT, and then executing it.
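A minimal sketch of this scheme, using SHA-256 as the hash and a base58 encoding like the one in the example identifier above. The in-memory ProgramStore class is a toy stand-in of my own for the DHT; its publish/fetch interface is an assumption, not part of any real system.

```python
import hashlib

# Bitcoin-style base58 alphabet (no 0, O, I, or l).
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    """Encode bytes as base58, as in the article's example identifier."""
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = BASE58_ALPHABET[rem] + out
    # Each leading zero byte is conventionally encoded as a leading '1'.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def program_id(binary: bytes) -> str:
    """Identify a program by the base58 encoding of its 32-byte SHA-256 hash."""
    return base58_encode(hashlib.sha256(binary).digest())

class ProgramStore:
    """Toy stand-in for the DHT: maps identifiers to program contents."""
    def __init__(self):
        self._programs = {}

    def publish(self, binary: bytes) -> str:
        ident = program_id(binary)
        self._programs[ident] = binary
        return ident

    def fetch(self, ident: str) -> bytes:
        # A real caller would re-hash the result and check it against
        # `ident` before executing anything.
        return self._programs[ident]
```

The key property is that the identifier alone is enough to verify whatever comes back from an untrusted peer: re-hash the bytes, compare against the identifier, and no further trust in the infrastructure is needed.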

Cross-platform binaries

Using content-based addressing poses a problem, however: a binary running natively on x86 can't be the same as a binary running natively on ARM (unless you merge the code of every single platform that exists into the same file, which I'm going to assume is not practical). Since they aren't the same file, they don't have the same hash either.

Programs rarely end

Unix is based upon the idea of small programs doing small tasks. While this is true for utilities such as grep, sed, ls, etc., and used to be true, for example, of the Apache web server spawning one process per request, it simply isn't true anymore once you're not a user actively working in a shell.


I can’t help but see the rise of Docker containers, and of virtualization in general, as a consequence of the failure of existing operating systems to properly isolate programs.

What is a file system anyway?

It is legitimate to ask the question: why do we even need the concept of files? After all, files are an abstraction over the data found on storage devices such as hard-disk drives, and other abstractions over that data could exist.

Predictable environments

I want to briefly mention that, in addition to security, the other selling point of Docker is the ability to execute programs in a predictable environment. A Docker image embeds all the programs and shared libraries necessary for the desired program to function properly.
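As a concrete illustration, a Dockerfile for the hypothetical my_server from earlier might look like the sketch below; the base image, paths, and file names are all invented for the example.

```dockerfile
# Pin the base image so the environment never changes underneath us.
FROM debian:bookworm-slim

# Embed the program and the exact configuration it expects.
COPY my_server /usr/local/bin/my_server
COPY my_server.conf /etc/my_server.conf

CMD ["/usr/local/bin/my_server"]
```

Everything the program needs ships inside the image, which is precisely what makes the environment predictable.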


All the design proposals presented in this article are fully or partially implemented in my redshirt operating system project.

  • Monolithic vs micro-kernel vs your-favourite-denomination-kernel. This is one of the ultimate nerd debates, and I don’t think that the answer to this question is actually very important.
  • What about the GPL? The magic of content-based addressing makes it harder to maintain an open-source-only system. There might be solutions to that, but it’s off-topic.
  • Precisely which environment is offered to programs, and how to make programs talk to the kernel or to each other. The point of this article isn’t to decide which system calls are available, or whether to pass messages through shared memory, but to set the general intention that the overall architecture should be simple, and misuse should be difficult or impossible.


