The future of operating systems

tomaka
12 min read · Feb 11, 2021


Linux was first started in 1991, 29-and-a-half years ago. The POSIX standard, which Linux implements, was started in 1988, 33 years ago. Both of these technologies have seen very little evolution in the core of their design over time, despite all the progress made in software engineering and computer science.

In this article, I will examine, topic by topic, why all of the currently popular operating systems are, in my opinion, obsolete, and what to replace them with.

Disclaimer: while I know a thing or two about computers, I am in no way an expert in operating systems. I’ve never been interested in exploring a wide range of operating systems. Their design feels wrong to me, for the reasons laid out in this article. There will probably be unintended exaggerations and inaccuracies, and I apologize in advance for them.

Strong typing and accessibility

The POSIX standard is mainly composed of two parts: C APIs, and the shell.

It might be hard to realize in 2021, while navigating a website with your browser running on top of a desktop environment, but Unix systems are extremely tied to the C programming language.

Still present today, the /usr/lib and /usr/include paths contain C-compatible libraries and C/C++ headers. Your favourite package manager is most likely capable of installing additional libraries and headers in these paths. It is no exaggeration to say that the package manager of the C and C++ languages is the Unix/Linux operating system itself. All packages are under one global namespace and are installed almost at the root of your file system.

Both the C programming language and the shell have a rather weak type system. In the C language, functions exchange integers and memory pointers. Some of these integers are file descriptors, and since everything is a file, file descriptors can represent anything and there is no guarantee that reading, writing or calling ioctl will succeed. For the shell, program names refer to files in the file system, and exchange bytes on stdin/stdout. For these reasons, writing a C program that compiles offers little guarantee that it runs as expected. Writing a shell command (without testing it) offers little guarantee that it runs as expected, as many people writing CI scripts can probably attest.

In the same vein, a lot of programs in the Unix world start by reading configuration files, sometimes during the initialization of your system (such as GRUB or your X11/Wayland server). It is often difficult, if not impossible, to verify ahead of time that a configuration file has been correctly written. Just like with C and the shell, writing a configuration file without trying it offers little guarantee that it actually works.

This world, built upon weak typing and interfacing, is to me the reason why software engineering feels so brittle today. Despite being reasonably computer-literate, when I need to fix a problem on my Linux system (which, I must say, I tweak as little as possible, precisely to avoid problems), I often find myself blindly following instructions from random people on the Internet asking me to run commands or modify configuration files in ways I don’t understand.

The typical response to this from many members of the community is the very frustrating “you should have read the documentation”, insinuating that it’s your fault for not spending huge amounts of time getting familiar with every single little arbitrary detail of every single program running on your machine. As if having a machine that runs were an end goal in itself, rather than just a tool that should never get in the way, like good tools do.

While slightly off-topic, I can’t help but see here a similarity with people who argue that the C programming language is suitable for writing safety-critical programs, and that the dozens of critical buffer overflow exploits that are discovered every year in C/C++ programs are the fault of the engineers who wrote the code rather than a systemic issue.

The extreme complexity of a Linux system, combined with how easy it is to screw things up, makes hacking (as in tinkering) extremely inaccessible for most people. According to the open source philosophy, users should be in full control of their machine, and I believe that simplicity and accessibility are critical to achieving this goal. What difference is there, really, between closed-source software and open-source software whose source code is too complicated to understand? Who are we to say that you shouldn’t start hacking your system without a decade of experience in system programming?

To me, a modern operating system must be accessible. Anyone should be able to make their machine behave the way they wish. This is achieved in two ways: making the overall system simple to understand, and preventing and reducing the negative consequences of mistakes. Inexperienced hackers shouldn’t live in constant fear that an accidental mistake will prevent their operating system from starting.

Configuring a system is the same as programming

Imagine a program named my_server. When it is launched, my_server reads the configuration file named /etc/my_server.conf, parses it as some sort of JSON-like syntax, and tries to find the values of the fields named ip_address and port. If the file is missing, the parsing fails, the fields are missing, the values are malformed, or the port is occupied, the program stops with an error code. my_server isn’t particularly atypical. Many programs in the Unix world work like that.

Now imagine writing a program in a modern programming language. You import a third-party library named my_server, then call my_server::start, passing the function two strongly-typed parameters: ip_address (as a uint8_t[4]) and port (as a uint16_t). If the port is already occupied, the server returns a strongly-typed error enum or throws an exception.
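
To make this concrete, here is a minimal Rust sketch of that second scenario. The my_server module below is hypothetical and only illustrates the shape of such an API; the point is that the types themselves rule out most ways of getting the configuration wrong:

```rust
// Hypothetical `my_server` library: the strongly-typed signature makes
// invalid configurations unrepresentable, so there is no configuration
// file to mis-parse.
mod my_server {
    pub enum StartError {
        PortAlreadyInUse,
    }

    pub fn start(ip_address: [u8; 4], port: u16) -> Result<(), StartError> {
        // ... actual server logic elided ...
        let _ = (ip_address, port);
        Ok(())
    }
}

fn main() {
    // It is impossible to even attempt a start without a well-formed
    // address and port; the remaining failure mode is a typed error.
    match my_server::start([127, 0, 0, 1], 8080) {
        Ok(()) => println!("server running"),
        Err(my_server::StartError::PortAlreadyInUse) => eprintln!("port already in use"),
    }
}
```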

What is the difference between these two situations? In terms of logic, the same thing is achieved: my_server runs. In terms of robustness, however, it is night and day. In the second solution, the possibilities of failure are considerably reduced, and the validity of the program can be checked ahead of time. It is not possible to even try to start my_server without passing its parameters properly.

What I would like to highlight is not that we should replace all configuration files with function calls, but that configuring a system is similar to writing a program. Except that in the Linux world right now, it is done in one of the worst programming languages you can imagine: a spaghetti of files, weakly typed, and full of implicit magic. Programming languages in general have evolved over time to become easier to use and to offer more guarantees about their correctness. None of these advances apply here.

While I took the example of a function call, I believe that declarative programming is the most appropriate tool for configuring a system. More concretely, I believe that a system should be described by a single declarative-style program, similar to what NixOS does, for example. NixOS, however, doesn’t go far enough. The validity of this system-wide configuration should be fully verifiable ahead of time. To borrow one of Haskell’s slogans: “if it compiles, it works”. It should simply be impossible to set up your system in a way that can malfunction.

Programs should stop relying on configuration files altogether, and more generally on the presence of certain files on the file system. Relying on the shared global state that the file system represents goes against mathematical purity. More on that later. Instead, all the parameters and configuration that a program needs to run should be passed as input. In order to permit ahead-of-time verification, programs must provide metadata that makes it possible to verify that the input is potentially correct. Launching a program should offer the same guarantees as calling a function in a strongly-typed programming language does.
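
As a purely hypothetical sketch (all type and field names below are invented for illustration), a fully-typed, declarative system description could look something like this in Rust:

```rust
use std::net::Ipv4Addr;

// The whole system is described as a single, strongly-typed value.
struct SystemDescription {
    hostname: String,
    services: Vec<Service>,
}

struct Service {
    // The program is identified by the hash of its content
    // (see the next section on content addressing).
    program_hash: [u8; 32],
    listen_address: Ipv4Addr,
    listen_port: u16,
}

fn main() {
    // If this compiles, the description is structurally valid: no missing
    // field, no stringly-typed port, no malformed IP address.
    let _system = SystemDescription {
        hostname: "my-machine".to_string(),
        services: vec![Service {
            program_hash: [0; 32],
            listen_address: Ipv4Addr::new(192, 168, 1, 10),
            listen_port: 8080,
        }],
    };
}
```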

Content addressing and public-key infrastructure

One of the major ongoing trends in computer science today is the shift from trusted to untrusted content and infrastructure. I invite you to look at the corresponding Wikipedia article if you aren’t familiar with content-based addressing.

A considerable amount of effort is being spent right now by distribution maintainers running build servers, tracking updates, and hosting binaries that end users can then download. This infrastructure serves one objective: providing a source of trust. When you run your_favourite_package_manager install, you trust the distribution maintainers that the files you download aren’t malicious and will not break your system.

This trust is necessary because running a malicious binary can easily break your system and/or steal your secrets. More on that later. Unfortunately, it comes at a cost: as a software developer, integrating a program into a package manager involves a lot of bureaucracy.

I personally believe that this situation has strongly contributed over time to the growth of the web as a platform. On the web, one can update their website as often as they like, and it is as easy for users across the world to access it as it is to run your_favourite_package_manager install, except without requiring prior approval from distribution maintainers.

The solution

The principle of content-based addressing is to identify a program by the hash of its content. The idea is as follows: the user asks the operating system to execute the program AQG7hwuCBBtHUkaHerKqWTGTmUeYreZMrQCBtq9YV34Z (in this example, the base58 encoding of a 32-byte hash), and the system is then responsible for finding the program whose hash matches this value, most likely through a DHT, then executing it.

From the point of view of a developer, distributing your work instantly becomes very easy, easier than deploying a website: connect to the DHT, and publish your program on it. Anyone in the world can then execute it by knowing its hash.
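
As a sketch of what the publishing side could look like, this is how a program’s content-based identifier might be derived, assuming the sha2 and bs58 crates and the SHA-256/base58 combination used in the example above (my_program.wasm is an arbitrary file name; the DHT publication itself is left out):

```rust
use sha2::{Digest, Sha256};

/// The hash of the program's content *is* its identity.
fn program_identifier(binary: &[u8]) -> String {
    let hash = Sha256::digest(binary);
    // Base58-encode the 32-byte hash into a human-displayable string.
    bs58::encode(hash).into_string()
}

fn main() {
    let binary = std::fs::read("my_program.wasm").expect("binary not found");
    // Publishing amounts to announcing this identifier on the DHT;
    // running amounts to handing this string to the operating system.
    println!("{}", program_identifier(&binary));
}
```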

This idea relies on it being safe to execute any program on your machine, without the risk of the program compromising it. Web browsers do that (or at least try as hard as possible), so why couldn’t operating systems?

This hash-based system, however, isn’t very convenient. No software company is going to run an advertisement saying “Try our new product, AQG7hwuCBBtHUkaHerKqWTGTmUeYreZMrQCBtq9YV34Z!”. For better ease of use, to guarantee a minimum level of software quality, and to avoid phishing attacks, a source of trust would still exist to tie recognizable names to program hashes.

Similarly, the hash alone also isn’t enough for programs that need access to critical parts of the system to function, such as graphics drivers. Since it isn’t possible to be resilient to malicious programs of this category, a source of trust is needed here as well. This source of trust could be the same as right now, through the curation offered by distribution maintainers.

Cross-platform binaries

Content-based addressing has, however, a problem: a binary running natively on x86 can’t be the same as a binary running natively on ARM (unless you merge the code for every single platform that exists into the same file, which I’m going to assume is not practical). Since they aren’t the same file, they also don’t have the same hash.

While this isn’t a fundamental problem, I’d like to make this more convenient by revisiting the old idea of “write once, run everywhere”. The same binary, expressed in an intermediary language, could be run on any platform through an interpreter or a JIT compiler.

Rather than Java, a better and more recent candidate for this intermediary language is WebAssembly. Initially conceived for the web, it was designed with exactly the characteristics we want in mind: it is architecture-agnostic, operating-system-agnostic, and makes zero assumptions about its environment. It makes so few assumptions about its environment that it doesn’t even assume that it is running within a web page.

The Wasi interface, initiated by Mozilla, is exactly this: an extension of WebAssembly based upon the idea of running WebAssembly programs outside of a browser. My personal feeling on Wasi is that it copies POSIX too much, which has a lot of advantages when it comes to integrating it as a target for existing programming languages, but disqualifies it for the title of “truly-innovative environment”.
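
As an illustration of how little machinery is needed to run a WebAssembly module outside of a browser, here is a minimal sketch using the wasmtime crate. It assumes a program.wasm module that has no imports and exports a parameterless run function; a real program would of course need an interface to its host, which is precisely the gap that Wasi (or an alternative to it) fills:

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Compile (or JIT) the architecture-agnostic module for the host CPU.
    let engine = Engine::default();
    let module = Module::from_file(&engine, "program.wasm")?;

    // Instantiate it with no imports: the module sees nothing of its host
    // environment beyond what it is explicitly given.
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;

    // Call the (assumed) exported entry point.
    let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
    run.call(&mut store, ())?;
    Ok(())
}
```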

The famous talk The Birth & Death of JavaScript also showcases an important point: when running non-native programs, operating systems can more easily sandbox them (no memory paging is needed), to the point that it is reasonable to run user-mode programs in ring 0. While it is unclear by how much, this presumably saves a lot of the CPU cycles consumed by system calls.

Programs rarely end

Unix is based upon the idea of small programs doing small tasks. While this is true for utilities such as grep, sed, ls, etc., and used to be true, for example, for the Apache web server spawning one process per request, it simply isn’t true anymore when you’re not a user actively using a shell.

Personally, most of the time, my system runs Gnome, Firefox, Visual Studio Code, and a lot of background daemons. Apart from the Rust compiler and Cargo (the Rust package manager), there aren’t any programs on my system that start and stop frequently, and when they do, it is primarily for sandboxing purposes (see previous section).

According to the Unix philosophy, firefox is supposed to execute ls every time it wants to know which files are present in a directory. It turns out that, when it wants to know which files are present in a directory, Firefox prefers to execute a strongly-typed C++ function.

It is safe to say that a modern operating system, such as the one we’re trying to design in this article, would consist of long-running daemons, potentially talking to each other, rather than programs that run one small task and then exit.

The Unix philosophy according to which programs must do one thing and do it well is still, in my opinion, very good advice. You just have to replace the word “programs” with “modules in the context of a programming language”.

Sandboxing

I can’t help but see the rise of Docker containers, and virtualization in general, as a consequence of the failure of existing operating systems to properly isolate programs.

Docker lets you execute programs in an isolated environment, to ensure that said programs can’t take control of the system or interfere in a negative way with the other programs running on the same machine. The question I’d like to ask is: why isn’t this the case by default?

Unix traditionally assigns privileges according to the user running the program. A user can be authorized to access some files but not others, or can read some files but not write to them.

SELinux improves on this, and while it is now widely adopted, it is in my opinion not enough. I would personally expect to be able to download a binary using my web browser and naively run it without risking the contents of my /home/`whoami` being deleted.

Probably the biggest and most problematic area when it comes to isolating programs is the file system. While it is a very radical solution, I believe that programs should completely stop sharing access to a single global file system. This solution would be very impractical in a Unix system made of programs that start and stop very often, but it works if you consider a system made of long-running daemons, as described in the previous section.
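
One possible shape for this, sketched below with invented names, is that a program never opens paths in a global namespace at all; instead, it receives handles to whatever private storage it is allowed to use as part of its input:

```rust
// Hypothetical sketch: a program receives its storage as an explicit
// capability instead of ambiently opening paths in a shared file system.
trait Storage {
    fn read(&self, key: &str) -> Option<Vec<u8>>;
    fn write(&mut self, key: &str, value: Vec<u8>);
}

// The program can only touch what it has explicitly been handed.
fn run_program(storage: &mut dyn Storage) {
    let counter = storage
        .read("counter")
        .and_then(|bytes| <[u8; 8]>::try_from(bytes).ok())
        .map(u64::from_le_bytes)
        .unwrap_or(0);
    storage.write("counter", (counter + 1).to_le_bytes().to_vec());
}

// A throwaway in-memory implementation, just to make the sketch runnable.
struct MemStorage(std::collections::HashMap<String, Vec<u8>>);

impl Storage for MemStorage {
    fn read(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
    fn write(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
}

fn main() {
    let mut storage = MemStorage(Default::default());
    run_program(&mut storage);
    assert_eq!(storage.read("counter"), Some(1u64.to_le_bytes().to_vec()));
}
```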

What is a file system anyway?

It is legitimate to ask the question: why do we even need the concept of files? After all, files are an abstraction over data found in hard-disk drives, but there could exist other ways to abstract over them.

In our prototypical operating system, the equivalent of the contents of /usr/bin, /usr/share, /usr/lib, etc. would all be accessible via content-based addressing, by knowing just their hash. These files would still be present somewhere on your disk, but only the part of the operating system responsible for loading content-addressed data needs to know where.

Apart from this, what remains is the content of /home: your family pictures, whether your IDE defaults to spaces or tabs, saved games from your video games, and so on. All these elements are files that users might want to share between their machines.

In the name of simplicity, I believe that it should be possible for the owner of a system to look at any file on their disk and be able to say what its purpose is.

While I don’t have a precise idea of how to redefine the concept of a file system, I do see the existing hierarchies in both Unix and Windows as obsolete concepts, particularly in light of cloud computing and content-based addressing.

Predictable environments

I want to briefly mention that, in addition to security, the other selling point of Docker is being able to execute programs in a predictable environment. A Docker image embeds all the programs and shared libraries necessary for the desired program to function properly.

With the techniques already presented earlier in this article (namely content-based addressing, and configuration passed entirely through the command line), this use case completely vanishes, as the environment is always identical on all machines.

Conclusion

All the design proposals presented in this article are fully or partially implemented in my redshirt operating system project.

Please note, however, that redshirt is a personal side-project. I am writing this article in order to present how I think an operating system should be, rather than as a way to promote redshirt.

Here are a few things that I didn’t cover:

  • Monolithic vs micro-kernel vs your-favourite-denomination-kernel. This is one of the ultimate nerd debates, and I don’t think that the answer to this question is actually very important.
  • What about the GPL? The magic of content-based addressing makes it harder to maintain an open-source-only system. There might be solutions to that, but it’s off-topic.
  • Precisely which environment is offered to programs, and how programs talk to the kernel or to each other. The point of this article isn’t to decide which system calls are available, or whether to pass messages through shared memory, but to set the general intention that the overall architecture should be simple, and misuse should be difficult or impossible.

One last, maybe obvious, but important point to note is that I have written this article assuming that the cost of rewriting everything from scratch would be zero. This is obviously not the case, and consequently it is probably preferable in practice to maintain some degree of backwards compatibility.

If anything, I hope that this article has given ideas or inspiration to some readers. Maybe, one day, we will live in a wonderful world where computers work. Thanks for reading.
