NT is often touted as a "very advanced" operating system. Why is that? What made NT better than Unix, if anything? And is that still the case? Over the years, I’ve repeatedly heard that Windows NT is a very advanced operating system and, being a Unix person myself, it has bothered me not to know why. I’ve been meaning to answer this question for years, and now that I finally can, I want to present my findings to you.
My desire to know about NT’s internals started in 2006 when I applied to the Google Summer of Code program to develop Boost.Process. I needed such a library for ATF, but I also saw the project as a chance to learn something about the Win32 API. This journey then continued in 2020 with me choosing to join Microsoft after a long stint at Google, and with me buying the Windows Internals 5th edition book in 2021 (which I never fully read due to its incredible detail and length). None of these made me learn what I wanted though: the ways in which NT fundamentally differs from Unix, if at all.

Then, at the end of 2023, the Showstopper book sparked this curiosity once again. And soon, a new thought came to mind: the Windows Internals 5th edition book was too obtuse but… what about the first edition? Surely it must have been easier to digest because the system was much simpler back in the early 1990s. So, lo and behold, I searched for this edition, found it under the title Inside Windows NT, read it cover to cover, and took notes to evaluate NT vs. Unix.

Which brings me to this article—a collection of thoughts comparing the design of NT (July 1993) against contemporary Unix systems such as 4.4BSD (June 1994) or Linux 1.0 (March 1994). Beware that, due to my background, the text is written from the point of view of a Unix “expert” and an NT “novice”, so it focuses on describing the things that NT does differently.
Mission
Unix’s history is long—much longer than NT’s. Unix’s development started in 1969 and its primary goal was to be a convenient platform for programmers. Unix was inspired by Multics but, compared to that other system, Unix focused on simplicity, which is the trait that let it triumph over Multics. Portability and multiprocessing were not original goals of the Unix design though: these features were retrofitted in the many “forks” and reinventions of Unix years later.
On Microsoft’s side, the first release of MS-DOS launched in August 1981 and the first release of “legacy Windows” (the DOS-based editions) launched in November 1985. While MS-DOS was a widespread success, it wasn’t until Windows 3.0 in May 1990 that Windows started to really matter. Windows NT was conceived in 1989 and saw the light with the NT 3.1 release in July 1993. This timeline gave Microsoft an edge: the design of NT started 20 years after Unix’s, and Microsoft already had a large user base thanks to MS-DOS and legacy Windows. The team at Microsoft designing NT had the hindsight of these developments, previous experience developing other operating systems, and access to more modern technology, so they could “shoot for the moon” with the creation of NT.
In particular, NT started with the following design goals as part of its mission, which are in stark contrast to Unix’s: portability, support for multiprocessing systems (SMP), and compatibility with DOS, legacy Windows, OS/2, and POSIX. These were not goals to scoff at, and they meant that NT started with solid design principles from the get-go. In other words: these features were all present from day one and not bolted on at a later stage as they were in many Unixes.
The Kernel
Unix is, with few exceptions like Minix or GNU Hurd, implemented as a monolithic kernel that exposes a collection of system calls to interact with the facilities offered by the operating system. NT, on the other hand, is a hybrid between a monolithic kernel and a microkernel: the privileged component, known as the executive, presents itself as a collection of modular components to user-space subsystems. The user-space subsystems are special processes which “translate” the APIs that the applications consume (be it POSIX, OS/2, etc.) into executive system calls.
One important piece of the NT executive is the Hardware Abstraction Layer (HAL), a module that provides abstract primitives to access the machine’s hardware and that serves as the foundation for the rest of the kernel. This layer is the key that allows NT to run on various architectures, including i386, Alpha, and PowerPC. To put the importance of the HAL in perspective, contemporary Unixes were coupled to a specific architecture: yes, Unix-the-concept was portable because there existed many different variants for different machines, but the implementation was not.
Another important piece of the NT executive is its support for multiprocessing systems and its preemptive kernel. The kernel has various interrupt levels (SPLs in BSD terminology) to determine what can interrupt what else (e.g. a clock interrupt has higher priority than a disk interrupt) but, more importantly, kernel threads can be preempted by other kernel threads. This is “of course” what every high-performance Unix system does today, but it’s not how many Unixes started: those systems began with a kernel that supported neither preemption nor multiprocessing; then they added support for user-space multiprocessing; and only then did they add kernel preemption.
Objects
NT is an object-oriented kernel. You might think that Unix is too: after all, processes are defined by a struct and file system implementations deal with vnodes (“virtual nodes”, not to be confused with inodes which are a file system-specific implementation detail). But that’s not quite the same as what NT does: NT forces all of these different objects to have a common representation in the system. You can rightfully be skeptical about this because… how can you offer a meaningful abstraction over such disparate things as processes and file handles? You can’t, really, but NT forced all of these to inherit from a common object type and, surprisingly, this results in some nice properties:
Centralized access control: Objects are exclusively created by the object manager, which means there is a single place in the code to enforce policy. This is powerful because the semantics for, say, permission checks, can be defined in just one location and applied uniformly throughout the system.
Common identity: Objects have identities and they are all represented in a single tree. This means that there is a unique namespace for all objects, no matter if we are talking about processes, file handles, or pipes. The objects in the tree are addressable via names (paths) and different portions of the tree can be owned by different subsystems.
Unified event handling: All object types have a signaled state, whose semantics are specific to each object type. For example, a process object enters the signaled state when the process exits, and a file handle object enters the signaled state when an I/O request completes. This makes it trivial to write event-driven code (ahem, async code) in userspace, as a single wait-style system call can wait for a group of objects to change their state—no matter what type they are. The sketch after this list shows the idea.
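To make this concrete, here is a minimal Win32 sketch in C. It is hedged: it uses today’s API spellings, and notepad.exe is just a stand-in for any child program. A single call blocks on two objects of completely different types:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* An event object; something else would signal it via SetEvent. */
    HANDLE event = CreateEventW(NULL, TRUE, FALSE, NULL);

    /* A child process; its handle signals when the process exits. */
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"notepad.exe";  /* stand-in for any program */
    if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return 1;

    /* One wait call, two unrelated object types: this is the payoff of
       the common "signaled state" that all objects share. */
    HANDLE objects[2] = { pi.hProcess, event };
    DWORD which = WaitForMultipleObjects(2, objects, FALSE, INFINITE);
    if (which == WAIT_OBJECT_0)
        printf("the process exited\n");
    else if (which == WAIT_OBJECT_0 + 1)
        printf("the event was signaled\n");

    CloseHandle(event);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}
```

A contemporary Unix program had to juggle wait(2) for processes and select(2) for descriptors, because there was no common waitable representation across them.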
Processes
Processes are a common entity in both NT and Unix but they aren’t quite the same. In Unix, processes are represented in a tree, which means that each process has a parent and a process can have zero or more children. In NT, however, there is no such relationship: processes can “inherit” resources from their creators—any type of object, basically—but they are standalone entities after they are created.
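Here is a hedged C sketch of that inheritance model. child.exe is a hypothetical program, and a real parent would also have to tell the child which handle value it received, for example on the command line:

```c
#include <windows.h>

int main(void) {
    /* Mark the handle as inheritable at creation time. */
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE /* bInheritHandle */ };
    HANDLE event = CreateEventW(&sa, TRUE, FALSE, NULL);
    if (event == NULL)
        return 1;

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    wchar_t cmd[] = L"child.exe";  /* hypothetical child program */

    /* bInheritHandles = TRUE copies inheritable handles into the child.
       After this point the child is a standalone entity: NT records no
       parent/child relationship between the two processes. */
    if (CreateProcessW(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi)) {
        CloseHandle(pi.hProcess);
        CloseHandle(pi.hThread);
    }

    CloseHandle(event);
    return 0;
}
```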
What wasn’t common back when NT was designed were threads: Mach was the first Unix-like kernel to integrate threads in 1985, which means that other Unixes adopted this concept later on and had to retrofit it into their existing designs. For example, Linux chose to represent threads as processes, each with its own PID, in its 2.0 release in June 1996; and NetBSD didn’t get threads, represented as separate entities from processes, until its 2.0 release in 2004. Contrary to Unix, NT chose to support threads from the very beginning, knowing that they were a necessity for high-performance computing on SMP machines.
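A minimal sketch of those first-class threads in C follows; the worker function is made up, but the calls are the real Win32 ones:

```c
#include <windows.h>
#include <stdio.h>

/* A made-up worker; threads share their process's address space. */
static DWORD WINAPI worker(LPVOID arg) {
    (void)arg;
    printf("hello from thread %lu\n", GetCurrentThreadId());
    return 0;
}

int main(void) {
    HANDLE thread = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    if (thread == NULL)
        return 1;

    /* A thread is an object too, so waiting for it to finish reuses the
       same signaled-state machinery as for any other object. */
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    return 0;
}
```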
Compatibility
As mentioned in the introduction, a major goal of NT was to be compatible with applications written for legacy Windows, DOS, OS/2 and POSIX. One reason for this was technical, as this forced the system to have an elegant design; the other reason was political, as NT was a joint development with IBM and NT had to support OS/2 applications even if, in the end, NT ended up being Windows.
This need for compatibility forced NT’s design to be significantly different than Unix’s. In Unix, user-space applications talk to the kernel directly via its system call interface, and this interface is the Unix interface. Oftentimes, but not always, the C library provides the glue to call the kernel and applications never issue system calls themselves—but that’s a minor detail. Contrast this to NT where applications do not talk to the executive (the kernel) directly. Instead, each application talks to one specific protected subsystem, and these subsystems are the ones that implement the APIs of the various operating systems that NT wanted to be compatible with. These subsystems are implemented as user-space servers (they are not inside the NT “microkernel”).
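To see the Unix side of this in code, here is a small C sketch. It is Linux-flavored and hedged: syscall(2) and SYS_write are Linux spellings, but the point holds for any Unix, as the same kernel entry point sits behind both calls:

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

int main(void) {
    const char *msg = "hello from the kernel interface\n";

    /* The usual path: the libc wrapper around the write(2) system call. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* The same system call issued directly, bypassing the wrapper: on
       Unix, the system call table itself is the stable interface. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```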
Virtual Memory
NT, just as Unix, relies on a Memory Management Unit (MMU) with paging to offer protection across processes and to provide virtual memory. Paging in user-space processes is a common mechanism to give them a larger address space than the amount of physical memory on a machine. But one thing that put NT ahead of contemporary Unix systems is that the kernel itself can be paged out to disk too. Obviously not the whole kernel—if it all were pageable, you’d run into the situation where resolving a kernel page fault requires code from a file system driver that was itself paged out—but large portions of it are. This is not particularly interesting these days because kernels are small compared to the typical installed memory on a machine, but it certainly made a big difference in the past when every byte was precious.
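The kernel’s own pageability cannot be demonstrated from user space, but the demand-paged address space that the same pager gives to processes can. A hedged C sketch:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Reserve 1 GiB of address space without backing it with memory... */
    SIZE_T size = (SIZE_T)1 << 30;
    char *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL)
        return 1;

    /* ...and commit only the single page we actually touch. The pager
       backs pages on demand, so a process's address space can far
       exceed the machine's physical memory. */
    if (VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE) == NULL)
        return 1;
    base[0] = 42;
    printf("committed one page out of a 1 GiB reservation\n");

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```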
I/O Subsystem
Early versions of Unix only supported one file system. For example, it wasn’t until 4.3BSD-Reno in 1990 that the BSDs gained the Virtual File System (VFS) abstraction to support more than just UFS. NT, on the other hand, started with a design that allowed multiple file systems. In order to support multiple file systems, the kernel has to expose their namespaces in some way. Unix combines the file systems under a single file hierarchy via mount points: the VFS layer provides the mechanisms to identify which nodes correspond to the root of a file system and redirects requests to those file system drivers when traversing a path.
NT has a similar design even if, from the standard user interface, file systems appear as disjoint drives: internally, the executive represents file systems as objects in the object tree, and each object is responsible for parsing the remainder of a path. Those file system objects are remapped as DOS drives so that userspace can access them. And, guess what? The DOS drives are also objects under a separate subtree that redirects I/O to the file systems they reference.
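You can observe this double life of drive letters from user space. A hedged C sketch (the target path it prints varies per machine):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Ask what NT object the C: drive letter actually points at.
       Typical output: \Device\HarddiskVolume3 or similar. */
    wchar_t target[512];
    if (QueryDosDeviceW(L"C:", target, 512))
        wprintf(L"C: -> %ls\n", target);
    return 0;
}
```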
In file system terms, NT ended up shipping with NTFS. NTFS was a really advanced file system for its time, even if we like to bash on it for its poor performance (a misguided claim). The I/O subsystem of NT, in combination with NTFS, brought 64-bit file addressing, journaling, and even Unicode file names. Linux didn’t get 64-bit file support until the late 1990s and didn’t get journaling until ext3 launched in 2001. Soft updates, an alternate fault-tolerance mechanism, didn’t appear in FreeBSD until 1998. And Unix represents file names as NUL-terminated byte arrays, not Unicode.
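Both properties are still visible in the API today. A hedged C sketch, where café.txt is a made-up file that must already exist: the W-suffixed calls take UTF-16 names, and GetFileSizeEx hands back a 64-bit size:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* File names are Unicode (UTF-16) and sizes are 64-bit quantities. */
    HANDLE f = CreateFileW(L"caf\u00e9.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (f == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER size;
    if (GetFileSizeEx(f, &size))
        printf("%lld bytes\n", (long long)size.QuadPart);

    CloseHandle(f);
    return 0;
}
```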
Networking
The Internet is everywhere today, but when NT was designed, that was not the case. Looking back at the Microsoft ecosystem, DOS 3.1 (1985) included the foundations for file sharing in the FAT file system, yet the “OS” itself did not provide any networking features: a separate product called Microsoft Networks (MS-NET) did. Windows 3.0 (1990) included support for NetBIOS, which allowed primitive printer and file sharing on local networks, but support for TCP/IP was nowhere to be seen. In contrast, Unix was the Internet: all foundational Internet protocols were written for and with it.
During the design of NT, it was therefore critical to account for good network support, and indeed NT launched with networking features: it supported both the Internet protocols and the traditional LAN protocols used in pre-existing Microsoft environments, which put it ahead of Unix in corporate environments.
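NT exposed its networking to applications through the Winsock API, which deliberately mirrored BSD sockets so that Unix networking code could be ported. A hedged C sketch using today’s Winsock 2 spelling (NT 3.1 itself shipped the older Winsock 1.1):

```c
#include <winsock2.h>
#include <stdio.h>

int main(void) {
    /* Winsock requires explicit library initialization, unlike BSD sockets. */
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    /* From here on, the calls look just like the BSD ones. */
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (s == INVALID_SOCKET) {
        WSACleanup();
        return 1;
    }
    printf("TCP socket created\n");

    closesocket(s);
    WSACleanup();
    return 0;
}
```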
User-Space
We are getting close to the end, I promise. There are just a few user-space topics to briefly touch on:
Configuration: NT centralized system and application configuration under a database known as the registry, freeing itself from the old CONFIG.SYS, AUTOEXEC.BAT, and the myriad INI files that legacy Windows used. This made some people very angry but, in the end, a unified configuration interface is beneficial to everyone: applications are easier to write because there is a single foundation to support, and users have an easier time tuning their system because there is just one place to look at. (See the first sketch after this list.)
Internationalization: Microsoft, being the large company that was already shipping Windows 3.x across the world, understood that localization was important and made NT support it from the very beginning. Contrast this to Unix, where UTF-8 support didn’t start to show up until the late 1990s and where supporting different languages came via the optional gettext add-on.
The C language: One thing Unix systems like FreeBSD and NetBSD have fantasized about for a while is coming up with their own dialect of C to implement the kernel in a safer manner. This has never gone anywhere except, maybe, for Linux relying on GCC-only extensions. Microsoft, on the other hand, had the privilege of owning a C compiler, so they did do this with NT, which is written in Microsoft C. As an example, NT relies on Structured Exception Handling (SEH), a feature that adds __try/__except clauses to handle software and hardware exceptions. I wouldn’t say this is a big plus, but it’s indeed a difference. (See the second sketch below.)
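Two sketches to close these topics. First, configuration: a hedged C example that reads one well-known string value from the registry (RegGetValueW is a modern convenience function, much newer than NT 3.1, but the single-namespace idea is unchanged):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Read a string value from the registry: one API and one namespace,
       instead of parsing per-application INI files. */
    wchar_t buf[256];
    DWORD size = sizeof(buf);
    LSTATUS rc = RegGetValueW(HKEY_LOCAL_MACHINE,
                              L"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                              L"ProductName", RRF_RT_REG_SZ, NULL, buf, &size);
    if (rc == ERROR_SUCCESS)
        wprintf(L"%ls\n", buf);
    return 0;
}
```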
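Second, SEH: a hedged example that only compiles with Microsoft’s compiler, in which a hardware fault (an access violation) is delivered to a software handler:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    volatile int *p = NULL;
    /* Structured exception handling: the hardware trap raised by the
       bad write below is routed to the __except filter. */
    __try {
        *p = 1;  /* faults */
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
              ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        printf("caught an access violation via SEH\n");
    }
    return 0;
}
```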
Conclusion
NT was groundbreaking technology when it launched. As I presented above, many of the features we take for granted today in systems design were present in NT since its inception, whereas almost all other Unix systems had to gain those features slowly over time. As a result, such features don’t always integrate seamlessly with Unix philosophies. Today, however, it’s not clear to me that NT is truly “more advanced” than, say, Linux or FreeBSD. It is true that NT had more solid design principles at the outset and more features than its contemporary operating systems, but nowadays… the differences are blurry. Yes, NT is advanced, but not significantly more so than modern Unixes. What I find disappointing is that, even though NT has all these solid design principles in place… bloat in the UI doesn’t let the design shine through. The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.
I’ll leave you with the books used to write this article in case you want to go through my learning journey. I had to skip over tons of interesting details, as you can imagine, so these are worth a read:
Inside Windows NT, 1st edition.
The Design and Implementation of the 4.4BSD Operating System.
Windows Internals, part 1, 7th edition.
Windows Internals, part 2, 7th edition.
The Design and Implementation of the FreeBSD Operating System, 2nd edition.
Thanks for making it this far. Don’t forget to subscribe to Blog System/5 to receive new posts and support my work!