
1.1.2 History of Unix and Linux

Early Days of Time-Sharing and Multics

In the 1960s, computers were rare, expensive machines that filled entire rooms. People interacted with them in batch mode, submitting stacks of punched cards and waiting for the results. This process was slow and inconvenient. Researchers began to explore time-sharing systems, where many users could interact with a computer at the same time, each with a terminal.

One of the most ambitious projects of that era was Multics, the Multiplexed Information and Computing Service. It was a joint project involving MIT, Bell Labs, and General Electric. Multics aimed to be a powerful, secure, multiuser operating system with advanced features such as hierarchical file systems and dynamic linking. It was also complex and difficult to implement.

Engineers at Bell Labs, including Ken Thompson and Dennis Ritchie, became dissatisfied with the slow progress and complexity of Multics. Bell Labs eventually withdrew from the project. Thompson, however, still wanted a simple, interactive operating system that scientists and programmers could use effectively.

Birth of Unix at Bell Labs

After Bell Labs left the Multics project, Ken Thompson began to design a smaller, simpler system inspired by some of the concepts behind Multics, but with very different goals. He initially wrote a new operating system for a small DEC PDP-7 minicomputer that was available inside Bell Labs. This system soon gained the name Unix.

Unix embraced a philosophy of simplicity and modularity. Instead of one large, complex program that tried to do everything, Unix encouraged small, focused tools that each did one job well and could be combined. The file system was hierarchical, with a single root and directories beneath it, and devices were represented as files. This design made the system easier to program and easier to extend.
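
To make the idea of devices as files concrete, here is a minimal sketch in C, written for a modern Linux system rather than taken from historical Unix code. It opens the device file /dev/urandom with the same open and read calls used for ordinary files and prints a few random bytes; the buffer size and output format are arbitrary choices for illustration.

    /* devfile.c - a sketch showing that a device is accessed like a file. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* /dev/urandom is a device file, opened like any ordinary file. */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        unsigned char buf[8];
        ssize_t n = read(fd, buf, sizeof buf); /* same read() as for regular files */
        close(fd);

        if (n < 0) {
            perror("read");
            return 1;
        }

        /* Print the bytes in hexadecimal. */
        for (ssize_t i = 0; i < n; i++) {
            printf("%02x", (unsigned) buf[i]);
        }
        printf("\n");
        return 0;
    }

The same program could read a regular file instead of /dev/urandom by changing nothing but the path, which is exactly the uniformity this design was aiming for.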

Dennis Ritchie contributed a key innovation. He developed the C programming language and then helped rewrite most of the Unix operating system in C. At that time, operating systems were usually written in low-level assembly language tailored to one specific type of hardware. Writing Unix in C made the system much more portable: it could be adapted to run on different hardware with far less effort. This decision had long-term consequences for the spread of Unix-like systems.

Unix quickly grew inside Bell Labs and then at universities that received it under license. Its design attracted researchers, teachers, and students. The combination of a powerful command line, a consistent file model, and a portable implementation made it ideal as a teaching and research platform.

The Unix Philosophy and Culture

Over time, people began to articulate an informal Unix philosophy. Although it was never a strict standard, some ideas were widely shared in the Unix community.

One idea was to write programs that do one thing and do it well. Another was to write programs that work together by reading from standard input and writing to standard output. The shell could connect these programs into pipelines to perform complex tasks by composing simple tools.
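
As a minimal illustration of this style, the following C program, a sketch rather than an excerpt from any real Unix source, behaves as a classic filter: it reads whatever arrives on standard input, counts the lines, and writes the count to standard output. Because it assumes nothing about where its input comes from, the shell can place it at the end of a pipeline, for example who | ./count_lines (the file name count_lines.c and the build step are hypothetical).

    /* count_lines.c - a small filter in the Unix style. */
    #include <stdio.h>

    int main(void)
    {
        int c;
        long lines = 0;

        /* Read standard input one character at a time until end of input. */
        while ((c = getchar()) != EOF) {
            if (c == '\n') {
                lines++;
            }
        }

        /* Write the single result to standard output. */
        printf("%ld\n", lines);
        return 0;
    }

Tools written this way stay small because they leave input selection, output redirection, and composition to the shell.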

Unix developers also valued text as a universal interface. Many configuration files, logs, and communication channels were plain text, which made them easy to inspect and manipulate with standard tools. This approach influenced not only Unix and its descendants, but software design in general.

Unix culture also favored sharing ideas and source code, particularly in academic environments. Students and researchers often experimented with the system and wrote their own tools, which in turn influenced future designs.

Commercialization and Fragmentation of Unix

As Unix became more popular, it attracted commercial attention. AT&T, which owned Bell Labs, began to license Unix commercially once regulations changed and the company was allowed to enter the computer market. Different organizations obtained Unix source code and developed their own versions.

Universities, especially the University of California at Berkeley, played a major role in this process. Berkeley’s Computer Systems Research Group produced the Berkeley Software Distribution, or BSD, which started as a set of enhancements to AT&T Unix and evolved into its own line of Unix-like systems. BSD introduced many important features, including the TCP/IP networking stack that helped build the early Internet.

Meanwhile, commercial vendors created their own proprietary Unix variants. Examples included AT&T’s System V, Sun Microsystems’ SunOS and later Solaris, IBM’s AIX, Hewlett-Packard’s HP-UX, and others. Each vendor added features, tools, and sometimes new interfaces.

This proliferation led to fragmentation. Different Unix systems were similar at a high level, but they were not always compatible. System calls, utilities, and administrative tools could differ, which complicated software development and deployment. Standards efforts, such as POSIX, tried to define a common set of interfaces for Unix-like systems so that programs could be portable.

In the commercial world, Unix systems often required expensive hardware and per-user or per-CPU licenses. While Unix remained popular in universities and data centers, the combination of licensing costs, proprietary extensions, and portability issues left an opening for a free, Unix-like system that anyone could use and modify.

Free Software and the GNU Project

In the 1980s, concerns about software freedom began to grow, especially in academic and hacker communities. Richard Stallman, a programmer at MIT, saw a shift from a culture where source code was naturally shared to a world where code was proprietary, restricted, and closed.

Stallman launched the GNU project and founded the Free Software Foundation to support it. GNU is a recursive acronym for “GNU’s Not Unix.” The goal was to create a complete, free software operating system that was compatible with Unix at the interface level, but whose code could be studied, modified, and shared by anyone.

The GNU project created many essential components: a C compiler (GCC), core utilities like ls and cp, the GNU C library (glibc), the bash shell, editors, and more. These tools were released under the GNU General Public License, or GPL. The GPL is a copyleft license that allows anyone to use, modify, and distribute the software, but requires that derivative works remain free and that source code be available.

By the late 1980s and early 1990s, GNU had produced most of the pieces needed for the user space of a Unix-like operating system. However, one crucial component was missing: a working kernel. There were plans for a GNU kernel called the Hurd, but it was not ready for widespread use.

This situation created an interesting gap. There was a nearly complete free Unix-like environment in user space, but no practical free kernel to run it on.

Early Free Unix-like Efforts Before Linux

GNU was not the only project working toward a free Unix-like system. In the late 1980s and early 1990s, there were several efforts to create or release free or low-cost Unix-compatible software.

One important development involved BSD. Since BSD had originated from AT&T Unix code, there were legal questions about which parts could be freely distributed. Over time, projects such as 386BSD and later FreeBSD, NetBSD, and OpenBSD emerged, aiming to provide freely redistributable BSD-based systems that removed proprietary AT&T code.

These BSD systems were technically strong and influential, but the legal uncertainties of the early years slowed their adoption. For someone searching for a clearly free, Unix-like kernel in the very early 1990s, the landscape was complicated.

There were also non-Unix attempts at free operating systems, as well as academic microkernel projects, but none had the combination of practicality, Unix compatibility, and licensing clarity that many programmers were seeking.

It was within this context that a Finnish student began a hobby project that would eventually become the Linux kernel.

Linus Torvalds and the First Linux Kernel

In 1991, Linus Torvalds, a computer science student at the University of Helsinki, started working on a small operating system kernel for the Intel 80386 processor. He initially wanted something better than the teaching operating system he had been using and was also inspired by Minix, a small Unix-like system created by Andrew Tanenbaum for educational purposes.

Minix was useful as a learning tool, but it had licensing restrictions and technical limitations. Torvalds wanted something more capable and more open. He began by creating a simple kernel that could run on his personal computer and interact with the hardware directly.

In August 1991, Torvalds announced his work on the comp.os.minix newsgroup and invited others to try it. He described it as “just a hobby” that “won’t be big and professional like gnu,” but the project quickly attracted interest. People around the world downloaded the code, experimented with it, and contributed patches.

Torvalds chose to release the kernel source code under a free license, and soon after, it was placed under the GNU GPL. This decision meant that Linux would remain free software in the sense defined by the Free Software Foundation. Anyone could study and modify it, but improvements had to stay free as well.

The term “Linux” came from combining “Linus” with “Unix.” Torvalds initially called the project Freax, but “Linux” became the widely accepted name for the kernel.

Combining Linux with GNU: A Complete System

The Linux kernel by itself could not form a complete operating system. It needed compilers, libraries, shells, and utilities. This is where the GNU project’s earlier work became crucial.

Programmers began combining the Linux kernel with the GNU user space tools, libraries, and utilities. The result was a complete, fully functional, Unix-like operating system that was free to use and modify. Even though the kernel and user space came from different projects, they fit together because both aimed to be Unix-compatible.

This combination is what most people informally call “Linux” today. More precisely, Linux is the kernel, and the broader system is a distribution that combines the Linux kernel with the GNU tools and many other components, which is why some people call the whole system GNU/Linux.

The rapid availability of a complete, free Unix-like system created excitement among developers, students, and hobbyists. People could now run a powerful multiuser, multitasking operating system on inexpensive hardware, such as standard PCs, without paying for proprietary licenses.

Because the system was open, bugs could be found and fixed quickly, and new hardware could be supported by adding drivers directly into the kernel source. This collaborative development model would become one of Linux’s defining strengths.

The Rise of Linux Distributions

As Linux grew in features and popularity, the number of components involved also grew. Installing and configuring the kernel, system libraries, utilities, and applications from source became complex.

To make things easier, individuals and organizations began to assemble these pieces into coherent, prepackaged systems called distributions. A distribution typically included the Linux kernel, GNU tools, an installer, a package manager, documentation, and collections of applications configured to work together.

Early examples included SLS, the Softlanding Linux System, and Slackware, which grew out of SLS and became one of the first widely used Linux distributions. Each distribution had its own goals. Some aimed for simplicity and minimalism, others for user friendliness or specialized use cases.

As Linux matured, more distributions appeared: Debian, which emphasized community governance and free software principles; Red Hat Linux, which later split into Red Hat Enterprise Linux and the community-based Fedora; and many more. These distributions provided different ways to install and maintain a Linux system, but they all relied on the same fundamental kernel and core tools.

Distributions helped Linux reach users who were not necessarily programmers. With graphical installers, package management, and sensible defaults, Linux began to appeal to a wider audience.

Linux, Servers, and the Growth of the Internet

In the 1990s and early 2000s, the Internet expanded rapidly. Organizations needed reliable, flexible, and cost-effective servers to host websites, email, databases, and other services. Proprietary Unix systems were powerful but often expensive and tied to specific vendor hardware.

Linux offered a compelling alternative. It ran on commodity hardware, supported the networking protocols needed for Internet services, and could be customized for specific tasks. Web servers like Apache, running on Linux, powered a growing share of the world’s websites.

The open development model of Linux made it attractive to Internet infrastructure companies. Companies could use Linux internally, contribute improvements, and benefit from community work. This model reduced vendor lock-in and licensing costs.

As Linux proved itself reliable and secure enough for serious workloads, major companies began to adopt it. Linux became common in data centers, web hosting, and eventually in large scale environments such as supercomputers and cloud computing infrastructures.

The combination of Linux and other open source software played a crucial role in building much of the modern Internet. Many core services and platforms, from web hosting to DNS servers to mail systems, depended on Linux or other Unix-like systems.

Linux on the Desktop and in Everyday Devices

While Linux established a strong presence on servers and in data centers, its path on the desktop was more gradual. Early desktop environments were less polished than proprietary systems, and hardware support, especially for consumer devices, could be uneven.

Over time, desktop environments such as GNOME, KDE Plasma, and others matured. Distributions began to focus on user friendly features, graphical installers, and preconfigured systems suitable for general desktop use. Projects worked on better hardware detection and drivers, and manufacturers increasingly provided specifications or direct support for Linux.

At the same time, Linux expanded beyond traditional desktop and server roles into embedded systems. An embedded system is a specialized computer built into a device such as a router, smart TV, or industrial controller. Linux’s modularity and ability to be customized made it a natural fit for such devices.

Perhaps the most visible example is Android. Although Android uses its own user space and application framework, it is built on top of the Linux kernel. This means that hundreds of millions of smartphones and tablets run a Linux kernel every day. Other consumer devices, such as streaming boxes, home routers, and even some appliances, also rely on Linux internally.

Linux’s presence in these areas shows how an operating system that began as a hobby project on a personal computer grew into a foundational technology of everyday life.

Corporate Involvement and Open Source Collaboration

As Linux and other open source software became critical infrastructure, large technology companies began to participate in their development. Companies such as IBM, Red Hat, Intel, Google, and many others started contributing code, funding developers, and supporting open source communities.

The Linux kernel became one of the largest collaborative software projects in history. Developers from around the world, many employed by different companies that might compete in other areas, worked together on a shared code base. Contributions were reviewed and integrated through a structured process with maintainers overseeing different subsystems.

The open source development model proved effective for complex systems software. Bugs and security issues could be caught by many eyes, and improvements in performance and hardware support were shared widely. This model also encouraged transparent engineering practices, with mailing lists, public repositories, and documented decision making.

Organizations formed around open source, such as the Linux Foundation, helped coordinate efforts, provide legal and financial frameworks, and promote best practices. Corporate involvement did not replace community contributions, but it added resources and stability that supported long-term development.

This blend of volunteer and corporate contributions is a hallmark of modern Linux development and has influenced many other projects that follow similar collaborative models.

Key Milestones Shaping Modern Linux

Over the decades, several milestones have shaped how Linux is used and perceived. While a full timeline would be long, some events stand out for their impact.

The introduction of coherent package management systems made it much easier to install and update software. Projects like Debian’s APT and Red Hat’s RPM-based tools allowed users to manage thousands of packages consistently. This helped distributions scale and simplified system administration.

The rise of virtualization and later containers placed Linux at the center of modern infrastructure. With technologies like KVM for full virtualization and namespaces and cgroups for containers, Linux became the foundation for platforms such as Docker and Kubernetes. This allowed applications to be isolated, portable, and easier to deploy at scale.

Supercomputers increasingly adopted Linux as well. Over time, Linux became the dominant operating system in the TOP500 list of the world’s fastest supercomputers. Its flexibility and customizability allowed researchers to tune systems for high performance computing.

On the desktop, more polished distributions and interfaces made Linux a viable alternative for developers, students, and general users. Projects focusing on user experience, hardware compatibility, and application ecosystems helped broaden Linux’s reach beyond experts.

Each of these milestones reflects a broader trend: Linux adapted to new use cases and technologies without losing its core strengths of openness, modularity, and community-led development.

The Ongoing Evolution of Unix and Linux

Today, Unix as a trademark is controlled by The Open Group. A small number of operating systems are officially certified as UNIX, including Apple’s macOS, which has BSD roots, and proprietary Unix variants such as IBM’s AIX. These systems remain important in some enterprise and specialized environments.

Linux, although not certified as Unix in the trademark sense, is considered a Unix-like system. It follows many of the same design principles and interfaces that originated with Unix, but it is a distinct kernel created independently. The term “Unix-like” reflects this relationship.

The BSD family, including FreeBSD, OpenBSD, and NetBSD, continues to evolve alongside Linux. These systems share many historical and conceptual roots with Unix and contribute to the broader ecosystem of free Unix like operating systems.

Linux itself continues to change. The kernel is updated regularly, with new versions adding hardware support, performance improvements, and new features. User space components, distributions, and desktop environments also evolve. Projects like systemd, new filesystems, and advancements in security features illustrate how Linux development responds to new requirements.

Throughout this evolution, a few themes have remained consistent: a focus on multiuser, multitasking capabilities, an emphasis on composable tools and text-based interfaces, and a commitment to open collaboration. These traits link modern Linux systems back to the early experiments at Bell Labs and the ideals of free software advocates.

Linux began as a hobby project that combined ideas from Unix, free software, and academic operating systems. Over time, it grew into a central part of modern computing infrastructure, running on devices from tiny embedded boards to the world’s largest supercomputers. The story of Unix and Linux is one of ideas traveling across decades, transformed and extended by many different hands, yet still recognizable in the systems people use today.
