Linux System Architecture
The course is part of this learning path
The second course in this Linux certification series (the first was a series introduction, and the third will cover boot and package management) focuses on System Architecture. It explores how Linux works within its hardware environment and how you can use Linux tools to optimize your system for your specific needs.
You'll learn how to identify and manage hardware peripherals, how the Linux boot process and runlevels work, and how you can control them.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
About the Author
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.
Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.
His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.
If you want to really understand how an operating system works, you'll need a good sense of how it connects, tracks, and controls its hardware environment. Linux, when it boots or detects a change to its host hardware profile, maintains a system of virtual - or pseudo - files describing the devices and drivers it can see. By "virtual files" I mean regular text files that are saved to volatile memory rather than to a disk drive. Their contents are lost whenever the system is shut down, but that's no big deal: the boot process will create appropriately updated versions the next time it starts up.
Linux virtual filesystems and hardware peripherals
The Linux kernel writes hardware and driver data to virtual files in two separate directory hierarchies. Everything under /proc (for "process") belongs to the procfs system - the /proc/sys subtree, in particular, exposes the sysctl kernel settings - while the /sys directory contains sysfs; fs, by the way, stands for filesystem. What's the difference between the two? sysfs - being the more recent design - is built using a more sophisticated structure. So, for instance, many files under /sys are actually nothing more than symbolic links (symlinks) pointing to the devices themselves. We'll learn more about symlinks and how they work in a later video.
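You can see that symlink structure for yourself with readlink. The class name block exists on most systems, but the exact devices (and whether /sys is mounted at all, say in a container) will vary from machine to machine, so treat this as an illustrative sketch:

```shell
# Entries under /sys/class/<class>/ are usually symlinks into the
# /sys/devices/ tree; readlink -f resolves them to their real targets.
# (Device names differ per machine; output here is illustrative only.)
ls -l /sys/class/block/ 2>/dev/null | head -n 5
for entry in /sys/class/block/*; do
  readlink -f "$entry"
done 2>/dev/null | head -n 5
```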
Let's take a quick look at the /sys directory tree. Most of the files that will be of interest to us live beneath /sys/class where you'll find all devices organized by type (or, class).
So if we drill down through printer/lp0 - which is the designation Linux has given my Brother laser printer - and then down further through subsystem/lp0 and device, we'll be able to take a look at details of my printer configuration by reading - using cat - the id, resources, and options files.
To explore my hard drive configuration, you would go to /sys/class/block - which is where block devices are described. Since my primary drive is designated sda (with sda1 as its first partition), we'll follow through to sda and then sda1. From here we could examine details like the partition, size, and stat files.
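As a quick sketch, here's how a script might read those sysfs attributes. The size file counts 512-byte sectors regardless of the drive's physical sector size, so turning it into gibibytes takes a little arithmetic; the device names will be whatever your own machine reports:

```shell
# Walk every block device the kernel knows about and report its size.
# sysfs 'size' files count 512-byte sectors, whatever the hardware uses.
for dev in /sys/class/block/*/; do
  name=$(basename "$dev")
  sectors=$(cat "$dev/size" 2>/dev/null) || continue
  echo "$name: $sectors sectors (~$(( sectors * 512 / 1024 / 1024 / 1024 )) GiB)"
done
```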
The /proc system has a somewhat different design. To learn about some system drives, for instance, we could move to /proc/sys/dev and then down to cdrom. Reading the contents of the info file tells us that our CD-ROM drive - it's actually a DVD-RW - is called sr0 and runs at 12x speed.
But if we head back up to the /proc top directory, we'll also see files like cpuinfo - which, read using the text pager less, identifies the type of processor we're running: in my case, since I'm using a quad-core processor, each of the four cores is listed separately. Similarly, the devices file contains information about the character and block devices the kernel knows about - including block devices like my hard drives.
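Those same files are easy to query from a script. For example, counting the processor stanzas in /proc/cpuinfo gives you the number of logical cores - four on my quad-core machine, but the count will match whatever you're running:

```shell
# Each logical CPU gets its own "processor : N" stanza in /proc/cpuinfo.
grep -c '^processor' /proc/cpuinfo

# Grab the model name from the first stanza (x86 label; other
# architectures may name this field differently).
awk -F': *' '/^model name/ { print $2; exit }' /proc/cpuinfo
```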
Beyond the files in /sys and /proc, the /dev directory contains another pseudo file system created during the boot process. As you can probably tell from their filenames, each of these files represents a specific hardware device. We'll spend more time with the files in /dev when we learn about mounting and unmounting devices in later videos.
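You can already get a feel for how /dev identifies its files: the first character of a long listing marks each entry's type, with c for character devices and b for block devices.

```shell
# /dev/null is a character device on every Linux system; the leading 'c'
# in the listing - and the major/minor numbers where a file size would
# normally appear - give it away.
ls -l /dev/null

# Block devices such as disks show a leading 'b' instead.
ls -l /dev/sda 2>/dev/null || echo "no /dev/sda on this machine"
```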
Just to quickly review what we've seen: Linux populates three pseudo filesystems with configuration data defining your hardware environment. Information about physical devices and their drivers can be easily found beneath the /sys directory - and especially beneath /sys/class - and beneath /proc.../sys.../dev. The /dev directory contains special device files through which you can control the devices' use and accessibility.
Using Linux command line tools and modules to manage hardware
Besides those virtual files, you can access the same system hardware and driver information using some command line utilities. So, for instance, if you wanted to list all the PCI devices currently known to your system, you could simply run lspci - where "ls" stands for list. Here, for instance, you can see my Radeon video controller and, down at the end of the list, my two network cards. Running lspci with the -vvxxx argument will display a great deal more information and can be very useful for diagnosing missing or misbehaving devices. The v's stand for verbose - each extra v adds detail - and xxx produces a hexadecimal dump of each device's configuration space.
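In practice you'll often pipe lspci through grep to zero in on one class of device. A hedged sketch - lspci comes from the pciutils package and may not be installed everywhere, and the slot address used for the verbose dump below is purely a placeholder:

```shell
# Filter the device list for network hardware (the match text varies
# by vendor, so the pattern here is only a reasonable guess).
lspci 2>/dev/null | grep -i -E 'ethernet|network' || true

# Verbose output plus a hex dump for a single device. The slot address
# 00:02.0 is hypothetical; substitute a real one from lspci's output.
# Running as root reveals more of the configuration space.
lspci -vvxxx -s 00:02.0 2>/dev/null || true
```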
Running lsusb will display all the registered USB devices, including my keyboard and mouse.
lsmod will display all the kernel modules that are currently loaded. In fact, lsmod is really nothing more than a nicely formatted output of the /proc/modules file.
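You can verify that relationship yourself: each line of /proc/modules starts with the module's name, its memory size, and its use count - the same first three columns lsmod prints. A rough awk reformatting (the field widths are my own choice) looks like this:

```shell
# lsmod's first three columns come straight from /proc/modules.
lsmod 2>/dev/null | head -n 5

# A rough hand-rolled equivalent: name, size, and use count per module.
awk '{ printf "%-24s %10s %4s\n", $1, $2, $3 }' /proc/modules 2>/dev/null | head -n 5
```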
I should take a moment right now to explain what a module is. The Linux kernel is, as you might imagine, the core of the operating system, containing many of the basic instructions for managing processes and system resources. But, in order to extend the kernel's control over peripheral devices, individual modules can be added or removed without directly affecting the kernel or its operations. We'll learn more about actually loading and removing modules in the next video. While the details shouldn't concern us right now, you should at least be aware of the D-Bus message bus system, which permits integration and proper coordination between processes running desktop - rather than server - applications.
To review: lspci will list all devices connected to the PCI bus, lspci -vvxxx will include more detailed information in its output, and lsmod will display currently loaded kernel modules. By the way, it can be worth knowing that running lshw - preferably with sudo - will output all your hardware specs at once.
Until now, we've discussed the way that Linux manages devices. But you must also be aware of the various tools employed by Linux to handle processes: that is, how access is given to individual programs so they can play nicely with each other and share system resources.
In the beginning there was init. Init acted like an air traffic controller, waving processes through busy air space and determining the order by which waiting programs would be permitted to take off. Init's greatest weakness was that it could only process actions synchronously, which proved very inefficient.
Somewhere around 2006, the developers behind Ubuntu released an asynchronous replacement for init called Upstart, which became the mainstay of some of the biggest distributions in the business. Most, but not all. Over the past few years, after some rather heated debate and more than a few spoiled friendships, Upstart's competitor, systemd, won over so many hearts and minds that even Ubuntu agreed to make the switch. The final move took place with the release of Ubuntu 15.04. We'll talk about some systemd tools later in this series.