The second course in this Linux certification series (the first was a series introduction, and the third will focus on boot and package management) focuses on System Architecture. It explores how Linux works within its hardware environment and how you can use Linux tools to optimize your system for your specific needs.
You'll learn how to identify and manage hardware peripherals, how the Linux boot process and runlevels work, and how you can control them.
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Many of the most painful memories I have of those dark days before I discovered Linux involved trying to figure out why I sometimes couldn't get even integrated sound cards or network interfaces to work. It's possible that the way Microsoft handles such things has improved over the years, but creating an operating system that can anticipate and successfully identify a very wide range of peripheral devices is no small accomplishment. In my experience, Linux has always been ahead of the game: it ships with large module libraries and keeps device modules separate from the kernel itself - which, by the way, means most changes have always been able to complete without the need for a reboot.
Understanding and managing Linux system modules
For a device to work with Linux, it must have a defining module that's been loaded into the kernel. Modules for the vast majority of devices on the market today are already installed on Linux distributions by default. For those modules that aren't already installed, there are excellent management tools we'll see in a moment.
First of all, though, you can list all the modules currently loaded into the kernel using lsmod, but Linux installations will usually also keep all kinds of modules that you might one day need under /lib/modules. For some reason, many systems no longer support the really handy modprobe -l tool, which listed all the modules that were potentially available. Instead, you'll have to use something like the find command we'll build in a moment, where find searches the specified directory for files with the .ko extension.

Let's take a look at /lib/modules so we can better understand what uname -r does. Each of the directories we can see here is named after a specific Linux kernel release. 3.13.0-49 is, right now, the most recent kernel I've got installed on this machine. Wisely, Linux saves older versions of the kernel on the system - along with their device modules - in case something should ever go wrong with an upgrade and we need to step back to a previous point in time.
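If you'd like to follow along, the two commands below are all it takes; the directory names you'll see under /lib/modules will, of course, reflect the kernel releases installed on your own machine.

$ lsmod | head             # the first few modules currently loaded into the kernel
$ ls /lib/modules          # one subdirectory for each installed kernel release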
Uname, as you can see, is a Linux command that outputs the name of the current system which, predictably, is Linux. Uname -r will output the current kernel release. Therefore, if you want to use find to display all the available modules associated with our current kernel, you can use uname -r to insert the kernel release into the directory path.
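Putting that together, a command along these lines - a reasonable stand-in for the missing modprobe -l - will list every module file available for the currently running kernel:

$ find /lib/modules/$(uname -r) -type f -name '*.ko'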
This ability to combine system information with basic commands is part of the power - and the fun - of Linux. So while getting rid of the simple and straightforward modprobe -l option seems to have been just a little bit odd, we're certainly not left without other perfectly usable options.
Now that we've learned how to find modules - both loaded and available - we should also know how to add or remove them. This can be really useful when you're working with very new hardware whose drivers haven't yet made it into standard Linux installations and need to be added manually. It can also save the day when you're working with hardware that's so old it's no longer supported. Some time ago, I found myself working with modules while trying to get a mini-PCI WiFi radio working with some development boards running a very minimal Debian installation. Downloading and adding the right module to the kernel worked nicely - although I should warn you that a kernel upgrade will ignore the manually added module, and you'll need to do it all over again to get your device running.
So let's try it out by installing the lp module. Of course, since it happens to be installed already, we'll have to remove it first. We'll begin with lsmod and grep to confirm that lp is currently loaded. Then, with sudo admin rights, we'll run modprobe -r to remove the module. Another round of lsmod should convince us that it worked. And now we'll reload it, run lsmod once more, and everything is once again just as it should be.
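Here's the whole sequence in one place, more or less as it runs in the video:

$ lsmod | grep lp          # confirm the lp module is currently loaded
$ sudo modprobe -r lp      # unload (remove) the module
$ lsmod | grep lp          # no output this time: the module is gone
$ sudo modprobe lp         # load it back in
$ lsmod | grep lp          # and we're back where we started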
Let's review. You can use lsmod to list all the modules currently loaded into the kernel, and a clever use of the find command will list all available modules. Modprobe followed by the module name (without its .ko extension) will load a module, and modprobe -r will remove it. By the way, insmod is another command you can use to load a module, and rmmod will, like modprobe -r, remove one.

udev, Linux's dynamic device management system, is the tool that, you guessed it, manages devices. You can manually change some udev settings through .rules files kept in one of three directories: /etc/udev/rules.d, /run/udev/rules.d, and /lib/udev/rules.d. Udev will read files from these three directories in that order, with precedence given to the one read earliest. Since /run is a pseudo filesystem that's re-written whenever the computer is rebooted, you can use rules files in /run for changes you want to stay in effect only for the current session.
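If you want to poke around those directories yourself, a single ls will do it (note that some newer distributions keep the system-supplied rules under /usr/lib/udev/rules.d rather than /lib/udev/rules.d):

$ ls /etc/udev/rules.d /run/udev/rules.d /lib/udev/rules.d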
Let's take a look at /etc/udev on my system. You can see my printer's scanner rule - which was provided by Brother itself to allow their all-in-one device to run on Linux. This rule's filename begins with the number 40 - meaning it will be executed by udev before the other two rules, whose numbers are higher. If, for some reason, you ever wanted to make sure that this rule was executed later, you could simply change the number in the filename to, say, 90.
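Changing a rule's position in the sequence is just a matter of renaming the file; the filename below is a made-up stand-in for whatever your actual rule file happens to be called:

$ sudo mv /etc/udev/rules.d/40-example-scanner.rules /etc/udev/rules.d/90-example-scanner.rules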
Let's use cat to actually read the 70-persistent-net.rules file. The file points to my two network interface cards, identifying their MAC addresses and, significantly, assigning them system designations: eth0 and eth1. If you like, you can edit either of these entries to give them new values, say eth3 or em0. As I discovered some time ago, replacing a failed network card might cause udev to give it a designation that's not the same as the one used by the original card. That could cause services to break if you have software that's expecting to find the network through eth0, but that's now been moved to eth3. Editing this rule can return your system to its original, happy working state.
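A typical entry in 70-persistent-net.rules looks something like the line below (the MAC address here is invented); editing the NAME value at the end is all it takes to change the designation:

$ cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:12:34:56", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"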
Since we've mentioned default device designation (like eth0), it's probably a very good time to review the way that Linux assigns names to all its devices. As we've seen, network interfaces are given eth names, beginning with eth0 and moving up. Recently, some systems have changed that to em0, em1 and so on.
Hard disk drives (including solid state drives) are usually named sda, sdb, etc., while individual partitions on a disk are named sda1, sda2, sda3, etc. Floppy drives - if you can still find any - are usually called fd0 and fd1. And CD-ROM or DVD drives will usually be designated sr0 and sr1.
Again, all of these designations are controlled and managed by udev, and have symbolic links that you can find in the /dev directory. If you're unsure how, say, Linux has named a USB data drive you've just inserted, you can view the dmesg log (by running dmesg from the command prompt) and look for recent entries. In this case, we can easily see the USB drive that I just plugged in. lsusb will also list all your USB devices and hubs; the drive I just plugged in is mentioned here as well.
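In practice, a quick check like this is usually enough to identify a newly attached drive (the device names you'll see will vary from system to system):

$ dmesg | tail -20         # recent kernel messages; look for lines mentioning a new sd device
$ lsusb                    # every attached USB device and hub
$ ls /dev/sd*              # the block devices (and partitions) udev has created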
Accessing environment data on AWS instances
Most of what we've seen regarding system hardware in these first two videos of this course will have little connection to the cloud computing world because managing system hardware on an Amazon Web Services virtual machine is largely the responsibility of our host, Amazon. Still, AWS does give you access to a great deal of system information through Instance Metadata, which you can access from a shell session inside the instance using the curl tool and the special IP address, 169.254.169.254.
A request like the one shown below, for instance, will return a list of available metadata categories. Amazon documentation provides plenty of guidance for finding and making use of metadata. And don't worry, we'll learn much more about using ssh to access AWS instances - and other resources - later in this series.
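From a shell session running inside the instance, it looks something like this (note that instances configured to require IMDSv2 will first need a session token, which Amazon's documentation covers):

$ curl http://169.254.169.254/latest/meta-data/              # list the available metadata categories
$ curl http://169.254.169.254/latest/meta-data/instance-id   # retrieve one specific value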
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.
Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.
His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.