"Linux Installation and Package Management" is the third course in this Linux certification series. You'll learn how to control the Linux boot process, how to understand and manage disk partitions and filesystems, and how Linux uses kernel libraries to manage hardware peripherals. We'll also take a closer look at the two most widely used Linux software package management systems: dpkg and rpm.
The previous course in the series covered System Architecture. Next up will be GNU and Unix Commands.
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
The speed and quality of an operating system are largely determined by the way it accommodates the application packages that are meant to run on it. If every office productivity and media viewing package you installed was more or less expected to create its own way of storing and handling files, identifying the local hardware environment, and connecting to the Internet, then there would be far fewer software packages available. And those that did exist would be a great deal larger and slower.
Linux ensures that software developers don't have to reinvent the wheel with each program by providing libraries. Now, rather than, say, programming an interface to the filesystem from scratch, developers can simply have their program call a local library that will do it for them.
Working with Linux shared libraries
There are two kinds of libraries: static and dynamic. A static library is read when a program is initially built, and its contents are incorporated directly into the program's own code. This can make the program a bit larger, but it has the significant advantage of leaving the program independent of external content once it's running. For a system recovery tool, for instance, this can be a huge advantage. A dynamic library, by contrast, must remain available on the system, because it's accessed whenever the running program needs its information. This, obviously, allows for much leaner and faster tools.
When a package is prepared for a Linux system, its maintainers will declare the libraries - or dependencies - that it will need. The curated software repositories that serve the various Linux distributions - and the package management tools that front them, like apt and yum - are aware of these dependencies and will do a great job of making sure that they're all met during the installation process.
However, as an administrator, you will sometimes have to work with software that's not part of the mainstream repositories, and you'll therefore need to handle access to libraries yourself. In this video, we're going to learn how these libraries are designed and how you can work with them for your own projects. As you might imagine, many libraries live in or beneath the /lib directory. On my Ubuntu system, you can see some libraries in /lib itself, and others organized by category in subdirectories. Let's break down a filename to understand how the naming conventions can tell us quite a lot about a library. The file begins with the letters "lib" - telling us that it's a library. The next section - up to the first dot - is the unique library name. "so" means that this is a dynamic (shared object) library - a static library would have the letter "a" instead. The number after the second dot is the library's version.
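The naming convention above can be teased apart mechanically. This is a minimal sketch using shell parameter expansion; "libexample.so.2" is a made-up filename used purely for illustration, but real files like libc.so.6 follow the same pattern:

```shell
#!/bin/sh
# Break a shared-library filename into its parts.
# "libexample.so.2" is a hypothetical name; any libNAME.so.N file works.
f="libexample.so.2"

name="${f%%.*}"       # "libexample" - everything before the first dot
rest="${f#*.}"        # "so.2" - everything after the first dot
kind="${rest%%.*}"    # "so" means dynamic; "a" would mean static
version="${rest#*.}"  # "2" - the library's version

echo "name=$name kind=$kind version=$version"
```

Running this prints name=libexample kind=so version=2, matching the three-part breakdown we just walked through.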
Applying the ldd program to a library will print its dependencies - libraries, you will discover, are often built on other libraries. ldd displays a library's dependencies and the locations of their files. By the way, ldd, when applied to a binary file - like those in /bin - will also display whether the binary is dynamically linked, and if it is, what its dependencies are.
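To make ldd's output easier to work with, you can pull out just the dependency names. In this sketch the input is canned, abridged text of the kind `ldd /bin/ls` typically prints (addresses and paths are illustrative), so it runs anywhere; on a live system you would pipe the real command in instead:

```shell
#!/bin/sh
# Extract only the dependency names from ldd-style output.
# The sample lines are canned, illustrative output, not from a real run.
sample='	libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f1a2c000000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1a2bc00000)'

# awk's first whitespace-separated field is the dependency name itself.
deps=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "$deps"
```

On a real system, `ldd /bin/ls | awk '{print $1}'` does the same job against live output.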
When a new software package prepares to install itself on Linux, it looks to a file in the /etc directory called ld.so.cache for information on all the libraries currently available to the system. Since this file is not human-readable, there's another file in /etc - ld.so.conf - that holds the same information in readable form; it's actually the source from which ld.so.cache is generated. On some systems, ld.so.conf will contain thousands of lines of data, but at least on Ubuntu and Fedora, it contains only an include pointing to the /etc/ld.so.conf.d directory, which, in turn, contains files pointing to libraries kept elsewhere on the system, like /usr/lib. Either way, with this information, you will know how to find what's available to you.
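The pointer structure described above - a top-level conf file including a directory of smaller conf files, each naming a library directory - can be sketched without touching the real /etc at all. Everything below is staged in a temporary directory, and the file and path names are illustrative:

```shell
#!/bin/sh
# Simulate the ld.so.conf -> ld.so.conf.d/*.conf pointer structure in a
# temp directory, so nothing under the real /etc is touched.
root=$(mktemp -d)
mkdir "$root/ld.so.conf.d"
echo "include $root/ld.so.conf.d/*.conf" > "$root/ld.so.conf"
echo '/usr/lib/x86_64-linux-gnu' > "$root/ld.so.conf.d/libc.conf"

# Follow the include by hand: gather every path the .conf files list.
paths=$(cat "$root"/ld.so.conf.d/*.conf)
echo "$paths"
rm -r "$root"
```

On a real Ubuntu or Fedora box, `cat /etc/ld.so.conf /etc/ld.so.conf.d/*.conf` shows the same structure with live data.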
You can also list the names and locations of all your currently installed shared dynamic libraries by running ldconfig -p. Although, since I have more than 1,700 libraries right now, that might be a lot of reading. As always, you can use our old friend grep to make our lives easier. So, if we were looking to see if anything related to libQtGui might be available, we would run ldconfig -p | grep libQtGui...which will be much more manageable.
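Here's that filtering step as a self-contained sketch. The sample lines mimic the format `ldconfig -p` prints, so the pipeline runs anywhere; on a live system you would simply run `ldconfig -p | grep libQtGui`:

```shell
#!/bin/sh
# Filter a library listing with grep. The sample text imitates
# `ldconfig -p` output; the paths shown are illustrative.
sample='	libQtGui.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libQtGui.so.4
	libQtCore.so.4 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libQtCore.so.4
	libz.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libz.so.1'

matches=$(printf '%s\n' "$sample" | grep libQtGui)
echo "$matches"
```

Only the libQtGui line survives the filter - out of 1,700-plus real entries, that's the difference between a screenful and a single line.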
If you manually add or change a library, you'll have to give Linux the good news. You can do this either by adding your library's directory to the LD_LIBRARY_PATH environment variable, or by creating a file in /etc/ld.so.conf.d/ called something like my_lib_name.conf, in which you simply point to the directory that will host your library. You then need to run ldconfig -v to update Linux's list - where -v tells ldconfig to be verbose and keep you up on what it's doing.
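Those registration steps look like this in practice. To stay runnable without root, this sketch stages the conf file in a temporary directory; /opt/myapp/lib is a hypothetical path standing in for wherever your library actually lives, and on a real system the file would go in /etc/ld.so.conf.d/ followed by `sudo ldconfig -v`:

```shell
#!/bin/sh
# Register a custom library directory - staged in a temp dir so the
# sketch runs without root. On a real system the file goes in
# /etc/ld.so.conf.d/ and you'd finish with: sudo ldconfig -v
confdir=$(mktemp -d)

# /opt/myapp/lib is a hypothetical library location.
echo '/opt/myapp/lib' > "$confdir/my_lib_name.conf"

registered=$(cat "$confdir/my_lib_name.conf")
echo "registered: $registered"

# Session-only alternative, no root needed:
#   export LD_LIBRARY_PATH=/opt/myapp/lib:$LD_LIBRARY_PATH
rm -r "$confdir"
```

The LD_LIBRARY_PATH route shown in the comment affects only the current shell session, which makes it handy for testing a library before registering it system-wide.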
To review: shared libraries provide important system information to software packages. Dynamic libraries - identifiable by the letters "so" - can be called whenever a program runs, while static libraries - taking an "a" rather than "so" - are incorporated into a program when it's built. Most libraries are kept in or below /lib. ldd will display the dependencies and file locations of either a binary program or a library. /etc/ld.so.cache is where software packages look for a list of system libraries, while we humans can get that information through /etc/ld.so.conf, which points to the files in /etc/ld.so.conf.d/. ldconfig -p will display all current libraries, while ldconfig by itself will update the official library list.
David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.
Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.
Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.
His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.