LPIC-1 102 Linux certification - Linux Administration (3 of 6)

Job Scheduling

Overview
Difficulty: Intermediate
Duration: 27m
Students: 494

Description

In this brief course - the third of six covering the LPIC-1 102 exam and the eighth of eleven courses making up the full LPIC-1 series - we will work with some key areas of Linux administration: managing users and groups, scheduling the execution of unattended tasks, and maintaining accurate and efficient system environment variables and localisation.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

While you can certainly execute a Linux process directly from the command line or through a script, there will definitely be times when that's not practical. Perhaps, for instance, you'd like to make sure that your server data is backed up late each night at a time when no one is likely to be logged in and working, for when you yourself are planning to be comfortably sleeping in bed or perhaps you just want to make sure that the regular backups actually happen and aren't forgotten.

Scheduling Linux tasks with cron and at

Whatever the need, unattended jobs can easily be scheduled using either the cron system or the at command. But first, let's talk about cron. The key file holding the whole thing up is crontab, which lives in the /etc directory. crontab is essentially a table of scheduled commands, and its default entries regularly run the scripts found in the cron.daily, cron.weekly, and cron.monthly directories, all of which exist in /etc. As you can easily guess on your own, scripts that have been saved to cron.daily are meant to be run each day, those in cron.weekly each week, and cron.monthly scripts once a month.

If you'd like your script to run once a day and you're not particular about exactly when it runs, then you'd simply save the script to the /etc/cron.daily directory, make sure it's executable using chmod, and crontab will do its magic from then on. Let's take a closer look at the contents of my crontab file. You'll notice that the first line is commented out with the hash symbol, but it provides the column headers. The first column, therefore, represents how many minutes into the hour you'd like the command that follows to execute. In the case of the first command, that would be 17 minutes. The second column, h, describes in which hour of the day you'd like it to happen. The second through fourth lines will all run in the sixth hour of the day, but our first command, because it contains a star, will execute in the 17th minute of every hour of the day. DOM stands for day of the month; again, the star means that it will happen every day. The MON column sets the month, and, again, the star tells cron that it should happen in every month. The final scheduling column sets the day of the week, meaning that even if you set the command to execute on every calendar day of the month, it will only run when that day falls on a specified day of the week.
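That "save it to cron.daily and chmod it" workflow can be sketched safely like this; the script name and contents are illustrative, and a throwaway directory stands in for /etc/cron.daily so you can run the sketch without touching the real system:

```shell
#!/bin/sh
# Sketch: prepare a script the way you would for /etc/cron.daily.
# A temporary directory stands in for /etc/cron.daily; "backup" is a made-up name.
dir=$(mktemp -d)

cat > "$dir/backup" <<'EOF'
#!/bin/sh
echo "backing up"
EOF

chmod +x "$dir/backup"       # the chmod step: cron only runs executable files
[ -x "$dir/backup" ] && echo "executable: yes"
"$dir/backup"                # prints: backing up

rm -rf "$dir"
```

One detail worth knowing: Debian-family run-parts skips file names containing a dot, so a script saved as backup.sh in /etc/cron.daily would silently never run - drop the extension.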

As you can see, the cron.weekly job will only execute on day seven, which is Sunday. Note, by the way, that crontab expects minutes in an hour to be numbered from 0 to 59, hours in a day from 0 to 23, days of the month from 1 to 31, and days of the week from 0 to 6. In this last case, however, Sunday can be represented by either zero or seven.
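For reference, the file being described resembles the stock Debian/Ubuntu /etc/crontab; the exact minute values vary by installation, so treat this as a representative sketch rather than a verbatim copy:

```shell
# m  h  dom mon dow user  command
17 *   * * *   root  cd / && run-parts --report /etc/cron.hourly
25 6   * * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6   * * 7   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6   1 * *   root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
```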

Now let's look at the commands themselves. In each case, the system will cd to the root directory and run all the files in the indicated directory using run-parts. The --report flag tells run-parts to print the name of each script that produces output. The final three lines first test whether an executable anacron exists; only if it does not will they run the files in /etc/cron.daily, cron.weekly, and cron.monthly.

Although the LPIC exam in its current form doesn't yet require this, you should be aware that crontab's functions can now also be performed on systems running systemd by systemd timers.

You can't completely understand the cron system without being aware of a few other configuration files. The files found in the /etc/cron.d directory, for instance, work much the same way as crontab entries, but they also have a user field - populated by root in this case. This is because cron.d entries are run as whichever user is specified, rather than always as root.

Working with anacron jobs

Finally, the /etc/anacrontab file is the configuration file for anacron. anacron runs regular jobs pretty much the same way that cron does, but it doesn't assume that your computer will always be on. So if, for example, you have a regularly scheduled cron task that's meant to run in the middle of the night, but you often turn your computer off for the night, then cron won't be very helpful for you. To deal with this, the anacrontab file doesn't offer you the option of scheduling tasks for specific times during the day, but rather once daily, weekly, or monthly; anacron will take care of the rest. Each line in the anacrontab begins with two columns of numbers. The first is the number of days between executions: one, obviously, indicates that the job is to be executed each day, and seven that it should run every week. The @monthly value is a special value indicating one execution per month.

The second column tells anacron how many minutes to wait after a system boot before executing the command. The third column is the job identifier, and the final column contains the command line itself, which, in this case, is no different from what we found in the crontab file.
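Putting those four columns together (period in days, delay in minutes after boot, job identifier, command), a typical /etc/anacrontab looks roughly like this; the delay values are illustrative:

```shell
# period(days)  delay(min)  job-identifier  command
1               5           cron.daily      run-parts --report /etc/cron.daily
7               10          cron.weekly     run-parts --report /etc/cron.weekly
@monthly        15          cron.monthly    run-parts --report /etc/cron.monthly
```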

It's time to review what we've seen so far. Scripts can be executed on a schedule set in the /etc/crontab file; schedules are set by minute, hour, day of the month, month, and day of the week. Scripts found in the cron.daily, cron.weekly, and cron.monthly directories are traditionally executed either through crontab or through the /etc/anacrontab file, which runs jobs a set time after system boot. Jobs associated with specific users besides root can be executed through the /etc/cron.d directory.

Now let's discuss at. If you need to run a process just once, but you're not going to be around to start it manually, you can use the at command to schedule it. I should note that at is not always installed by default, but it can be added to your system using apt-get install at.

An at job is set up in two stages: you first set the time you want the job to run, and then the job details themselves. Scheduling is actually completely intuitive. Your scheduling command begins with the word at, which can be followed by an execution time that's relative to the current time. So, for instance, you can type at now + 10 minutes, which will start the process exactly 10 minutes from now. You can also use absolute local system times like 1430, meaning 2:30PM, which will run the command the next time the system clock hits 14:30 - whether that's later today or, if today's 14:30 has already passed, tomorrow. Plain English expressions will also work. Thus, at noon will run at 1200 hours, at midnight at 0000 hours, and at teatime, believe it or not, will run at 1600 hours, which is when those under the influence of her Britannic Majesty's Commonwealth take their mid-afternoon break. at will also understand 2pm tomorrow or 5pm today.
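Here are those scheduling expressions as you'd actually type them. Each one drops you into at's interactive prompt, and they require the at package and a running atd, so treat this as a sketch of the syntax rather than something to paste blindly:

```shell
$ at now + 10 minutes     # relative to the current time
$ at 1430                 # next time the clock reads 14:30
$ at noon                 # 12:00
$ at midnight             # 00:00
$ at teatime              # 16:00
$ at 2pm tomorrow
```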

Now that we know how to set the schedule, let's create an actual job. We'll use at 1330 to order up a job for 1:30 this afternoon. Now, we'll simply type a command - say, a script located in my home directory called logrotate.sh. You can hit Enter if you'd like to add another line of commands or, when you're done, Ctrl+D to finish the job configuration. Now, if nothing changes and, assuming that my computer will actually be running at 1:30, my script will be executed.

We can list all pending jobs with their ID numbers using atq, and we can remove a job using atrm followed by the job's ID. batch works much the same way as at, but it will only execute its job once system load levels permit. Not that I suspect you ever would, but be careful not to confuse our Linux batch with the old DOS batch file scripts, which of course weren't nearly as much fun.
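The whole create, list, remove cycle can also be driven non-interactively by piping the command into at; the logrotate.sh script and the job number 5 are illustrative:

```shell
$ echo 'sh ~/logrotate.sh' | at 1330    # schedule without the interactive prompt
$ atq                                   # list pending jobs with their ID numbers
$ atrm 5                                # remove the job whose ID is 5
```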

The directories in /var/spool/cron are where pending at or crontab jobs belonging to regular users are kept. We'll need root privileges to enter these directories, so I'll run sudo su. You can control which users can submit at jobs through the at.allow or at.deny files in the /etc directory. at.allow is a whitelist, meaning that if it exists, only those users listed will be permitted to use at. If at.allow does not exist, then the at.deny file acts as a blacklist, preventing any user listed from using at and permitting everyone else. You can see that my at.deny file contains a long list of mostly system users, but no normal users are blocked. The cron.allow and cron.deny files provide similar access control for cron jobs.
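That allow/deny precedence can be sketched as a small shell function. This is only an illustration of the decision logic - the real check lives inside at itself, the directory parameter and user names here are made-up stand-ins for /etc and real accounts, and the no-files fallback varies by distribution:

```shell
#!/bin/sh
# Sketch of at's access-control precedence: at.allow wins if present,
# otherwise at.deny applies; with neither file, many distros allow only root.
can_use_at() {
  user="$1"; conf="$2"                      # conf stands in for /etc
  if [ -f "$conf/at.allow" ]; then
    grep -qx "$user" "$conf/at.allow"       # whitelist: only listed users allowed
  elif [ -f "$conf/at.deny" ]; then
    ! grep -qx "$user" "$conf/at.deny"      # blacklist: listed users blocked
  else
    [ "$user" = "root" ]                    # fallback: varies by distribution
  fi
}

conf=$(mktemp -d)
printf 'games\nnobody\n' > "$conf/at.deny"  # a deny list of system users
can_use_at alice "$conf" && echo "alice: allowed"
can_use_at games "$conf" || echo "games: denied"
rm -rf "$conf"
```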

Let's review the at command. You first schedule a job using relative time designations, like now + 20 minutes or now + 2 hours, or absolute designations, like 2:30PM, noon, or 1445. Once you've scheduled a job, you can enter just about any command or list of commands on the command line, then hit Ctrl+D to finish. atq will list all currently pending jobs, while atrm followed by the job ID will delete a pending job. at and crontab jobs belonging to regular users are kept in /var/spool/cron, and access to the at and cron systems can be controlled through the at.deny, at.allow, cron.deny, and cron.allow files.

About the Author


David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.
