This course provides an overview of NGINX, its primary use cases, and key features.
This course is intended for people who need to be familiar with NGINX use cases and the high-level capabilities it brings to its end users.
We start by exploring the origins of NGINX in the "Getting Started Module".
Next, we learn how to go about installing NGINX on various operating systems and with different options.
In Modules 2, 3, and 4, we explain the basics of the NGINX configuration language and how to set up NGINX as a web server.
In order to get the most out of the course, you need to have a general understanding of web servers and how they work. To work with NGINX Plus on the command line, you need to be familiar with Linux and how to move between directories on the command line. You also need to be able to edit files with a command-line editor such as nano, vi, or vim. Our labs use the vim editor. You need to understand the basics of HTTP and TCP/IP, and you should also have a basic knowledge of networking.
After completing this course you will be able to:
- Describe the most common use cases of NGINX
- Describe the differences between NGINX F/OSS and NGINX Plus
- Execute basic NGINX commands
- Locate your NGINX configuration file(s)
- Describe the role of contexts, blocks, and directives in your configuration file(s)
- Identify the server block that will respond to a request
- Identify the location block that responds to a request
- Identify location processing rules
- Configure a simple web server that serves static pages and images
- [Instructor] Welcome to Module Two of the NGINX Core eLearning course. This module covers Exploring Configuration Contexts. After completing this module, you will be able to execute basic NGINX commands, locate your NGINX configuration file, and describe the role of contexts, blocks, and directives in your configuration file.

The configuration files have a file extension of .conf. These files are where we define directives to control the behavior of NGINX. For example, to configure a reverse proxy, we would use the proxy_pass directive to indicate where to send client requests. The main configuration file, nginx.conf, is located in /etc/nginx. This file contains a set of default directives when NGINX is installed. Although you can modify the behavior of your NGINX instance by editing this file, most users create additional configuration files and place them in the /etc/nginx/conf.d directory.

These are some of the most basic and frequently used NGINX commands. The nginx -v command returns the version of NGINX you are running. The nginx -t command (lowercase t) performs a syntax check on your current configuration files and reports any issues in the terminal. The nginx -T command (capital T) prints the configuration currently in use by NGINX, again in the terminal. One note: if your user does not have root privileges, use sudo when executing NGINX commands or editing the configuration files.

You reload NGINX in order to load a configuration change. The reload command, nginx -s reload, starts this process, but first checks the configuration syntax. If there is an error, the reload does not happen and you are notified on the command line; in this case, NGINX keeps the previous configuration it holds in memory. This makes the reload command safe. If the configuration syntax is correct, the reload does happen: NGINX sends a SIGHUP signal to the master process, which starts new worker processes with new process IDs.
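As a sketch of how those conf.d files get picked up: the default nginx.conf that ships with most packaged installs ends its HTTP context with an include directive along these lines (the exact layout may vary by distribution):

```nginx
# /etc/nginx/nginx.conf (illustrative excerpt)
http {
    # ...default http-level directives...

    # Pull in additional configuration files from conf.d;
    # this is why custom server blocks placed there take effect.
    include /etc/nginx/conf.d/*.conf;
}
```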
The reload command also does not drop any active connections. The master process forks a new set of worker processes to handle new connections, while the old worker processes complete their tasks and shut down gracefully. Because the worker process IDs change each time NGINX reloads, you have this information in the logs, which can assist in troubleshooting.

A configuration file consists of many combinations of contexts and directives. Contexts in NGINX include main, events, HTTP, server, and location, as well as others outside the scope of our discussion here. Contexts are containers for directives, and NGINX contexts form a hierarchy. We focus primarily on the server and location contexts in this course. Together, the server and location contexts configure how NGINX responds to specific HTTP requests. This is not a complete list of contexts; there is a stream context at the same level as the HTTP context, for instance. Stream and other contexts are out of scope for this course.

Let's take a closer look at each of these configuration contexts. The main context is where we define the highest-level directives for an NGINX instance. Here you set things like the number of worker processes, the Linux username, the location of the process ID file, and the log file location. In this beginning course, we use the default configuration already set in this context. The events context is used to manage connection-processing directives, for example, the number of connections per worker process. In this beginning course, we use the defaults for this context. The HTTP context defines how NGINX handles HTTP and HTTPS connections. For example, we can set the addresses for a pool of backend application servers that NGINX proxies to. Directives within the HTTP context are inherited by its child contexts: upstream, server, and location. The server context defines a virtual server, also known as a virtual host, which processes a given HTTP request.
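The context hierarchy described above can be sketched as a skeleton configuration (the directive values here are illustrative defaults, not recommendations):

```nginx
# Main context: top-level directives, outside any braces
user              nginx;
worker_processes  auto;
pid               /var/run/nginx.pid;

events {
    # Connection-processing directives
    worker_connections  1024;
}

http {
    # HTTP-level directives here are inherited by server and location below

    server {
        # A virtual server that processes HTTP requests

        location / {
            # How this virtual server handles a matching request URI
        }
    }
}
```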
The virtual server definition can be a domain name, an IP address, or a Unix socket.
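For example, the three kinds of virtual server definitions might look like this (the names and addresses are placeholders):

```nginx
# Matched by domain name
server {
    listen       80;
    server_name  www.example.com;
}

# Matched by IP address
server {
    listen  203.0.113.10:80;
}

# Bound to a Unix socket
server {
    listen  unix:/var/run/nginx.sock;
}
```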
The location context further defines how the virtual server processes an HTTP request based on the request URI. For example, a location context can point to a path on a file system. A location can also be selected by matching the request URI against a string defined in that context. The upstream context defines a group of backend application servers or web servers for NGINX to use in a load-balancing use case. The stream context defines how NGINX handles Layer 4 network traffic such as TCP or UDP connections.

A directive is a single statement that controls a given NGINX feature. A block is a grouping of directives enclosed in curly braces. Here we show a server context, or block, with both a listen and a root directive. We will define these directives in more detail later, but the listen directive tells NGINX to listen on port 80, and the root directive points to the web server's content. Directives may or may not have parameters. The following are a few listen directive parameters. These parameters define how NGINX handles incoming traffic. There are also parameters for other directives, for example the upstream server directive, the proxy_cache directive, and more.

There are three directive types. The standard directive is the most commonly used type. An array directive can accept a list of multiple values. A command directive executes in the current context. The listen and root directives are common examples of standard directives. The access_log directive is a good example of an array directive. The proxy_pass directive is a common example of a command-type directive.

Directives are often valid in a variety of contexts. When a standard or array type directive is placed in a context, child contexts inherit its configuration. As such, an index directive placed in the HTTP context will apply to all server and location contexts beneath it.
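Putting these pieces together, here is a sketch of a server block using the directives discussed: listen, root, locations, and an upstream with proxy_pass (the upstream name and backend addresses are placeholders):

```nginx
# Upstream context: a named group of backend servers for load balancing
upstream app_backend {
    server 10.0.0.11:8080;   # placeholder backend addresses
    server 10.0.0.12:8080;
}

server {
    listen  80;                       # accept traffic on port 80
    root    /usr/share/nginx/html;    # path to the web server's content

    # Location matched against the request URI by prefix string;
    # files are served from the file system under root
    location /images/ {
    }

    # proxy_pass (a command-type directive) forwards matching
    # requests to the upstream group defined above
    location /app/ {
        proxy_pass http://app_backend;
    }
}
```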
If a directive exists in both a parent and child context, the directive in the child context overrides the directive inherited from the parent. As such, if there are index directives in both the HTTP and location contexts, NGINX will use the configuration from the location context.
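That inheritance rule can be illustrated with the index directive (the file names are illustrative):

```nginx
http {
    index index.html;           # inherited by every server and location below

    server {
        listen 80;

        location / {
            # No index here: inherits index.html from the http context
        }

        location /reports/ {
            index report.html;  # overrides the inherited value in this location
        }
    }
}
```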
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.