This course provides an overview of NGINX, its primary use cases, and key features.
This course is intended for people who need to be familiar with NGINX use cases and the high-level capabilities it brings to its end users.
We start by exploring the origins of NGINX in the "Getting Started Module".
Next, we learn how to install NGINX on various operating systems and with different options.
In Modules 2, 3, and 4, we explain the basics of the NGINX configuration language and how to set up NGINX as a web server.
In order to get the most out of the course, you need to have a general understanding of Web servers and how they work. To work with NGINX Plus on the command line, you need to be familiar with Linux and how to move between directories on the command line. You also need to be able to edit files with a command-line editor such as nano, vi or vim. Our labs use the vim editor. You need to understand the basics of HTTP and TCP/IP, and you should also have a basic knowledge of networking.
After completing this course you will be able to:
- Describe the most common use cases of NGINX
- Describe the differences between NGINX F/OSS and NGINX Plus
- Execute basic NGINX commands
- Locate your NGINX configuration file(s)
- Describe the role of contexts, blocks, and directives in your configuration file(s)
- Identify the server block that will respond to a request
- Identify the location block that responds to a request, apply location-processing rules, and configure a simple web server that serves static pages and images
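To make the configuration objectives above concrete, here is a minimal sketch of an NGINX configuration file showing how contexts, blocks, and directives fit together. The file paths and server name are typical defaults used for illustration and may differ on your distribution:

```nginx
# Directives in the main (top-level) context apply to the whole process.
worker_processes auto;

events {                       # events context
    worker_connections 1024;
}

http {                         # http context
    server {                   # server block: chosen by listen port and server_name
        listen      80;
        server_name www.example.com;

        location / {           # location block: matched against the request URI
            root  /usr/share/nginx/html;
            index index.html;
        }

        location /images/ {    # prefix match for static images
            root /usr/share/nginx;
        }
    }
}
```

For the command-line objective, the basic commands covered later in the course include `nginx -t` (test the configuration), `nginx -T` (test and dump the full configuration, which also helps you locate your configuration files), and `nginx -s reload` (reload the configuration without dropping connections).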
- [Lecturer] Welcome to the NGINX Core e-learning course. In order to get the most out of the course, you need to have a general understanding of web servers and how they work. To work with NGINX Plus on the command line, you need to be familiar with Linux and how to move between directories on the command line. You also need to be able to edit files with a command-line editor such as nano, vi, or vim. Our labs use the vim editor. You need to understand the basics of HTTP and TCP/IP, and you should also have a basic knowledge of networking.
- [Lecturer] After completing this introductory course, you will be able to describe primary NGINX use cases, install NGINX, describe the role of context and directives in NGINX, manage server and location selection, and serve static content.
- [Lecturer] Agenda. This course provides an overview of NGINX, its primary use cases, and key features. It's intended for people who need to be familiar with NGINX use cases and the high-level capabilities it brings to its users. You'll start by exploring the origins of NGINX in the Getting Started module. Next, you'll learn about installing NGINX on various operating systems and with different options. Modules two, three, and four help you understand the basics of the configuration language of NGINX and how to set it up as a web server.

What is NGINX? In this module, let's take a quick look at the different things that NGINX can be used for. After completing this module, you will be able to describe NGINX's most common use cases and describe the differences between NGINX free/open source and NGINX Plus. The free version of NGINX is an open source web server, reverse proxy server, cache server, load balancer, media server, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email traffic, and as a reverse proxy and load balancer for HTTP, TCP, and UDP servers. The enterprise version of NGINX, also known as NGINX Plus, has exclusive production-ready features on top of what's available in the open source offering, including status monitoring, active health checks, a configuration API, and a live dashboard for metrics. Igor Sysoev wrote NGINX to solve the C10K problem. The C10K problem was a concurrency problem with web servers in the early 2000s. During that time, web servers had difficulty handling 10,000 or more concurrent connections due to blocking multi-threaded architectures. Igor solved the problem by building an event-driven solution that can handle thousands of concurrent connections. In 2004, the NGINX source code became available. 
After many changes and additions from the open source community, Igor founded the company in 2011 and is the current acting CTO. In 2013, NGINX released its first enterprise product, NGINX Plus. It still boasts the same raw performance and scalability benefits of the open source product with additional key enterprise features. Today, NGINX powers more than 60% of the world's busiest websites. The NGINX application platform focuses on application delivery control. This begins with NGINX Plus. In this context, NGINX Plus functions as a content cache, load balancer, API gateway, and web application firewall. You use NGINX Plus at the edge of your applications to provide these services. The second piece of the NGINX application platform is NGINX Unit. Unit is an open source application server built to meet the demands of distributed applications. With Unit, you can deploy configuration changes with no service disruptions and run code in multiple languages simultaneously. The third piece of the NGINX application platform is the NGINX Controller. NGINX Controller is a centralized monitoring and management platform. With Controller, you can manage multiple NGINX Plus nodes from a single location. Using an intuitive graphical user interface, you can create new instances of NGINX Plus and centrally configure features like load balancing, URI routing, and SSL termination. Controller allows you to set up alerts, dashboards, and other monitoring capabilities to help you monitor application health and performance. The following slides are individual use cases for NGINX Plus. One of NGINX's most basic use cases is a web server. A web server's fundamental job is to accept and fulfill requests for static content hosted on a website. This includes things like HTML pages, files, images, videos, and so on. The requester is almost always a browser or a mobile application. The request takes the form of an HTTP message, as does the web server's response. 
NGINX as a web server delivers static content fast and efficiently. It can handle hundreds of thousands of clients simultaneously while using up to 90% less memory than other web servers. NGINX can also be used as a highly efficient reverse proxy server. A reverse proxy server typically sits behind the firewall in the private network and directs client requests to the appropriate backend servers. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers. You can think of a reverse proxy as a website's public face. Its address is the one advertised for the website, and it sits at the edge of the site's network to accept requests from web browsers and mobile apps for the content hosted at the website. The benefits are two-fold. First, a reverse proxy increases security because no information about your backend servers is visible outside your internal network. This means malicious clients cannot access your servers directly in order to exploit any vulnerabilities. For example, NGINX can be configured to help protect backend servers from distributed denial of service attacks by rejecting traffic from particular client IP addresses or by limiting the number of connections that can be accepted from each client. The second benefit is that a reverse proxy improves flexibility and scalability. Because clients see only the reverse proxy's IP address, you are free to change the configuration of your backend infrastructure. This is useful if your backend IP addresses change frequently. NGINX can also be configured as a world-class load balancer. Load balancing distributes workload across multiple servers. For a web application, this means distributing HTTP requests across a pool of application servers. Load balancing provides two main benefits. Load balancing lets you scale your web application beyond what you could handle with a single server. 
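The reverse-proxy behavior described above can be sketched in configuration. This is an illustrative fragment, not a production setup: the backend address and the connection limit are invented for the example, and the connection limit shows one way to implement the per-client restriction mentioned in the denial-of-service discussion:

```nginx
http {
    # Track concurrent connections per client IP in a 10 MB shared zone.
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen      80;
        server_name www.example.com;   # the proxy's public face

        location / {
            limit_conn perip 10;                 # cap each client at 10 connections
            proxy_pass http://192.168.1.10:8080; # internal backend, never exposed
            proxy_set_header Host      $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

Because clients only ever see the proxy's address, the `proxy_pass` target can be changed at will without affecting them.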
Load balancing also provides redundancy so that if a server fails, other servers step in to keep your applications running. There are many ways to configure NGINX Plus load balancing to improve performance. Features include SSL termination, rate limiting, HTTP keepalives, compression, session persistence, and caching. NGINX Plus can also load balance additional protocols and applications such as databases, domain name servers, and authentication servers. NGINX Plus content caching improves the efficiency, availability, and capacity of backend servers. When caching is turned on, NGINX checks to see if a request for cacheable content can be served from the cache. If a piece of content is available in the cache, NGINX serves the request without having to connect to backend servers to retrieve the content. If not, NGINX requests the content from the backend server, adds it to the cache, and serves the requested content to the client. Content caching improves the load times of webpages by reducing the load on your backend servers. Cached content can be served at the same speed as static content. Caching also improves content availability because cached content can be used as a backup if your origin servers fail or cease to respond. Finally, caching increases your site's capacity by offloading repetitive tasks from the backend servers. This frees the backends to complete more tasks. When you have an application developed using the Microservices Reference Architecture, it may have a service mesh infrastructure layer, which makes the communication between microservices flexible, reliable, and fast. A service mesh is often implemented using an NGINX sidecar proxy for each service instance as indicated in this diagram. The NGINX sidecar proxy handles anything that can be abstracted away from the individual services such as interservice communication, monitoring, and security-related concerns. 
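The load-balancing and caching behavior just described can be combined in one configuration. This is a hedged sketch with invented backend addresses and illustrative cache parameters; the `backup` server demonstrates the redundancy point, and the `proxy_cache` directives implement the check-cache-then-fetch flow from the narration:

```nginx
http {
    upstream app_servers {                 # pool of application servers
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
        server 10.0.0.3:8080 backup;       # steps in only if the others fail
    }

    # Where cached responses are stored; sizes here are examples.
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

    server {
        listen 80;

        location / {
            proxy_cache       app_cache;        # serve from cache when possible
            proxy_cache_valid 200 10m;          # keep successful responses 10 minutes
            proxy_pass        http://app_servers; # otherwise fetch from the pool
        }
    }
}
```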
Within the service mesh, we refer to two planes of operation: the control plane and the data plane. The control plane is where service instances are created, terminated, and managed as a whole. The data plane is where the work is getting done. This is where the sidecar proxies reside. Each sidecar proxy is dedicated to a specific service instance and communicates with other sidecar proxies. An API gateway is a single point of entry for client API requests. The NGINX API gateway provides that single consistent entry point for multiple APIs regardless of how they are implemented in or deployed to the backend. Not all APIs relate to microservice applications. Our API gateway manages existing APIs, monolithic applications, and applications undergoing a partial transition to a modern Microservice Reference Architecture. The NGINX API gateway provides the following benefits. NGINX performs request routing, routing clients to their API endpoint based on the request URI. NGINX improves performance by caching common responses to reduce load on API endpoints. NGINX manages and secures APIs without the high deployment and operational costs of a full API management platform. NGINX does API authentication using JSON web token validation. The API gateway protects APIs from being overwhelmed by applying rate limits and bandwidth limits to requests. Finally, NGINX is customizable. It uses the NGINX JavaScript module (njs, formerly nginScript) or Lua modules for server-side scripting in order to customize NGINX Plus to your unique API needs. Another use case for NGINX is as a Kubernetes Ingress Controller. Kubernetes is an open source container scheduling and orchestration system originally created by Google, which grew out of Google's internal Borg system. Kubernetes automatically schedules containers to run evenly among a cluster of servers, abstracting this complex task from developers and operators. 
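The API-gateway benefits listed above, URI-based routing and rate limiting in particular, can be sketched as follows. The endpoint names, backend addresses, certificate paths, and limits are all invented for illustration:

```nginx
http {
    # Allow each client IP an average of 10 requests/second.
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    upstream orders_api  { server 10.0.1.1:8080; }
    upstream catalog_api { server 10.0.2.1:8080; }

    server {
        listen 443 ssl;
        server_name api.example.com;
        ssl_certificate     /etc/nginx/ssl/api.crt;   # example paths
        ssl_certificate_key /etc/nginx/ssl/api.key;

        location /orders/ {                    # route by request URI
            limit_req zone=api_limit burst=20; # absorb short bursts, then throttle
            proxy_pass http://orders_api;
        }

        location /catalog/ {
            limit_req zone=api_limit burst=20;
            proxy_pass http://catalog_api;
        }
    }
}
```

The JSON Web Token validation mentioned in the narration is an NGINX Plus feature, configured with the `auth_jwt` directive rather than anything shown here.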
The NGINX Ingress Controller for Kubernetes provides enterprise-grade delivery services for Kubernetes applications with benefits for users of both open source NGINX and NGINX Plus. With the NGINX Ingress Controller for Kubernetes, you get basic load balancing, SSL termination, support for URI rewrites, and backend SSL encryption. NGINX Plus users also get session persistence for stateful applications and JSON web token authentication for APIs. Finally, NGINX can act as a web application firewall in combination with the ModSecurity dynamic module. The NGINX WAF is our build of the well-known and respected ModSecurity software. NGINX WAF improves web application security. It focuses on HTTP traffic and inspects all parts of a request for malicious content, known attack vectors, or any other known anomalies. A suspicious packet can be blocked and/or logged depending on your configuration. ModSecurity is available as open source software and is used by over one million websites globally. It is one of the most well-known and trusted names in web application security. ModSecurity uses a database of rules that define malicious behaviors. The NGINX Plus with ModSecurity WAF supports the OWASP ModSecurity Core Rule Set, guarding against attacks such as SQL injection, cross-site scripting, local file inclusion, and other types of attack vectors. It also guards against scanners and bots, DDoS attacks, dangerous IPs, and more. Thank you for completing What Is NGINX. In the next module, you'll install NGINX and NGINX modules.
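The ModSecurity WAF integration described in this module can be sketched with the ModSecurity NGINX connector. File paths here are illustrative, and the rule set (such as the OWASP Core Rule Set referenced by `main.conf`) must be installed separately:

```nginx
# Load the dynamically built ModSecurity connector module.
load_module modules/ngx_http_modsecurity_module.so;

http {
    server {
        listen 80;

        modsecurity on;                                      # inspect HTTP traffic
        modsecurity_rules_file /etc/nginx/modsec/main.conf;  # points at the rule set

        location / {
            root /usr/share/nginx/html;
        }
    }
}
```

With this in place, requests matching the rule set's definitions of malicious behavior are blocked and/or logged according to the rules' actions.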
About the Author
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.