In this lesson, we will cover the high-level architecture of Chef, including its three core components: workstations, the Chef Server, and nodes.
We will define what workstations are in Chef and how they are used to manage servers.
You will learn at a high level about the Chef Server and Chef’s client-server architecture, in which the Chef Server is the server component and the nodes are its clients. We will explain what the server component is and what it is responsible for. Then, we will get into nodes, the machines running the chef-client software.
We will get a bit more granular in defining the three components, and provide a walkthrough of each:
- Workstations: a computer with the Chef Development Kit installed, which includes: the “chef” executable, the chef-client, ohai, knife, Kitchen, and Berkshelf.
- Chef Server: a RESTful API backed by a Postgres database that allows authenticated entities to interact with the different endpoints.
- Nodes: a generic name for a device that is capable of running the chef-client, which could be a physical server, an on-prem or cloud-based virtual machine, a network device such as a switch or router, and even containers.
Welcome back! In this lesson I’ll cover the high level architecture of Chef.
I’ll cover the three core components of Chef, which are Workstations, the Chef Server, and nodes.
Let’s start with a high-level overview of these components before drilling into each in more depth.
In Chef terminology a Workstation is a computer where the Chef Development Kit is installed. It’s where you actually write and test the code that Chef uses to manage servers. It’s also used to interact with the Chef Server.
So, what exactly is the Chef Server?
Chef uses a client-server architecture, which means there are two components: a server and zero or more clients. The server component of the client-server architecture in this case is the Chef Server.
The Chef Server is a central hub where all of your automation code lives. It’s also responsible for knowing about all of the nodes that it manages.
Chef refers to clients generically as nodes. You might be picturing nodes as just servers; however, Chef is capable of managing all kinds of different devices. So nodes are any type of machine capable of running the “chef-client” software.
So these are the three core components that make up Chef. The Workstation is where you write the code that specifies how different nodes should be configured. And once that code is written and tested, it can be deployed to the Chef Server.
Once the code is on the Chef Server you can instruct the nodes to download the latest code from the Chef Server, and then execute it.
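To make that workflow concrete, here’s a hedged sketch of the kind of code you write on the Workstation: a Chef recipe declaring the desired state of a node. The resource types (package, service, file) are real Chef DSL; the package name and file contents are just illustrative examples.

```ruby
# Make sure the nginx package is installed.
package 'nginx'

# Ensure the nginx service is enabled at boot and currently running.
service 'nginx' do
  action [:enable, :start]
end

# Manage the contents of a file on the node.
file '/var/www/html/index.html' do
  content '<h1>Configured by Chef</h1>'
end
```

Once a recipe like this is uploaded to the Chef Server, any node that runs the chef-client can download it and converge to the state it describes.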
Let’s drill into each of these components a bit further, starting with the Workstation.
I said that the Workstation is a computer with the Chef Development Kit installed. So, what exactly is in the development kit? The development kit has everything required to develop with Chef. And part of the installation is a set of tools that are used to interact with the different components of Chef.
Let’s run through some of the important tools.
First up is the “chef” executable, which is used to generate different code templates, execute system commands under the context of the Chef Development Kit, as well as install gems into the development kit environment. Ideally you’re already familiar with Ruby and gems. However, if you’re not, then all you need to know at the moment is that RubyGems is a package manager for Ruby, and an individual package is referred to as a gem. So, the “chef” executable is used to help with development-related tasks.
The next tool worth mentioning is the chef-client. The chef-client is a key component of Chef, because it’s the driving force behind nodes. The chef-client is an executable that can also be run as a service, and it’s responsible for things like registering a node with the Chef Server, synchronizing code from the Chef Server to the node, and making sure the node it runs on is configured correctly, based on your code. You may be wondering why I’m talking about it here, since it runs on the nodes. The reason is that the chef-client is part of the development kit in addition to being run on nodes. It can be used to configure any node it runs on, so if you’re going to test your code, then you’ll need the chef-client.
Another important tool is called “ohai,” and it’s required by the chef-client executable. Ohai is a tool for gathering information about the node that it’s being run on. It gathers attributes about a node, such as the operating system and version, network, memory and CPU usage, host names, configuration info, and more. Ohai is automatically run by the chef-client, and the information it gathers is accessible to you in code. Having that information allows you to do things like conditionally execute code, or set configuration file values. The attributes that Ohai collects for a node are also saved on the Chef server, which allows you to use them to search for nodes based on the values. For example if you wanted to search for all nodes running inside of AWS, you could search based on the EC2 attributes that Ohai automatically collects, and saves to the Chef Server.
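As a rough sketch of what working with Ohai data feels like, the snippet below mimics how recipes read node attributes from a nested hash. The attribute names (“platform”, “hostname”, “ec2”) are real Ohai attribute keys, but the values here are made up for the example, and in a real recipe the `node` object is provided by the chef-client rather than built by hand.

```ruby
# Stand-in for the attribute data Ohai collects on a node.
# In a real Chef run, this hash is populated automatically.
node = {
  'platform'         => 'ubuntu',
  'platform_version' => '22.04',
  'hostname'         => 'web-01',
  'ec2'              => { 'instance_type' => 't3.micro' }
}

# The presence of the 'ec2' attribute tree is how you can tell a
# node is running inside AWS.
in_aws = node.key?('ec2')

# Conditionally build a label based on the collected attributes.
label = if in_aws
          "#{node['hostname']} (EC2 #{node['ec2']['instance_type']})"
        else
          node['hostname']
        end

puts label  # => web-01 (EC2 t3.micro)
```

This is the same pattern you’d use in a recipe to branch on platform, memory, or cloud metadata.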
The next tool to talk about is “knife,” which is a versatile tool that serves multiple roles. Knife is the tool used to interact with the Chef Server. That includes things such as uploading code from a workstation, setting global variables, and a lot more.
It’s also the tool for installing the chef-client on nodes. It can even provision cloud infrastructure. And these are a just few of its use cases. I’ll be using knife throughout the course, so you’ll get to see some of its uses in action.
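Knife finds the Chef Server through a small Ruby configuration file on the Workstation. As a sketch, a minimal knife.rb might look like the following; the setting names are real knife configuration options, while the username, key path, URL, and organization are placeholders you’d replace with your own.

```ruby
# ~/.chef/knife.rb — minimal sketch; all values below are placeholders.
node_name       'your-username'                                   # who you authenticate as
client_key      '~/.chef/your-username.pem'                       # your RSA private key
chef_server_url 'https://chef.example.com/organizations/your-org' # the Chef Server to talk to
cookbook_path   ['~/chef-repo/cookbooks']                         # where your code lives locally
```

With this in place, commands like uploading cookbooks or listing nodes know which server to talk to and how to sign the requests.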
One of the tools I consider to be very important is called Kitchen, and it’s used for testing the Chef code you develop. Testing is an important requirement for any configuration management software. Chef gives you the power to run commands across your entire infrastructure at the same time. That means you could update thousands of servers at once. So if you’re executing untested code you run the risk of breaking some of the components of your infrastructure, with the push of a button. Kitchen allows you to test your code against one or more virtual machines at the same time. I’ll be showing you how to use Kitchen later in the course.
The final tool that I want to cover is called Berkshelf. It sounds a bit like “bookshelf,” and that’s the idea. Berkshelf is a dependency management tool that allows you to download code that is shared by the Chef community. As you begin to develop with Chef, you’ll find yourself running into problems that have already been solved by other developers. Oftentimes these developers share their work on the Chef Supermarket, allowing you to use their code instead of having to develop the functionality yourself. I’ll show a bit more about this later.
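Berkshelf reads its dependencies from a Ruby file called a Berksfile. As a hedged sketch, a minimal one looks like this; the `source` and `cookbook` directives are real Berksfile syntax, while the cookbook name and version constraint are just examples.

```ruby
# Berksfile — sketch only; cookbook name and version are examples.
source 'https://supermarket.chef.io'  # the public community site

cookbook 'nginx', '~> 12.0'           # pull a community cookbook
metadata                              # plus this cookbook's own declared dependencies
```

Running Berkshelf against a file like this downloads the named cookbooks, and their dependencies, so you can use them locally or upload them to the Chef Server.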
Let’s drill into the Chef Server next. Recall that the Chef Server is responsible for acting as a central hub that the nodes look to for the latest configuration data.
At its core the Chef Server is a RESTful API that allows authenticated entities to interact with the different endpoints.
The API consists of endpoints for managing the users who are allowed to access the Chef Server; for managing nodes; for getting and setting globally accessible data in what Chef calls “data bags;” for uploading and downloading your automation code; for getting and setting information about your different environments, such as, which nodes are for development, staging and production; and even a search API that can be used to lookup information about nodes, users, global data, and more.
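As an illustrative sketch of what talking to that API looks like, the snippet below builds (but does not send) a request against a nodes endpoint. The path shape `/organizations/ORG/nodes` matches the Chef Server API, but the host, organization name, and username here are placeholders, and a real request must also carry Chef’s signed `X-Ops-*` authentication headers computed from your RSA private key.

```ruby
require 'net/http'
require 'time'

# Placeholder server and organization — not a real endpoint.
uri = URI('https://chef.example.com/organizations/myorg/nodes')

req = Net::HTTP::Get.new(uri)
req['Accept']          = 'application/json'
req['X-Ops-Userid']    = 'your-username'       # the authenticating entity
req['X-Ops-Timestamp'] = Time.now.utc.iso8601  # requests are timestamped
# ...plus X-Ops-Sign, X-Ops-Content-Hash, and X-Ops-Authorization-N
# headers produced by signing the request with your RSA private key.

puts req.path  # => /organizations/myorg/nodes
```

In practice the tools below handle all of this signing and header bookkeeping for you.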
For the most part you won’t need to interact with the API directly. At least not in the early stages while you’re still learning Chef.
Instead of interacting with the API directly, you’ll use one of the tools built around the API. There’s a web-based management UI that allows you to interact with the API via a browser. There’s the chef-client, which is how nodes interact with the API: they grab whatever they need from the server and then perform any configuration locally on the node itself, to reduce the amount of work the server needs to perform.
There’s also the knife command, which you may recall I said is used to interact with the Chef Server.
So the Chef Server is just a basic RESTful API with a nice web UI backed by a Postgres database. I’ll cover the Chef Server more throughout the course. But for now, I want to switch gears to cover nodes.
A node is just a generic name for a device that is capable of running the chef-client.
So a node could be a physical server, an on-prem or cloud-based virtual machine, a network device such as a switch or router, or even a container.
In order for Chef to configure a node, the chef-client needs to be installed on the node. The chef-client is responsible for making sure the node is authenticated and registered with the Chef Server. The chef-client uses RSA key pairs to authenticate between a node and the Chef Server. Once a node is registered with the Chef Server, it can then access the server’s data and configuration information.
Some of these concepts may still be a bit difficult to follow if you’re new to configuration management software. However they’ll start to become clearer in the coming lessons.
Let’s summarize what was covered in this lesson. There are three key high level components to Chef.
There’s the Workstation, which will have the Chef Development Kit installed on it. It’s where you do your Chef development, as well as interact with the Chef Server via the knife command.
There’s the Chef Server, which is the central hub for configuration data. And at its core it’s just a REST API.
And then there are the nodes, which are devices that are running the chef-client and under the management of the Chef Server.
Armed with your new high level understanding of these three components of Chef, it’s time to drill down even further into the Workstation to better understand the structure and components that’ll allow you to start writing code.
So that’s what I’ll focus on in the next couple of lessons. Alright, if you’re ready to keep learning, then let’s get started with the next lesson!
About the Author
Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.
When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.