How to get started with Chef

When you have dozens or even hundreds of machines to manage, manual management just isn’t an option. Software updates, security patches, and configuration changes at this scale require automated tools that can handle them in a timely, consistent way. Enter automated configuration management platforms like Chef. If you’ve ever wanted to automate software installation or server configuration across all of your servers with one command, you’re in the right place. In this post, we’ll give you a peek at the Cloud Academy course on how to get started with Chef.
Whether you’re a DevOps Engineer, Site Reliability Engineer, Systems Administrator, or Developer, this course can help you learn how to use Chef for managing your infrastructure at scale.

Get started with Chef 

Before getting started, it’s worth noting that Chef is both the name of the configuration management software you’ll be learning about in this course and the company that created it. For the purposes of this post, we’re referring to Chef the software.
Chef is a platform that allows you to automate the creation, configuration, and management of your infrastructure. This includes installing software, making sure services are running, setting firewall rules, and other similar tasks. For example, you can tell Chef that you want all of your web servers to have the Apache web server installed and running, and Chef will make sure that happens. You can also use Chef to create entire cloud infrastructures. For example, if you need a cloud environment that runs virtual machines in an auto scaling group behind a load balancer, you could use Chef to create all of that.
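To make that first example concrete, here’s a minimal sketch of a Chef recipe that ensures Apache is installed and running. The package and service name apache2 assumes a Debian/Ubuntu node; RHEL-family systems use httpd instead.

```ruby
# webserver.rb -- a minimal example recipe
# Install the Apache web server package.
package 'apache2'

# Ensure the Apache service starts on boot and is running right now.
service 'apache2' do
  action [:enable, :start]
end
```

On every run, Chef compares this desired state against the node’s actual state and only makes changes where the two have drifted, so running the same recipe repeatedly is safe.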

How is Chef used?

To give you a better idea of what Chef is capable of, let’s look at how some Chef customers use the platform.
Media and marketing company Gannett was taking days, and in some cases even weeks, to complete its deployments. The team didn’t have visibility into what was actually being done, or by whom. Chef came into the picture when an engineer needed to replicate a production AWS environment and lacked a way to do it consistently. When other engineers saw how he had solved the problem with Chef, it sparked conversations about how to push for more automation. The end result was a shift toward DevOps methodologies, with Chef serving as the core automation tool.
Deployments that previously took days are now done in minutes because all of the infrastructure changes they need to make are specified in code. When your infrastructure is defined as code, you can apply years of software engineering practice to managing it.
This scenario isn’t uncommon. A lot of companies are still doing things manually, and deploying software on an infrequent schedule. Sometimes, all it takes is seeing a potentially better way.
Facebook uses Chef to manage tens of thousands of servers. This means that with one command, Facebook can ensure that thousands of servers are configured as needed, and in exactly the same way. Another great example is the company Riot Games, creators of the game League of Legends. They have servers running 24/7, all around the world, and they use Chef to automate the management of those servers.
As you know, manual efforts do not scale well, but automation does. With Chef, you have a clean and consistent way to automate the management of your infrastructure. Once you have the desired configuration written in code, it doesn’t matter how many servers you need to manage because the same code will be executed on all of them.

Get started with Chef: Chef architecture

Chef has three core components: Workstations, the Chef Server, and nodes. Here’s how they work together: on a Workstation, you write the code that specifies how different nodes should be configured. Once that code is written and tested, it is deployed to the Chef Server. From there, you instruct the nodes to download the latest code from the Chef Server and execute it.
Let’s drill down a little further into each component:

Workstation

In Chef terminology, a Workstation is a computer where the Chef Development Kit is installed. It’s where you write and test the code that Chef uses to manage servers, and it’s also what you use to interact with the Chef Server. The development kit contains everything required to develop with Chef, including a set of tools for interacting with the different components of Chef.
Some of the most important tools are:
The “chef” executable is used to generate different code templates, execute system commands in the context of the Chef Development Kit, and install gems into the development kit environment. (RubyGems is a package manager for Ruby, and an individual package is referred to as a gem.) In short, the “chef” executable helps with development-related tasks.
The chef-client is a key component of Chef because it’s the driving force behind nodes. It is an executable that can also be run as a service, and it’s responsible for things like registering a node with the Chef Server, synchronizing code from the Chef Server to the node, and making sure the node that is running the chef-client is configured correctly, based on your code. The chef-client is a part of the development kit, in addition to being run on nodes. It can be used to configure any node it runs on. So, if you’re going to test your code, you’ll need the chef-client.
Ohai is required by the chef-client executable. Ohai gathers attributes about a node, such as the operating system and version, network, memory and CPU usage, host names, configuration info, and more. It is automatically run by the chef-client, and the information it gathers is accessible to you in code. Having that information allows you to do things like conditionally execute code, or set configuration file values. The attributes that Ohai collects are saved on the Chef server, which allows you to use them to search for nodes based on the values. For example, if you wanted to search for all nodes running inside of AWS, you could search based on the EC2 attributes that Ohai automatically collects, and saves to the Chef Server.
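For instance, here’s a sketch of how an Ohai attribute might be used inside a recipe to pick the right package name for the platform (the platform_family attribute is one Ohai collects automatically; the recipe itself is just an illustration):

```ruby
# Ohai populates node attributes automatically before your code runs.
# Use the platform family to choose the correct Apache package name.
apache_package = node['platform_family'] == 'rhel' ? 'httpd' : 'apache2'

package apache_package do
  action :install
end
```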
The “knife” command is a versatile tool that serves multiple roles. It is used to interact with the Chef Server for tasks such as uploading code from a Workstation, setting global variables, and a lot more. Knife is also used to install the chef-client on nodes, and it can even provision cloud infrastructure. Its configuration is shown in the sketch below.
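Knife reads its connection settings from a configuration file, typically .chef/knife.rb on the Workstation. A minimal sketch looks something like this; the user name, key path, organization, and server URL are placeholders:

```ruby
# .chef/knife.rb -- example values only
node_name       'your_chef_user'                           # your Chef Server user name
client_key      "#{ENV['HOME']}/.chef/your_chef_user.pem"  # private key used to sign API requests
chef_server_url 'https://chef.example.com/organizations/your_org'
cookbook_path   ['./cookbooks']                            # where knife finds cookbooks to upload
```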
Kitchen is an important tool that is used for testing the Chef code you develop. Testing is an important requirement for any configuration management software. Chef gives you the power to run commands across your entire infrastructure at the same time. This means that you could update thousands of servers at once. If you’re executing untested code, you run the risk of breaking some of the components of your infrastructure with the push of a button. Kitchen allows you to test your code against one or more virtual machines at the same time.
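Kitchen works by spinning up a test instance, converging your code on it, and then running a verifier against the result. With the Chef Development Kit the verifier is typically InSpec, so a test for the Apache example above might look like the following sketch:

```ruby
# test/integration/default/webserver_test.rb -- example InSpec test
# Verify that the recipe left the node in the desired state.
describe package('apache2') do
  it { should be_installed }
end

describe service('apache2') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
```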
Berkshelf is a dependency management tool that allows you to download code shared by the Chef community. As you begin to develop with Chef, you’ll find yourself running into problems that have already been solved by other developers. Developers frequently share their work on the Chef Supermarket, so you don’t have to build that functionality yourself.
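Berkshelf reads its dependency list from a Berksfile in your cookbook. A minimal sketch that pulls a community cookbook from the Chef Supermarket might look like this; the cookbook name and version constraint are just examples:

```ruby
# Berksfile -- example only
source 'https://supermarket.chef.io'   # resolve community cookbooks from the Supermarket

cookbook 'apache2', '~> 8.0'           # an example community cookbook and version constraint
```

Running berks install resolves and downloads the listed cookbooks along with their dependencies so they can then be uploaded to the Chef Server.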

Chef Server

Chef uses a client-server architecture, which means there are two components: a server and zero or more clients. The server component of the client-server architecture in this case is the Chef Server. It is a central hub where all of your automation code lives. It’s also responsible for knowing about all of the nodes that it manages. At its core, the Chef Server is a RESTful API that allows authenticated entities to interact with the different endpoints.
For the most part, you won’t need to interact with the API directly. At least not in the early stages while you’re still learning Chef. Instead, you will use one of the tools built around the API. There is a web-based management UI that allows you to interact with the API via a browser, or the chef-client, which is the way nodes interact with the API. They grab whatever they need from the server and then perform any configuration locally on the node itself to reduce the amount of work the server needs to perform. Finally, there is the knife command, which you may recall is used to interact with the Chef Server.

Nodes

Chef refers to clients generically as nodes. Because Chef is capable of managing all kinds of different devices, a node is any type of machine capable of running the “chef-client” software. It could be a physical server, an on-premises or cloud-based virtual machine, a network device such as a switch or router, or even a container.
In order for Chef to configure a node, the chef-client needs to be installed on it. The chef-client is responsible for making sure the node is authenticated and registered with the Chef Server, using RSA key pairs to authenticate each request between the node and the Chef Server. Once a node is registered, it can access the server’s data and configuration information.
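That registration is driven by the chef-client’s configuration file, /etc/chef/client.rb on Linux nodes. A sketch of the relevant settings follows; the URL, names, and key paths are placeholders:

```ruby
# /etc/chef/client.rb -- example values only
chef_server_url        'https://chef.example.com/organizations/your_org'
node_name              'web-server-01'        # how this node identifies itself to the Chef Server
validation_client_name 'your_org-validator'   # organization validator, used only for first registration
validation_key         '/etc/chef/validation.pem'
# After the first successful run, the node authenticates with its own
# /etc/chef/client.pem and the validation key is no longer needed.
```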

Next Steps

So far, we’ve covered Chef at a high level. In our Getting Started with Chef course, we’ll dig deeper into the Chef Workstation to talk about the chef-repo and Cookbooks, and we’ll show you how to create your first recipe, configure the Chef Server for basic development, and a lot more.
By the end of the course you should be able to:

  • Understand the use cases for Chef
  • Explain the Chef architecture
  • Describe the components of Chef
  • Create a simple cookbook

Getting Started with Chef is free for Cloud Academy subscribers. If you’re not already a Cloud Academy member, we invite you to try Cloud Academy with our free 7-day trial where you’ll have full access to Cloud Academy video courses, hands-on labs, and quizzes throughout the trial period.
