How to Get Started With Chef

When you have dozens or even hundreds of machines to manage, manual management just isn’t an option. Software updates, security patches, and configuration changes at this scale require automated tooling to handle them in a timely, consistent way. Enter configuration management platforms like Chef. If you’ve ever wanted to automate software installation or server configuration across all of your servers with one command, you’re in the right place. In this post, we’ll give you a peek at the Cloud Academy course on Chef and show you how to get started.

Whether you’re a DevOps Engineer, Site Reliability Engineer, Systems Administrator, or Developer, this course can help you learn how to use Chef for managing your infrastructure at scale.

Get started with Chef 

Before getting started, it’s worth noting that Chef is both the name of the configuration management software you’ll be learning about in this course and the company that created it. For the purposes of this post, we’re referring to Chef the software.

Chef is a platform that allows you to automate the creation, configuration, and management of your infrastructure. This includes installing software, making sure services are running, setting firewall rules, and other similar tasks. For example, you can tell Chef that you want all of your web servers to have the Apache web server installed and running, and Chef will make sure that happens. You can also use Chef to create entire cloud infrastructures. For example, if you need a cloud environment that runs virtual machines in an auto-scaling group behind a load balancer, you could use Chef to create all of that.
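
To make that concrete, here is a minimal sketch of what a Chef recipe for the Apache example might look like. It assumes a Debian/Ubuntu node and a hypothetical cookbook of our own, not code from the course:

```ruby
# recipes/default.rb in a hypothetical "apache" cookbook
# Install the Apache web server package (the package name assumes Debian/Ubuntu).
package 'apache2'

# Ensure the service starts at boot and is running right now.
service 'apache2' do
  action [:enable, :start]
end
```

When the chef-client runs this recipe on a node, it only makes changes if the node doesn’t already match the desired state.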

How is Chef used?

To give you a better idea of what Chef is capable of, let’s look at how some Chef customers use the platform.

Media and marketing company Gannett’s deployments were taking days, and in some cases even weeks. They didn’t have visibility into what was actually being done, or by whom. Chef came into the picture when an engineer needed to replicate a production AWS environment and lacked a way to do it consistently. When other engineers saw how he had solved the problem with Chef, it sparked conversations about pushing for more automation. The end result was a move toward DevOps methodologies, with Chef serving as the core automation tool.

Deployments that previously took days are now done in minutes because all of the infrastructure changes that they need to make are specified in the code. When you have infrastructure defined as code, it allows you to apply years of software engineering practices to your infrastructure.

This scenario isn’t uncommon. A lot of companies are still doing things manually, and deploying software on an infrequent schedule. Sometimes, all it takes is seeing a potentially better way.

Facebook uses Chef to manage tens of thousands of servers. This means that with one command, Facebook can ensure that thousands of servers are configured as needed, and in exactly the same way. Another great example is the company Riot Games, creators of the game League of Legends. They have servers running 24/7, all around the world, and they use Chef to automate the management of those servers.

As you know, manual efforts do not scale well, but automation does. With Chef, you have a clean and consistent way to automate the management of your infrastructure. Once you have the desired configuration written in code, it doesn’t matter how many servers you need to manage because the same code will be executed on all of them.

Get started with Chef: Chef architecture

Chef has three core components: Workstations, the Chef Server, and nodes. Here’s how they work together: the Workstation is where you write the code that specifies how different nodes should be configured. Once that code is written and tested, it is deployed to the Chef Server. With the code on the Chef Server, you can instruct the nodes to download the latest version and execute it.
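
As a rough sketch of that workflow, the commands below show a cookbook being uploaded from a Workstation and the nodes being told to apply the latest code. The cookbook name, node search query, and SSH user are placeholders for whatever your environment uses:

```bash
# On the Workstation: upload a cookbook to the Chef Server
# ("apache" is a placeholder cookbook name).
knife cookbook upload apache

# Ask matching nodes to pull the latest code from the Chef Server and apply it
# (the search query and SSH user are assumptions about your environment).
knife ssh 'role:web' 'sudo chef-client' --ssh-user ubuntu
```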

Let’s drill down a little further into each component:

Workstation

The Workstation is where you write the code that specifies how different nodes should be configured. In Chef terminology, a Workstation is a computer where the Chef Development Kit is installed. It’s where you actually write and test the code that Chef uses to manage servers. It’s also used to interact with the Chef Server. The development kit contains everything required to develop with Chef. Part of the installation is a set of tools that are used to interact with the different components of Chef.
Some of the most important tools are:

The “chef” executable is used to generate different code templates, execute system commands under the context of the Chef Development Kit, and install gems into the development kit environment. (RubyGems is a package manager for Ruby, and an individual package is referred to as a gem.) In short, the “chef” executable helps with development-related tasks.
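
A few typical invocations might look like this; the cookbook, recipe, and gem names are only illustrations:

```bash
# Generate a new cookbook skeleton.
chef generate cookbook my_cookbook

# Generate a recipe inside that cookbook.
chef generate recipe my_cookbook webserver

# Install a gem into the development kit's embedded Ruby environment.
chef gem install knife-ec2

# Run a command under the context of the Chef Development Kit.
chef exec rake
```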

The chef-client is a key component of Chef because it’s the driving force behind nodes. It is an executable that can also be run as a service, and it’s responsible for things like registering a node with the Chef Server, synchronizing code from the Chef Server to the node, and making sure the node that is running the chef-client is configured correctly, based on your code. The chef-client is a part of the development kit, in addition to being run on nodes. It can be used to configure any node it runs on. So, if you’re going to test your code, you’ll need the chef-client.
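
For instance, on a managed node you would typically run chef-client against the Chef Server, while during development you can converge a recipe locally in Chef’s local mode. The recipe path below is just an example:

```bash
# Apply the node's run list as registered with the Chef Server.
sudo chef-client

# During development, converge a single recipe without a Chef Server,
# using local mode's in-memory server (the path is a placeholder).
sudo chef-client --local-mode cookbooks/apache/recipes/default.rb
```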

Ohai is required by the chef-client executable. Ohai gathers attributes about a node, such as the operating system and version, network, memory, and CPU usage, hostnames, configuration info, and more. It is run automatically by the chef-client, and the information it gathers is accessible to you in code. Having that information allows you to do things like conditionally execute code or set configuration file values. The attributes that Ohai collects are saved on the Chef Server, which allows you to search for nodes based on those values. For example, if you wanted to find all nodes running inside of AWS, you could search based on the EC2 attributes that Ohai automatically collects and saves to the Chef Server.
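
Inside a recipe, Ohai’s attributes are exposed through the node object. As a small, hypothetical illustration, you might branch on the platform family Ohai detected:

```ruby
# Ohai's data is available on the "node" object in recipes.
# Install the right Apache package name for the platform family it detected.
if node['platform_family'] == 'debian'
  package 'apache2'
else
  package 'httpd'
end
```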

The “knife” command is a versatile tool that serves multiple roles. It is used to interact with the Chef Server, handling tasks such as uploading code from a Workstation, setting global variables, and a lot more. Knife is also used to install the chef-client on nodes, and it can even provision cloud infrastructure.
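
A couple of representative knife commands are sketched below; the address, user, and run list are placeholders, and the exact bootstrap flags vary between Chef versions:

```bash
# List the nodes registered with the Chef Server.
knife node list

# Install the chef-client on a new machine over SSH and register it as a node
# (the address, user, and run list are placeholders; flags vary by version).
knife bootstrap 203.0.113.10 --ssh-user ubuntu --sudo --run-list 'recipe[apache]'
```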

Kitchen is the tool used for testing the Chef code you develop, and testing is an important requirement for any configuration management platform. Chef gives you the power to run commands across your entire infrastructure at the same time, which means you could update thousands of servers at once. If you’re executing untested code, you risk breaking components of your infrastructure with the push of a button. Kitchen allows you to test your code against one or more virtual machines at the same time.
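
Once your .kitchen.yml describes the platforms you want to test against, a typical Kitchen workflow might look like this:

```bash
# Create the test instances and apply your cookbook to them.
kitchen converge

# Run the cookbook's tests against those instances.
kitchen verify

# Or do everything in one pass: create, converge, verify, and destroy.
kitchen test
```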

Berkshelf is a dependency management tool that allows you to download code shared by the Chef community. As you begin to develop with Chef, you’ll find yourself running into problems that have already been solved by other developers. Developers frequently share their work on the Chef Supermarket so you don’t have to develop the functionality for yourself.
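
Dependencies are declared in a Berksfile. Here is a minimal, illustrative example; the cookbook name is a placeholder for whatever you actually need:

```ruby
# Berksfile -- declares where dependencies come from and which cookbooks you need.
source 'https://supermarket.chef.io'

# Pull a community cookbook from the Chef Supermarket.
# A version constraint can be added too, e.g. cookbook 'apache2', '~> 8.0'.
cookbook 'apache2'
```

Running berks install then resolves and downloads those cookbooks, and berks upload pushes them to the Chef Server.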

Chef Server

Chef uses a client-server architecture, which means there are two components: a server and zero or more clients. The server component of the client-server architecture, in this case, is the Chef Server. It is a central hub where all of your automation code lives. It’s also responsible for knowing about all of the nodes that it manages. At its core, the Chef Server is a RESTful API that allows authenticated entities to interact with the different endpoints.

For the most part, you won’t need to interact with the API directly, at least not in the early stages while you’re still learning Chef. Instead, you will use one of the tools built around the API. There is a web-based management UI that allows you to interact with the API via a browser, and there is the chef-client, which is how nodes interact with the API: nodes grab whatever they need from the server and then perform the configuration locally, which reduces the amount of work the server has to do. Finally, there is the knife command, which you may recall is used to interact with the Chef Server.

Nodes

Chef refers to clients generically as nodes. Because Chef is capable of managing many kinds of devices, a node is any type of machine capable of running the chef-client software. It could be a physical server, an on-premises or cloud-based virtual machine, a network device such as a switch or router, or even a container.

In order for Chef to configure a node, the chef-client needs to be installed on it. The chef-client is responsible for authenticating and registering the node with the Chef Server, using RSA key pairs to authenticate between the node and the server. Once a node is registered, it can access the server’s data and configuration information.
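
If you want to see this in practice, the checks below assume a default installation and a hypothetical node named web01:

```bash
# After registration, the node holds the private half of its key pair
# (this is the default location; it can be changed in the client configuration).
ls /etc/chef/client.pem

# On the Workstation, confirm that the Chef Server created a matching API client.
knife client show web01
```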

Next Steps

So far, we’ve covered Chef at a high level. In our Getting Started With Chef course, we’ll dig deeper into the Chef Workstation to talk about the chef-repo and cookbooks, and we’ll show you how to create your first recipe, configure the Chef Server for basic development, and a lot more.

By the end of the course you should be able to:

  • Understand the use cases for Chef
  • Explain the Chef architecture
  • Describe the components of Chef
  • Create a simple cookbook

Getting Started with Chef is free for Cloud Academy subscribers. If you’re not already a Cloud Academy member, we invite you to try Cloud Academy with our free 7-day trial where you’ll have full access to Cloud Academy video courses, hands-on labs, and quizzes throughout the trial period.

Written by Cloud Academy Team
