hands-on lab

Customizing Large Language Models Using Ollama

Up to 1h


Ollama is a tool for creating, customizing, and running Large Language Models. It can be used via the command line, a web interface, or an API. Because Ollama runs models locally, it can be advantageous when you do not want to share your data with an LLM service provider.
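As a taste of the API mentioned above, the sketch below builds a request for Ollama's `/api/generate` endpoint, which listens on port 11434 by default. The model name and prompt are illustrative assumptions, and the helper names are our own; treat this as a sketch rather than the lab's exact code.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks the server for a single JSON response
    # instead of a stream of partial tokens.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its response text."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example call (requires `ollama serve` running and the model pulled):
# print(generate("llama2", "Explain Ollama in one sentence."))
```

Keeping the request-building step separate from the network call makes the payload easy to inspect before you have a server running.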

Learning how to start, use, and configure Ollama is a valuable skill for anyone looking to integrate Large Language Models or Generative AI into their applications or workflows.

In this hands-on lab, you will connect to a virtual machine and use Ollama to run and customize a model.

Learning objectives

Upon completion of this beginner-level lab, you will be able to:

  • Start the Ollama server
  • Create and implement an Ollama model file
  • Use the Ollama API
  • Analyze log entries with Ollama
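To preview the model-file objective above: an Ollama model file (Modelfile) customizes an existing base model. The sketch below writes one and shows the commands you would run against it; the base model name and system prompt are illustrative assumptions, and the `ollama` commands are commented out because they require a running Ollama server.

```shell
# Write a minimal Modelfile (base model and prompt are illustrative assumptions)
cat > Modelfile <<'EOF'
FROM llama2
PARAMETER temperature 0.3
SYSTEM "You are a concise assistant that answers in one sentence."
EOF

# With the Ollama server running, you would then build and chat with it:
#   ollama create my-model -f Modelfile
#   ollama run my-model "What is Ollama?"
```

`FROM` names the base model, `PARAMETER` tunes generation settings such as temperature, and `SYSTEM` sets a standing system prompt for every conversation with the custom model.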

Intended audience

  • Anyone looking to learn about Large Language Models
  • Cloud Architects
  • Data Engineers
  • DevOps Engineers
  • Machine Learning Engineers
  • Software Engineers


Prerequisites

Familiarity with the following will be beneficial but is not required:

  • Ollama
  • Large Language Models
  • The Bash shell


Environment before

Environment after

About the author


Andrew is a Labs Developer with previous experience in the internet service provider, audio streaming, and cryptocurrency industries. He has also been a DevOps Engineer and enjoys working with CI/CD and Kubernetes.

He holds multiple AWS certifications including Solutions Architect Associate and Professional.


Lab steps

  • Logging In to the Amazon Web Services Console
  • Connecting to the Virtual Machine Using EC2 Instance Connect
  • Creating an Ollama Model File
  • Customizing an Ollama Model
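The log-analysis objective might look something like the sketch below: compose a prompt around a log entry and hand it to a local model. The log line and model name are illustrative assumptions, and the `ollama run` invocation is commented out because it requires a running Ollama server.

```shell
# An example log entry to analyze (illustrative assumption)
LOG_ENTRY='ERROR 2024-01-15 10:42:03 connection refused: db:5432'

# Compose the prompt that would be sent to the model
PROMPT="Explain this log entry and suggest a likely cause: $LOG_ENTRY"
echo "$PROMPT"

# With the Ollama server running, you would pipe the prompt to a model:
#   echo "$PROMPT" | ollama run llama2
```

Wrapping the log entry in an instruction like this gives the model the context it needs to explain the error rather than merely repeat it.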