Running Local Large Language Models Using Ollama

Duration: 7m 19s

This lesson teaches how to set up Ollama to run large language models (LLMs) locally. LLMs are a foundational component of generative AI tools such as ChatGPT.

Learning Objectives

  • Learn how to download and run an LLM using Ollama

  • Learn how to create a custom model based on an existing LLM
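The two objectives above map directly onto a handful of Ollama CLI commands. As a rough sketch of what the lesson covers (the model name `llama3.2` and the custom model name `my-assistant` are illustrative choices, not necessarily the ones used in the lesson):

```shell
# Download a model from the Ollama library, then start an interactive chat with it
ollama pull llama3.2
ollama run llama3.2

# A custom model is defined in a Modelfile that builds on an existing LLM.
# Example Modelfile contents:
#   FROM llama3.2
#   PARAMETER temperature 0.7
#   SYSTEM "You are a concise assistant that answers in plain English."

# Build the custom model from the Modelfile, then run it like any other model
ollama create my-assistant -f ./Modelfile
ollama run my-assistant
```

The `SYSTEM` instruction sets a persistent system prompt, and `PARAMETER` adjusts generation settings such as temperature; the lesson walks through these steps in detail.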

Intended Audience

  • This lesson is intended for anyone who wants to learn how to run LLMs locally


  • While there may be hardware limitations, this lesson has no prerequisites

About the Author
Farish Kashefinejad
Full-Stack Development Content Creator

Farish has worked in the EdTech industry for over six years. He is passionate about teaching valuable coding skills to help individuals and enterprises succeed.

Previously, Farish worked at 2U Inc. in two concurrent roles: as an adjunct instructor for 2U's full-stack boot camps at UCLA and UCR, and as a curriculum engineer for multiple full-stack boot camp programs. As a curriculum engineer, he created the activities, projects, and lesson plans taught in boot camps used by over 50 university partners. He also created nearly 80 videos for the full-stack blended online program.

Before 2U, Farish worked at Codecademy for over four years, both as a content creator and as part of the curriculum experience team.

Farish is an avid powerlifter, sushi lover, and occasional Funko collector.
