Hands-on Lab

Building a PDF RAG Chatbot Powered by LangChain and Amazon Bedrock

Intermediate
Up to 1h 30m

Description

Chatbots are useful tools for giving customers and employees access to internal documents and information. A chatbot backed by a large language model (LLM) can improve the user experience and reduce the need for human intervention. However, an LLM's responses are limited by its pre-training data, so it may not provide contextually relevant answers about information it has never seen.

To enhance the capabilities of a chatbot and address these limitations, businesses can use the Retrieval-Augmented Generation (RAG) technique. RAG combines retrieval-based and generative AI models to produce more accurate and contextually relevant answers to user prompts. Embeddings represent the content of documents and user prompts as vectors held in a vector store; at query time, the prompt's embedding is used to retrieve the most relevant document content, which is then passed to an LLM as additional context for generating an answer. A minimal sketch of this flow is shown below.
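
The following is a rough sketch of the RAG flow described above, not the lab's actual application code. It assumes the langchain-aws and faiss packages are installed, that the referenced Bedrock model IDs are enabled in your account, and that a local FAISS index of PDF chunks (the hypothetical "pdf_index") already exists; exact class names and arguments vary across LangChain versions.

```python
from langchain_aws import BedrockEmbeddings, ChatBedrock
from langchain_community.vectorstores import FAISS

# Embedding model used both to build the index and to embed incoming prompts
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

# Load a previously built vector store of PDF document embeddings (hypothetical path)
vector_store = FAISS.load_local(
    "pdf_index", embeddings, allow_dangerous_deserialization=True
)

# Text-generation model that answers using the retrieved context
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

def answer(prompt: str) -> str:
    # Retrieve the document chunks whose embeddings are closest to the prompt's embedding
    docs = vector_store.similarity_search(prompt, k=4)
    context = "\n\n".join(doc.page_content for doc in docs)
    # Pass the retrieved text to the LLM as additional context for the answer
    return llm.invoke(
        f"Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {prompt}"
    ).content
```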

In this lab, you will deploy a PDF chatbot application that uses retrieval-augmented generation to answer prompts based on embeddings generated from PDF documents. The chatbot will leverage the LangChain framework and Amazon Bedrock to generate embeddings and answers. The Streamlit framework will be used to render the chatbot's user interface. You will configure and deploy the application as an Amazon ECS Service and interact with the chatbot to test its functionality.
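
As a rough illustration of how Streamlit can render the chat interface, the sketch below wires a chat UI to an answer function like the hypothetical one in the previous sketch; the lab's actual Streamlit application will differ.

```python
import streamlit as st

# `answer` is assumed to be the RAG function from the previous sketch,
# imported or defined in the same module.

st.title("PDF RAG Chatbot")

# Streamlit reruns the script on every interaction, so keep chat history in session state
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept a new prompt, answer it with the RAG function, and record both turns
if prompt := st.chat_input("Ask a question about your PDF documents"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    response = answer(prompt)  # retrieval-augmented answer from Amazon Bedrock
    st.session_state.messages.append({"role": "assistant", "content": response})
    with st.chat_message("assistant"):
        st.markdown(response)
```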

Note: This lab utilizes a file embedding solution covered in a separate lab. It is recommended to complete the Embedding Documents With LangChain and Amazon Bedrock lab before starting this lab.

Learning objectives

Upon completion of this intermediate-level lab, you will be able to:

  • Employ the Retrieval-Augmented Generation (RAG) technique to generate answers to questions based on embeddings
  • Configure and deploy a PDF chatbot application to an Amazon ECS Service

Intended audience

  • Candidates for the AWS Certified Machine Learning Specialty certification
  • Cloud Architects
  • Software Engineers

Prerequisites

Familiarity with the following will be beneficial but is not required:

  • Amazon Bedrock
  • AWS Fargate for Amazon ECS
  • Amazon Simple Storage Service (S3)
  • AWS Serverless Application Model (SAM)
  • LangChain
  • Streamlit


Environment before

Environment after

About the author

Jun Fritz
Cloud Labs Developer
Students: 40,940 · Labs: 111 · Courses: 1 · Learning paths: 6

Jun is a Cloud Labs Developer with previous experience as a Software Engineer and Cloud Developer. He holds the AWS Certified Solutions Architect Professional and DevOps Engineer Professional certifications, as well as the AWS Certified Solutions Architect, Developer, and SysOps Administrator Associate certifications.

Jun is focused on giving back to the growing cloud community by sharing his knowledge and experience with students and creating engaging content. 


Lab steps

Logging In to the Amazon Web Services Console
Setting up the AWS SAM CLI
Deploying the PDF Embedding Solution
Stepping Through the PDF Chatbot Application Code
Launching the Streamlit Application Using AWS Fargate
Interacting With the PDF RAG Chatbot