Since its release, ChatGPT has stunned the world with its capabilities. This course introduces some of the terms used to describe how ChatGPT works, and it takes a unique approach: it asks ChatGPT itself what these terms mean.
- Learn what prompts are and how they start the process of using ChatGPT
- Learn what completions are
- Learn what tokens are and the limitations that are set by the token system
This course is intended for anyone who wants to learn about ChatGPT.
- This course requires no prior knowledge
ChatGPT prompts, completions, and tokens. Artificial intelligence has been in the news a lot recently, mainly because of ChatGPT. In simple terms, ChatGPT is an AI-driven chatbot that allows you to have human-like conversations. Microsoft, an investor in OpenAI, is rapidly moving to integrate ChatGPT and other AI features into its existing products, such as Bing search, and other companies are also moving quickly to integrate ChatGPT into their software offerings. So I thought this would be a good opportunity to cover some terminology. I will begin with prompts. At first I was going to give my own explanation, but then I thought, why not just ask ChatGPT? So, down below, I will type: what is a ChatGPT prompt? I will speed up the playback for the response. Here's the very first sentence: "A ChatGPT prompt is a sentence or phrase that is presented to the ChatGPT language model to generate a response." In layman's terms, a prompt is an input to the model. It is the question you ask ChatGPT; you are prompting the model to get a response. So, what is a ChatGPT completion?
Well, I'm going to ask ChatGPT again. Once again focusing on the first sentence: "A ChatGPT completion is the response generated by the ChatGPT language model when presented with a prompt." So, the completion is the answer to your question. And lastly, what I feel is the most important thing to understand here: what is a ChatGPT token? Here's the answer: "A ChatGPT token is a unit of text that the ChatGPT language model uses to understand and generate language." Now, this one is a little obscure. The line that actually gives the most meaning is the fourth sentence.
"The model uses tokens to represent words, phrases, and other language elements, then generates a sequence of tokens to form a coherent response to a given prompt." When using ChatGPT, text is represented by tokens. As a rough estimate, 1,000 tokens averages around 750 words. OpenAI provides a tokenizer tool that estimates how many tokens a block of text will use. I'm going to paste in a 1,500-word essay that I found online, and the tool shows that this essay is around 6,100 characters and 1,200 tokens. Understanding tokens is important because they also represent a limitation when using ChatGPT. Looking at the prompt and completion on the screen: each chat session, by which I mean each prompt and its corresponding completion, has a combined maximum of 2,048 tokens for version 3.
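The rules of thumb above (roughly 4 characters per token, or 1,000 tokens per 750 words of English text) can be turned into a quick back-of-the-envelope estimator. The Python sketch below is a rough approximation only, not the actual tokenizer; for exact counts you would use OpenAI's tokenizer tool or a tokenizer library.

```python
# Rough token estimation using common rules of thumb for English text:
# ~4 characters per token, or ~1,000 tokens per 750 words.
# These are approximations; the real tokenizer gives exact counts.

def estimate_tokens_by_chars(text: str) -> int:
    """Estimate token count as roughly one token per 4 characters."""
    return max(1, len(text) // 4)

def estimate_tokens_by_words(text: str) -> int:
    """Estimate token count as roughly 1,000 tokens per 750 words."""
    words = len(text.split())
    return max(1, round(words * 1000 / 750))

sample = "ChatGPT breaks text into tokens before processing it."
print(estimate_tokens_by_chars(sample))
print(estimate_tokens_by_words(sample))
```

Either estimate is good enough for a sanity check before pasting a long prompt, which is all the transcript's 750-words-per-1,000-tokens figure is meant for.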
So, maybe it's a good idea to ask ChatGPT: what happens when ChatGPT runs out of tokens for a completion? Once this response is done, I'm going to zoom in on one specific line: "In this case, ChatGPT will not be able to provide any further output." This token limitation creates a problem. If my prompt doesn't give the model enough context, I may not get back a great completion. If I create a prompt that is too long, ChatGPT might not have enough tokens left to give back the full answer. Now, if you are a paying subscriber, you do get access to a limited version of GPT-4. It has a larger token capacity, but that comes with the drawback of a slower completion time, for now. I say for now because the first public release of ChatGPT, running version 3.5, occurred in November of 2022, version 4 was released in mid-March 2023, and version 4.5 is on the horizon, expected in fall of 2023. As these changes occur, expect Cloud Academy to guide you along the way. Thanks for watching.
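The limitation described in the lecture, a single token budget shared between the prompt and its completion, can be sketched as a simple pre-flight check. This is a hypothetical helper built on the same rough 4-characters-per-token estimate, not part of any OpenAI SDK; the 2,048-token limit is the GPT-3 figure mentioned in the video.

```python
# Hypothetical pre-flight check: prompt and completion share one token
# budget (2,048 for the GPT-3 models discussed above), so a long prompt
# leaves fewer tokens available for the model's answer.

MAX_CONTEXT_TOKENS = 2048  # combined prompt + completion limit (GPT-3)

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def remaining_completion_budget(prompt: str,
                                max_context: int = MAX_CONTEXT_TOKENS) -> int:
    """Tokens left for the completion after the prompt is counted."""
    return max(0, max_context - estimate_tokens(prompt))

def fits(prompt: str, desired_completion_tokens: int) -> bool:
    """Will the prompt leave enough room for the answer we want?"""
    return remaining_completion_budget(prompt) >= desired_completion_tokens

short_prompt = "What is a ChatGPT token?"
print(remaining_completion_budget(short_prompt))
print(fits(short_prompt, desired_completion_tokens=500))
```

A check like this is why a very long prompt can produce a truncated answer: the model stops when the shared budget is exhausted, exactly the behavior ChatGPT described in its own response above.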
Farish has worked in the EdTech industry for over six years. He is passionate about teaching valuable coding skills to help individuals and enterprises succeed.
Previously, Farish worked at 2U Inc in two concurrent roles: as an adjunct instructor for 2U's full-stack boot camps at UCLA and UCR, and as a curriculum engineer for multiple full-stack boot camp programs. As a curriculum engineer, Farish created activities, projects, and lesson plans taught in boot camps used by over 50 university partners. Along with these duties, Farish also created nearly 80 videos for the full-stack blended online program.
Before 2U, Farish worked at Codecademy for over four years, both as a content creator and part of the curriculum experience team.
Farish is an avid powerlifter, sushi lover, and occasional Funko collector.