Navigating the Vocabulary of Generative AI Series (1 of 3)

If you have made it to this page, then you may be struggling with some of the language and terminology used when discussing Generative AI. Don’t worry, you are certainly not alone! By the end of this three-part series, you will have an understanding of some of the most common components and elements of Gen AI, allowing you to follow, and join in on, the conversations happening around almost every corner of your business on this topic.

Gen AI is already rapidly changing our daily lives and will continue to do so as the technology is adopted at an exponential rate. Those within the tech industry need to be aware of the fundamentals and understand how everything fits together, and to do that you need to know what a few of the components are. You can easily become lost in a conversation if you don’t know what a foundation model (FM) or a large language model (LLM) is, or what prompt engineering is and why it’s important.

In this blog series, I want to start by taking it back to some of the fundamental components of artificial intelligence (AI), looking at the subset of technologies that have been derived from AI, and then diving deeper as we go.

If you want to deep dive into AI, Cloud Academy has a whole dedicated section in its training library. Also, if you’re looking to channel the power of AI in your business, request a free demo today!

Artificial intelligence (AI)

AI can be defined as the simulation of our own human intelligence, managed and processed by computer systems. AI can be embedded as code within a small application on your phone, or, at the other end of the scale, implemented within a large-scale enterprise application hosted in the cloud and accessed by millions of customers. Either way, it is capable of completing tasks and activities that would previously have required human intelligence.

Machine Learning (ML)

Machine learning is a subset of AI that enables computer-based systems to learn from experience and data using mathematical algorithms. Over time, performance improves and accuracy increases as the system learns from additional sampled data, allowing patterns to be established and predictions to be made. This creates an ongoing cycle in which ML can learn, grow, evolve, and remodel without human intervention.
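
To make this concrete, here is a minimal sketch of that learn-from-data cycle using scikit-learn. The data set and library choice are purely illustrative, not taken from any particular product: a model is fitted to a handful of sampled data points and then asked to predict an output for an input it has never seen.

```python
# A minimal sketch of machine learning: fit a model to sampled data,
# then use it to make a prediction on an unseen input.
# Assumes scikit-learn is installed (pip install scikit-learn).
from sklearn.linear_model import LinearRegression

# Illustrative training data: hours studied (input) and exam score (output).
X = [[1], [2], [3], [4], [5]]
y = [52, 57, 61, 68, 71]

model = LinearRegression()
model.fit(X, y)              # the model "learns" a pattern from the samples

print(model.predict([[6]]))  # predict a score for an input it has never seen
```

With more (and more varied) sampled data, the fitted pattern, and therefore the predictions, would continue to improve, which is the ongoing cycle described above.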

Artificial Neural Network (ANN)

Neural networks are a subset of machine learning used to train computers to recognize patterns using a network design loosely modelled on the human brain. Using layers of interconnected artificial nodes, or neurons, the network responds to different input data to generate the best possible results, learning from its mistakes to improve the accuracy of those results.
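
As a rough illustration, the sketch below builds a tiny network by hand with NumPy: an input passes through a layer of interconnected “neurons”, each combining weighted inputs, a bias, and an activation function. The weight values here are arbitrary stand-ins, not trained ones.

```python
# A toy artificial neural network: one hidden layer of interconnected
# "neurons", each applying weights, a bias, and an activation function.
# The weights below are arbitrary illustrative values, not trained ones.
import numpy as np

def relu(x):
    return np.maximum(0, x)        # a common activation function

x = np.array([0.5, -0.2])          # input signals

W1 = np.array([[0.1, 0.4],         # weights connecting input -> hidden
               [0.8, -0.6],
               [0.3, 0.2]])
b1 = np.array([0.1, 0.0, -0.1])    # hidden-layer biases

W2 = np.array([[0.5, -0.3, 0.8]])  # weights connecting hidden -> output
b2 = np.array([0.2])

hidden = relu(W1 @ x + b1)         # each neuron fires based on its inputs
output = W2 @ hidden + b2
print(output)
```

In a real network, “learning from mistakes” means repeatedly nudging those weights and biases to reduce the error in the output.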

Deep Learning (DL)

Deep learning uses artificial neural networks to detect, identify, and classify data by analysing patterns, and is commonly used across sound, text, and image files. For example, it can identify and describe objects within a picture, or it can transcribe an audio file into a text file. Using multiple layers of the neural network, it can dive ‘deep’ to highlight complex patterns using supervised, unsupervised, or semi-supervised learning models.
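
The ‘deep’ part simply means several of those neural-network layers stacked together. A hedged sketch of such a stack, assuming PyTorch is installed and using illustrative layer sizes (e.g. classifying 28x28 images into 10 categories):

```python
# A sketch of a "deep" network: several stacked layers rather than one.
# Assumes PyTorch is installed (pip install torch); the layer sizes are
# illustrative, e.g. classifying 28x28 images into 10 categories.
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),            # 28x28 image -> 784 values
    nn.Linear(784, 256),     # layer 1: low-level patterns
    nn.ReLU(),
    nn.Linear(256, 64),      # layer 2: deeper layers capture
    nn.ReLU(),               # increasingly abstract patterns
    nn.Linear(64, 10),       # output: one score per category
)
print(model)
```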

Generative AI (GAI)

Generative AI, or Gen AI, is a subset of deep learning and refers to models that are capable of producing new and original content that has never been created before; this could be an image, some text, new audio, code, video, and more. This content is generated using huge amounts of training data within foundation models, and as a result the output is similar to that existing data and could be mistaken for content created by humans.

Foundation Model (FM)

Foundation models underpin the capabilities of Gen AI and are trained on monumental, broad, unlabeled data sets; this makes them considerably bigger than traditional ML models, which are generally used for more specific functions. FMs are used as the baseline starting point for developing models that can interpret and understand language, hold conversations, and generate images. Different foundation models specialise in different areas: for example, the Stable Diffusion model by Stability AI is great for image generation, while the GPT-4 model is used by ChatGPT for natural language. FMs are able to produce a range of outputs based on prompts with high levels of accuracy.

Large Language Model (LLM)  

Large language models are used by generative AI to generate text based on a series of probabilities, enabling them to predict, identify, and translate content. Built on transformer models and trained using billions of parameters, they focus on patterns and algorithms that distinguish and simulate how humans use language through natural language processing (NLP). LLMs are often used to summarise large blocks of text, to classify text and determine its sentiment, and to create chatbots and AI assistants.
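
That “series of probabilities” is easy to see in code. The sketch below assumes the Hugging Face transformers library and uses the small, open GPT-2 model as a stand-in for larger LLMs; it prints the five tokens the model considers most likely to come next.

```python
# A sketch of how an LLM assigns probabilities to the next token.
# Assumes Hugging Face transformers and PyTorch are installed
# (pip install transformers torch); GPT-2 stands in for larger LLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token

probs = torch.softmax(logits, dim=-1)        # scores -> probabilities
top = torch.topk(probs, 5)                   # five most likely next tokens
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>10}  {p.item():.2%}")
```

Generating a whole sentence is just this step repeated: pick a next token from the distribution, append it, and predict again.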

Natural Language Processing (NLP)

NLP is a discipline that focuses on linguistics and gives computer-based systems the capacity to understand and interpret how language is used in both written and verbal forms, as if a human were writing or speaking it. Natural language understanding (NLU) looks at the sentiment, intent, and meaning in language, whilst natural language generation (NLG) focuses on the generation of language, both written and verbal, enabling text-to-speech and speech-to-text output.
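
As a quick illustration of an NLU task, the sketch below uses a ready-made sentiment-analysis pipeline. It assumes the Hugging Face transformers library is installed; the default model the pipeline downloads is the library’s choice, not something specified in this post.

```python
# A minimal NLP sketch: recovering the sentiment behind a sentence
# (an NLU task) with a ready-made pipeline.
# Assumes Hugging Face transformers is installed (pip install transformers).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model

print(classifier("I love how easy this was to set up!"))
# expected shape of the output: [{'label': 'POSITIVE', 'score': 0.99...}]
```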

Transformer Model

A transformer model is a deep learning architecture that sits at the root of many large language models, thanks to its ability to process text using mathematical techniques while capturing the relationships between the words in that text. This long-term memory allows the model to translate text from one language to another. It can also identify relationships between different mediums of data, allowing applications to ‘transform’ text (input) into an image (output).
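
The core mathematical technique here is scaled dot-product attention, which scores how strongly each token should “attend” to every other token. A minimal NumPy sketch, using random matrices as stand-ins for the learned query/key/value projections:

```python
# A sketch of scaled dot-product attention, the core transformer
# operation. Q, K, V are random stand-ins for learned projections.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

seq_len, d_k = 4, 8                  # 4 tokens, 8-dim representations
rng = np.random.default_rng(0)
Q = rng.normal(size=(seq_len, d_k))  # queries
K = rng.normal(size=(seq_len, d_k))  # keys
V = rng.normal(size=(seq_len, d_k))  # values

scores = Q @ K.T / np.sqrt(d_k)      # relationship between every token pair
weights = softmax(scores)            # attention weights per token
output = weights @ V                 # context-aware representations
print(weights.round(2))
```

Those attention weights are what let the model keep track of relationships across a whole passage, the “long-term memory” mentioned above.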

Generative Pretrained Transformer (GPT)

Generative pre-trained transformers use the transformer model, based upon deep learning, to generate human-like content, primarily text, images, and audio, using natural language processing techniques. They are used extensively in Gen AI use cases such as text summarization, chatbots, and more. You will likely have heard of ChatGPT, which is based on a generative pre-trained transformer model.
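
A minimal generation example, again using the small open GPT-2 model as a stand-in for the much larger models behind services like ChatGPT (assumes transformers and torch are installed):

```python
# A sketch of text generation with a (small) generative pre-trained
# transformer. GPT-2 stands in here for far larger production models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Generative AI is changing", max_new_tokens=25)
print(result[0]["generated_text"])
```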

In my next post, I’ll continue to focus on AI and will be talking about the following topics:

  • Responsible AI
  • Labelled data
  • Supervised learning
  • Unsupervised learning
  • Semi-supervised learning
  • Prompt engineering
  • Prompt chaining
  • Retrieval Augmented Generation (RAG)
  • Parameters
  • Fine-tuning