Generative AI terminology guide

The Business Leader's Comprehensive Guide: Unraveling Generative AI from A to Z

Are you noticing people in your network freely peppering their conversations with terms like "generative AI", "large language models", or "deep learning"?

 

If it all sounds like technical jargon, don't panic! GetImpaqt.GPT has you covered. We've compiled a handy guide, an ABC of generative AI, to unravel the mysteries of this groundbreaking technology that is changing the way businesses function. 

Published: April 18th, 2023.
Last update: July 14th, 2023.

GetImpaqt.GPT's list of key terminology is crafted to enable everyone on your team, regardless of their tech prowess, to grasp the tremendous potential of generative AI. Each term is articulated in a manner that describes its effect on your customer relationships as well as your team dynamics.

​

How about a fun peek into the practicality of generative AI?

 

To give you a taste of its real-world application, we employed generative AI in the creation of this article. Subject matter experts provided their insights on critical terms for this glossary. Of course, a human touch was needed to polish it to perfection, but it was a massive time-saver nonetheless.

​

We'll be regularly updating this generative AI glossary to keep you at the forefront of this technological revolution.


AI: Core terms

Algorithm

Using historical information to make a prediction about the future.

  • This term is often used interchangeably with AI.

Artificial intelligence (AI)

A branch of computer science that aims to create machines or software that can mimic human intelligence:

Learning from experiences | Understanding complex content | Recognizing patterns | Making decisions | Carrying out tasks that require human-like understanding.

  • AI can help you by predicting the likely next action, based on what has been done in the past.

Artificial neural network

A computer program that mimics the way human brains process information. Our brains have billions of neurons connected together, and an ANN (also referred to as a "neural network") has lots of tiny processing units working together.

"It's like a team all working to solve the same problem. Every team member does their part, then passes their results on. At the end, you get the answer you need. With humans and computers, it's all about the power of teamwork."
Augmented intelligence

Consider augmented intelligence as a harmonious fusion of human expertise and computer capabilities, enabling the attainment of optimal outcomes.

 

Computers excel at processing vast amounts of data and performing intricate calculations rapidly, while humans possess a remarkable ability to comprehend context, establish correlations amidst incomplete information, and make instinct-driven decisions. Augmented intelligence harnesses the synergy between these two proficiencies.

​

 It’s not about computers replacing people or doing all the work for us. It’s more like hiring a really smart, well-organized assistant. 

Deep learning

A powerful technology within the field of artificial intelligence that enables computers to learn and make intelligent decisions on their own. It mimics the structure and functioning of the human brain, allowing machines to process and understand complex patterns and information.

Think of deep learning as a way for computers to automatically learn and recognize important features and patterns in large amounts of data, just like humans do. By using multiple layers of interconnected "neurons," deep learning models can extract and transform data at different levels of abstraction, gradually gaining a deeper understanding of the information.

Deep learning can be applied in the context of customer support to alleviate the pain of long wait times and enhance customer satisfaction. By implementing a deep learning-powered chatbot, businesses can provide instant and accurate responses to customer inquiries, reducing response times and freeing up human agents to focus on more complex issues, resulting in improved customer experiences and increased operational efficiency.
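
To make the "multiple layers of interconnected neurons" idea concrete, here is a minimal sketch in Python. It assumes the PyTorch library (our choice for illustration; the feature counts and layer sizes are invented) and simply stacks a few layers that each transform the data at a different level of abstraction:

    import torch
    from torch import nn

    # A small "deep" network: each Linear layer is a bank of interconnected neurons,
    # and stacking several of them is what makes the model deep.
    model = nn.Sequential(
        nn.Linear(10, 64),   # layer 1: raw input features -> 64 learned features
        nn.ReLU(),
        nn.Linear(64, 32),   # layer 2: combines those into higher-level features
        nn.ReLU(),
        nn.Linear(32, 2),    # layer 3: final decision, e.g. "auto-answer" vs "route to agent"
    )

    # One customer-support ticket, represented here as 10 invented numeric features.
    ticket = torch.randn(1, 10)
    scores = model(ticket)          # raw scores for each of the two possible actions
    print(scores.softmax(dim=1))    # turn the scores into probabilities

Each layer passes its result on to the next, which is the "teamwork" described under artificial neural network above.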

Generative AI

A subset of artificial intelligence that involves training models to create original content, such as images, text, or music. By learning patterns from vast datasets, these models can generate new and unique content that resembles the training data. It has applications in various fields, from creative endeavors like art and music to data augmentation and personalized content generation.

In a business context, think about how much time is currently spent getting a specific task done (like creating the content, text, and visuals for a marketing email).

As a follow-up question: how much money is attached to that time?

Confronting?

Generator

An AI-based software tool that creates new content from a request or input. It will learn from any supplied training data, then create new information that mimics those patterns and characteristics.

ChatGPT by OpenAI and Bard by Google are well-known examples of text-based generators.

GPT

A type of artificial intelligence language model developed by OpenAI. GPT stands for "generative pre-trained transformer".

GPT models are trained on a massive dataset (billions of data points) of text and code, and they can be used to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
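
For a hands-on feel, here is a minimal sketch in Python using the open-source Hugging Face transformers library and the small, freely downloadable GPT-2 model; both are our choices for illustration (ChatGPT itself is reached through OpenAI's own service, not this snippet):

    from transformers import pipeline

    # Load a small, freely available GPT-style model for illustration.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "Our new running shoe is perfect for beginners because"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(result[0]["generated_text"])   # the prompt plus the model's continuation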

Machine learning

This is how computers can learn new things without being explicitly programmed to do them. Think of a scoring system that scores the probability of what you are about to do next.

For example, when teaching a child to identify shapes, you show them pictures and provide feedback.

As they see more examples and receive feedback, they learn to classify shapes based on unique characteristics. Similarly, machine learning models learn from labeled data to make accurate predictions and decisions. They generalize and apply their knowledge to new examples, just as humans do.
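
As an illustration of that scoring idea, here is a minimal sketch in Python using scikit-learn (our choice of library; the customer data and features are invented for the example):

    from sklearn.linear_model import LogisticRegression

    # Invented historical data: [emails_opened, pages_visited] per customer,
    # and whether they went on to buy (1) or not (0).
    X_history = [[0, 1], [1, 2], [3, 5], [5, 8], [2, 1], [6, 9]]
    y_history = [0, 0, 1, 1, 0, 1]

    model = LogisticRegression()
    model.fit(X_history, y_history)            # learn patterns from past behaviour

    new_customer = [[4, 6]]
    probability = model.predict_proba(new_customer)[0][1]
    print(f"Chance this customer will buy: {probability:.0%}")

Nobody programmed a buying rule here; the model worked it out from the examples.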

Natural language processing (NLP)

A field of computer science that deals with the interaction between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents, as well as categorize and organize the documents themselves.
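
To show what "categorize and organize documents" can look like in practice, here is a minimal sketch using the open-source Hugging Face transformers library (our choice for illustration; the message and category names are invented):

    from transformers import pipeline

    # A general-purpose model that can sort text into categories it was never explicitly trained on.
    classifier = pipeline("zero-shot-classification")

    message = "Hi, my March invoice seems to have been charged twice. Can you look into it?"
    categories = ["billing", "technical issue", "sales enquiry"]

    result = classifier(message, candidate_labels=categories)
    print(result["labels"][0])   # the category the model considers most likely: "billing"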

Transformer

A type of deep learning model that is especially useful for processing language. Transformers are really good at understanding the context of words in a sentence because they create their outputs based on sequential data (like an ongoing conversation), not just individual data points (like a sentence without context).

The name "transformer" comes from the way they can transform input data (like a sentence) into output data (like a translation of the sentence).
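
As a small taste of that "sentence in, translation out" behaviour, here is a minimal sketch using a pre-trained transformer from the open-source Hugging Face transformers library (our choice for illustration):

    from transformers import pipeline

    # A pre-trained transformer that turns English sentences into French ones.
    translator = pipeline("translation_en_to_fr")

    print(translator("Thank you for contacting our support team.")[0]["translation_text"])
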
Download your terminology copy


AI: Training & Learning

Discriminator in GAN

A machine learning model that is used to distinguish between real and fake data. The discriminator is trained on a dataset of real data, and it learns to identify the features that distinguish real data from fake data.

In a business context, this can be used for:

  • Content moderation: Discriminators can be used to identify and remove harmful or inappropriate content from online platforms. For example, discriminators can be used to identify and remove images or videos of child abuse, hate speech, or terrorist propaganda.

  • Fraud detection: Discriminators can be used to detect fraudulent transactions. For example, discriminators can be used to identify credit card transactions that are likely to be fraudulent.

  • Image generation: Discriminators can be used to generate realistic images that can be used for a variety of purposes, such as product visualization, marketing, and advertising.

  • Text generation: Discriminators can be used to generate realistic text that can be used for a variety of purposes, such as customer service chatbots, product descriptions, and marketing copy.

  • Music generation: Discriminators can be used to generate realistic music that can be used for a variety of purposes, such as background music for videos, commercials, and games.

GAN (generative adversarial network)

A type of machine learning model that is used to generate new data. GANs consist of two neural networks: a generator and a discriminator.

The generator is responsible for creating new data, while the discriminator is responsible for distinguishing between real data and data that was created by the generator.
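
To make the generator-versus-discriminator tug-of-war concrete, here is a heavily simplified sketch in Python using PyTorch (our choice for illustration; the "real" data is invented). It shows just one training step for each network; a real GAN repeats this loop many thousands of times:

    import torch
    from torch import nn

    # Generator: turns random noise into a fake data point (here, a single number).
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    # Discriminator: looks at a data point and estimates the probability that it is real.
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=0.001)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=0.001)

    real_data = torch.randn(32, 1) * 0.5 + 3.0   # invented "real" data centred around 3
    fake_data = generator(torch.randn(32, 8))    # fake data made from random noise

    # 1) Train the discriminator: reward it for telling real from fake.
    d_loss = loss_fn(discriminator(real_data), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_data.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: reward it for fooling the discriminator.
    g_loss = loss_fn(discriminator(fake_data), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    print(f"discriminator loss: {d_loss.item():.3f}, generator loss: {g_loss.item():.3f}")

As the two networks keep competing, the generator's output becomes harder and harder to tell apart from the real data.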

Hallucination

When generative AI analyzes the content we give it but comes to a wrong conclusion and produces new content that doesn't correspond to reality, this is called a hallucination.

LLM (large language model)

A large language model (LLM) is a type of artificial intelligence (AI) model that is trained on a massive dataset of text and code. LLMs are typically trained using a technique called self-supervised learning, which means that the model is trained to predict the next word in a sequence of words.

Think of an LLM like a really smart conversation partner that can create human-sounding text based on a given prompt. Some LLMs can answer questions, write essays, create art, generate code, or analyse your advertising data in seconds. The possibilities are endless.

Model

This is a program that's been trained to recognize patterns in data. You could have a model that predicts the weather, translates and localizes texts, identifies pictures of pandas, and so on.

Just like a model race car is a smaller, simpler version of a real race car, an AI model is a mathematical version of a real-world process.

Prompt engineering

A technique for improving the performance of large language models (LLMs) on specific tasks. It involves carefully crafting the prompts that are given to the LLMs, in order to guide them towards the desired output.

Prompts are short pieces of text that are given to LLMs to help them understand the task at hand. They can be as simple as a single sentence, or they can be more complex. The goal of prompt engineering is to write prompts that are clear, concise, and informative, and that will help the LLM produce the desired output.
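
Here is a minimal sketch of the idea in Python. The ask_llm function is a hypothetical stand-in for whichever LLM service you use; the point is the difference between the two prompts:

    def ask_llm(prompt: str) -> str:
        # Hypothetical helper: in a real setup this would call your LLM provider's API.
        return "(the model's answer would appear here)"

    # A vague prompt leaves the model guessing about audience, tone and length.
    vague_prompt = "Write something about our shoes."

    # An engineered prompt spells out the role, the task, the tone and the format.
    engineered_prompt = (
        "You are a marketing copywriter for a running-shoe brand. "
        "Write a three-sentence product description of our new beginner-friendly running shoe, "
        "in a friendly tone, ending with a call to action to visit our website."
    )

    print(ask_llm(engineered_prompt))

Same request, very different odds of getting a usable answer on the first try.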

Sentiment analysis

The process of determining the emotional tone of a piece of text. It is a type of natural language processing (NLP) that is used to identify and extract subjective information from text.

For example, if you're a company that sells shoes, you could use sentiment analysis to track how people are talking about your shoes on social media.

If you see that there's a lot of negative sentiment, you might want to investigate further to see what's causing the negativity. You could also use sentiment analysis to analyze customer feedback to see what people like and dislike about your products.

This information could help you improve your products and services to better meet the needs of your customers.
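
Here is a minimal sketch of sentiment analysis in Python, using the open-source Hugging Face transformers library (our choice for illustration; the reviews are invented):

    from transformers import pipeline

    # A pre-trained sentiment model (a default model is downloaded on first use).
    analyzer = pipeline("sentiment-analysis")

    reviews = [
        "These running shoes are incredibly comfortable, best purchase this year!",
        "The sole started falling apart after two weeks. Very disappointed.",
    ]

    for review in reviews:
        result = analyzer(review)[0]    # e.g. {'label': 'POSITIVE', 'score': 0.99}
        print(result["label"], "-", review)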

Supervised learning

A type of machine learning where the model is trained on labeled data. This means that the data that the model is trained on has already been classified, so the model knows what the correct output should be for a given input.

Think of supervised learning as a teacher-student relationship. The teacher provides the student with questions and the correct answers. The student studies these and, over time, learns to answer similar questions on their own.

Supervised learning is really helpful for training systems that will recognize images, translate languages, or predict likely outcomes. For example, a supervised learning model could be trained to recognize images of dogs by being shown a set of images of dogs with the label "dog" and a set of images of other animals with the label "not a dog." Once the model is trained, it can be used to identify dogs in new images.
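
As a minimal sketch of learning from labeled data (in Python with scikit-learn, our choice of library; the numeric features are invented stand-ins for real image data, which would normally be far richer):

    from sklearn.tree import DecisionTreeClassifier

    # Invented features for each animal photo: [ear_length_cm, weight_kg]
    X_train = [[8, 30], [10, 25], [7, 35], [3, 4], [2, 5], [25, 90]]
    y_train = ["dog", "dog", "dog", "not a dog", "not a dog", "not a dog"]   # the labels

    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)        # the "teacher" phase: questions plus correct answers

    print(model.predict([[9, 28]]))    # -> ['dog']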

Unsupervised learning

A type of machine learning where the model is trained on unlabeled data. This means that the data that the model is trained on does not have any labels, so the model does not know what the correct output should be for a given input.

Think of unsupervised learning as a child learning to play with blocks. The child does not have any instructions, but they learn by trial and error to stack the blocks, build towers, and make shapes.

Unsupervised learning is really helpful for training systems that will cluster data, identify patterns, or reduce dimensionality.

For example, an unsupervised learning model could be trained to cluster images of animals by being shown a set of images of animals without any labels. The model would learn to identify the different clusters of animals, such as dogs, cats, and birds. Once the model is trained, it can be used to identify new images of animals and assign them to the correct cluster.
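
Here is a minimal clustering sketch in Python with scikit-learn (our choice of library; the customer data is invented). Notice that, unlike the supervised example above, no labels are given:

    from sklearn.cluster import KMeans

    # Invented customer data: [orders_per_year, average_order_value], with no labels at all.
    customers = [[2, 20], [3, 25], [2, 22], [20, 150], [22, 160], [19, 155]]

    kmeans = KMeans(n_clusters=2, random_state=0)
    groups = kmeans.fit_predict(customers)   # the model finds the groups on its own

    print(groups)                            # e.g. [0 0 0 1 1 1]: two customer segments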

Validation

In machine learning, validation is a step used to check how well a model is doing during or after the training process. The model is tested on a subset of data (the validation set) that it hasn't seen during training, to ensure it's actually learning and not just memorizing answers.

It's like a pop quiz for AI in the middle of the semester.
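
In Python with scikit-learn (our choice of library), that pop quiz typically looks something like this, using a small built-in example dataset:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Hold back 25% of the data as a validation set the model never sees during training.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    print(f"Score on the validation set: {model.score(X_val, y_val):.0%}")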

ZPD (zone of proximal development)

The zone of proximal development (ZPD) is a concept in education and psychology that refers to the difference between what a learner can do on their own and what they can do with the help of a more experienced person.

In machine learning, the ZPD refers to the difference between what a machine learning model can do on its own and what it can do with the help of a human.

AI: Ethics

Ethical AI Maturity Model

An ethical AI maturity model is a framework that organizations can use to assess their progress in developing and deploying ethical AI systems. These models typically define a set of ethical principles that AI systems should adhere to, and then provide a roadmap for organizations to follow as they work to implement those principles.

Explainable AI (XAI)

Explainable AI (XAI) is a field of research that seeks to make AI systems more explainable to humans. This is important because AI systems are becoming increasingly complex, and it can be difficult for humans to understand how they work and make decisions.

Remember how at school the teacher asked you to share the steps you took to get to the result?

That's what we're asking AI to do. Explainable AI (XAI) should provide insight into what influenced the AI's results, which will help users to interpret (and trust!) its outputs.

Machine learning bias

Machine learning bias is a type of bias that can occur in machine learning models. It occurs when the model is trained on data that is biased, and this bias is then reflected in the model's predictions.

Think of "garbage in, garbage out".

Machine learning bias is just a turbocharged AI version of that. When computers are fed biased information, they make biased decisions.
Download your Ethical AI Maturity model

