The Most Important Basic Generative AI Terms to Know  

Algorithms –  Direct, specific instructions for computers created by a human through coding that tells the computer how to perform a task.

The code follows the algorithmic logic of “if”, “then”, and “else.”  An example of an algorithm would be:         

  • IF the customer orders size 13 shoes,         

  • THEN display the message ‘Sold out, Sasquatch!’;         

  • ELSE ask for a color preference.     

Besides rule-based algorithms, there are machine-learning algorithms used to create AI. In this case, the data and goal is given to the algorithm, which works out for itself how to reach the goal.

There is a popular perception that algorithms provide a more objective, more complete view of reality, but they often will simply reinforce existing inequities, reflecting the bias of creators and the materials used to train them.

Artificial Intelligence (AI) – Basically, AI means “making machines intelligent”, so they can make some decisions on their own without the need for any human interference.

The phrase was coined in a research proposal written in 1956. The current excitement about the field was kick-started in 2012 by an online contest called the ImageNet Challenge, in which the goal was getting computers to recognize and label images automatically.

Big Data – This is data that’s too big to fit on a single server.

Typically, it is unstructured and fast-moving. In contrast, small data fits on a single server, is already in structured form (rows and columns), and changes relatively infrequently. If you are working in Excel, you are doing small data. Two NASA researchers (Michael Cox and David Ellsworth) first wrote in a 1997 paper that when there’s too much information to fit into memory or local hard disks, “We call this the problem of big data.”

Generative AI – Artificial intelligence that can produce content (text, images, audio, video, etc.) such as ChatGPT.  

It operates similarly to the “type ahead” feature on smartphones that makes next-word suggestions. Gen AI is based on the particular content it was trained on (exposed to).

GPT – The “GPT” in ChatGPT stands for Generative Pre-Trained Transformer. 

Hallucinations – when an LLM provides responses that are inaccurate responses or not based on facts. 

Hallucination – the AI saying things that sound plausible and authoritative but simply aren’t so.

Large Language Models (LLMs) – AI trained on billions of language uses, images and other data. It can predict the next word or pixel in a pattern based on the user’s request. ChatGPT and Google Bard are LLMs.

The kinds of text LLMs can parse out:

  • Grammar and language structure.

  • How a word is used in language (noun, verb, etc.).

  • Word meaning and context (ex: The word green may mean a color when it is closely related to a word like “paint,” “art,” or “grass.”

  • Proper names (Microsoft, Bill Clinton, Shakira, Cincinnati).

  • Emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).

Machine learning (ML) – AI that spots patterns and improves on its own. 

An example would be algorithms recommending ads for users, which become more tailored the longer it observes the users‘ habits (someone’s clicks, likes, time spent, etc.). 

Data scientists use ML to make predictions by combining ML with other disciplines (like big data analytics and cloud computing) to solve real-world problems. However, while this process can uncover correlations between data, it doesn’t reveal causation. It is also important to note that the results provide probabilities, not absolutes.

Neural Network – In this type of machine learning computers learn a task by analyzing training examples. It is modeled loosely on the human brain—the interwoven tangle of neurons that process data in humans and find complex associations.

Neural networks were first proposed in 1944 by two University of Chicago researchers (Warren McCullough and Walter Pitts) who moved to MIT in 1952 as founding members of what’s sometimes referred to as the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969. The technique then enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips. 

Open Source AI – When the source code of an AI is available to the public, it can be used, modified, and improved by anyone. Closed AI means access to the code is tightly controlled by the company that produced it.

The closed model gives users greater certainty as to what they are getting, but open source allows for more innovation. Open-source AI would include Stable Diffusion, Hugging Face, and Llama (created by Meta). Closed Source AI would include ChatGPT and Google’s Bard.

Prompts – Instructions for an AI. It is the main way to steer the AI in a particular direction, indicate intent, and offer context. It can be time-consuming if the task is complex.  

Prompt Engineer – An advanced user of AI models, a prompt engineer doesn’t possess special technical skills but is able to give clear instructions so the AI returns results that most closely match expectations.

This skill can be compared to a psychologist who is working with a client who needs help expressing what they know. 

Red Teaming  –  Testing an AI by trying to force it to act in unintended or undesirable ways, thus uncovering potential harms.

The term comes from a military practice of taking on the role of an attacker to devise strategies.  

While some of these definitions are a bit of an oversimplification, they will point the beginner in the right direction. -Stephen Goforth