A basic explanation of the new AI bot called ChatGPT

OpenAI, the San Francisco AI research company behind the text-to-image creation tool DALL-E, has created a chatbot that responds to user-submitted queries. The model was trained using reinforcement learning from human feedback. ChatGPT (GPT stands for “generative pre-trained transformer”) shows how far artificial intelligence, particularly AI text generation, has come. Because it remembers what you've written or said, the interaction has a dynamic, conversational feel. That sets it apart from most other chatbots, which handle each query in isolation. It could be the basis for a medical chatbot that answers patient questions about specific symptoms, or serve as a personalized therapy bot.

Give the software a prompt and it produces articles, even poetry. It writes code, too, explains that code, and corrects errors in it. Its predecessor is GPT-3, and both are generative models: they are trained to predict the next word in a sentence, much like a top-notch autocomplete tool. What separates ChatGPT from GPT-3 is that ChatGPT goes beyond predicting the next word to also follow the user's instructions. Training on examples of human conversations makes interacting with the bot feel more natural.
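
To make the “autocomplete” idea concrete, here is a minimal sketch of next-word prediction in Python using Hugging Face's freely downloadable GPT-2, an older and much smaller relative of the models behind ChatGPT. ChatGPT itself cannot be run locally, so the model, prompt, and settings below are illustrative assumptions, not ChatGPT's actual internals.

# A minimal sketch of "predict the next word" with Hugging Face's GPT-2.
# This only illustrates the underlying idea; it is not ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The best thing about artificial intelligence is"
completions = generator(
    prompt,
    max_new_tokens=20,       # keep the continuations short
    num_return_sequences=3,  # sample three different continuations
    do_sample=True,
)

# The model simply keeps choosing likely next words, which is why the
# output reads like a very good autocomplete.
for completion in completions:
    print(completion["generated_text"])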

ChatGPT is being used to rewrite literary classics, compose a biblical song about ducks, write a sonnet about string cheese, explain scientific concepts, describe how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, and write a story about a fictitious Ohio-Indiana war. The New York Times gushes, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.” Some tech observers predict it could one day replace Google.

But the software has limitations. It knows little about 2022 because, unlike Google, it doesn't “crawl” the web in search of new information. It can spit out “plausible-sounding but incorrect answers.” And while its creators have taken steps to avoid the racist, sexist and offensive outputs that have popped out of other chatbots, there are likely to be some hiccups in that process.

Some warn about its potential abuse—blurring the lines between original writing and plagiarism.

Mike Sharples, a U.K. professor, says such technology “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” 

Ars Technica reporter Benj Edwards writes:

"[I]t’s possible that OpenAI invented history’s most convincing, knowledgeable and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history." 

Decide for yourself whether we’re on the cusp of new creativity or massive fraud. Create a free account using your email here. Or try the Twitter bot if you’d prefer not to sign up.

Articles about ChatGPT: 

New AI chatbot is scary good – Axios

OpenAI’s new chatbot ChatGPT could be a game-changer for businesses – Tech Monitor  

Google is done. Here’s why OpenAI’s ChatGPT Will Be a Game Changer – Luca Petriconi

The College Essay Is Dead – The Atlantic

The Brilliance and Weirdness of ChatGPT – New York Times

ChatGPT Is Dumber Than You Think – The Atlantic

The Lovelace Effect: AI-generated texts should lead us to re-value creativity in academic writing – London School of Economics

Hugging Face GPT-2 Output Detector

AI is finally good at stuff, and that’s a problem – Vox

ChatGPT: How Does It Work Internally? – Toward AI

Your Creativity Won’t Save Your Job From AI – The Atlantic

Could your public photos be used in an AI deepfake? – Ars Technica

API access is expected in early 2023, so companies can create products based on the software. Rumor has it that OpenAI will introduce an even more capable model, GPT-4, later in the year.
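
There is no public ChatGPT API yet, but a product built on it would presumably look much like code that already calls OpenAI's GPT-3 endpoint through the company's openai Python library. Here is a rough sketch under that assumption; the model name, prompt, and settings are GPT-3 placeholders, not anything OpenAI has announced for ChatGPT.

# A rough sketch of building on OpenAI's existing text-completion API with the
# openai Python library. This calls GPT-3 ("text-davinci-003"); the eventual
# ChatGPT API may look different.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep the secret key out of source code

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain how to remove a peanut butter sandwich from a VCR "
           "in the style of the King James Bible.",
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())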