Plagiarism & False Data in Academic Papers

"There are countless credible accusations of (academic) misconduct that go uncorrected; I myself have published articles challenging the integrity of hundreds of papers. The majority of them have not been retracted, corrected or even remarked upon. I would wager that most reasonably large universities (my own included) have faculty members who are known to have plagiarized, fabricated, falsified, claimed undue credit, hidden financial conflicts of interest or misbehaved in numerous other ways and who have seemingly gone unpunished."

New York University professor Charles Seife writing in the New York Times

12 Articles on Cheating with AI & AI Detectors

The Trouble With AI Writing Detection – Inside Higher Ed

College application season is here. So is the struggle to find out if AI wrote students’ essays – Cal Matters 

If using ChatGPT to write essays becomes widespread, those students who elect not to use it, who prefer to do the work themselves, may suffer a penalty for doing so. – Chronicle of Higher Ed

Results of a new survey flip the early narrative on ChatGPT—that students would rush to use it to cheat on assignments and that teachers would scramble to keep up—on its head. Half of students, ages 12-18, said they have never used ChatGPT. – Ed Week

OpenAI debates when to release its AI-generated image detector – Tech Crunch

Universities Rethink Using AI Writing Detectors to Vet Students’ Work – Bloomberg 

Identifying AI’s flaws motivates students and helps them build confidence, which can discourage cheating. Pointing out where it still really messes up is very powerful for empowering students to see their own strengths as human thinkers. – Chronicle of Higher Ed

Students cheat out of desperation, so one professor gives multi-stage assignments that require students to submit papers at various points so the professor can track their progress. – Yahoo News

The AI Detection Arms Race Is On. College students are developing the weapons, quickly building tools that identify AI-generated text—and tools to evade detection. – Wired

Simply leaving it up to students to decide whether they’re going to do the work, without further comment or intervention or negative sanction from me, is a failure of pedagogy. – Chronicle of Higher Ed

AI detectors have low efficiency, and simple modifications can allow even the most robust detectors to be easily bypassed. – Science Direct 

Suspicion, Cheating & Bans: AI Hits America's Schools (podcast) – New York Times

8 good quotes about students cheating with AI   

Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate? Chronicle of Higher Ed 

What about students cheating by using ChatGPT instead of doing their own writing? The thing about technology is that it is interfering with the very weak proxies we have of measuring student learning, namely homework and tests. (Generative AI) is just another reminder that it’s actually really hard to know how much someone has learned something, and especially if we’re not talking to them directly but relying on some scaled up automated or nearly automated system to measure it for us. MathBabe Cathy O’Neil

Sometimes, though, professors who felt they had pretty strong evidence of AI usage were met with excuses, avoidance, or denial. Bridget Robinson-Riegler, a psychology professor at Augsburg University, in Minnesota, caught some obvious cheating (one student forgot to take out a reference ChatGPT had made to itself) and gave those students zeros. But she also found herself having to give passing grades to others even though she was pretty sure their work had been generated by AI (the writings were almost identical to each other). Chronicle of Higher Ed 

As professors of educational psychology and educational technology, we’ve found that the main reason students cheat is their academic motivation. The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not on the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat. The Conversation

Lorie Paldino, an assistant professor of English and digital communications at the University of Saint Mary, in Leavenworth, Kan., described how she asked one student, who had submitted an argument-based research essay, to bring to her the printed and annotated articles they used for research, along with the bibliography, outline, and other supporting work. Paldino then explained to the student why the essay fell short: It was formulaic, inaccurate, and lacked necessary detail. The professor concluded by showing the student the Turnitin results, and the student admitted to using AI. Chronicle of Higher Ed

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade. The Conversation

A common finding (from our survey): Professors realized they needed to get on top of the issue more quickly. It wasn’t enough to wait until problems arose, some wrote, or to simply add an AI policy to their syllabus. They had to talk through scenarios with their students. Chronicle of Higher Ed 

Matthew Swagler, an assistant professor of history at Connecticut College, had instituted a policy that students could use a large language model for assistance, but only if they cited its usage. But that wasn’t sufficient to prevent misuse, he realized, nor prevent confusion among students about what was acceptable. He initiated a class discussion, which was beneficial: “It became clear that the line between which AI is acceptable and which is not is very blurry, because AI is being integrated into so many apps and programs we use.”  Chronicle of Higher Ed

A basic explanation of the new AI bot called ChatGPT

U.S.-based AI research company OpenAI, the San Francisco outfit behind the text-to-image creation tool DALL-E, has created a chatbot that responds to user-submitted queries. The model was trained using reinforcement learning from human feedback. ChatGPT (GPT stands for “generative pre-trained transformer”) shows how far artificial intelligence—particularly AI text generators—has come. Because it remembers what you've written or said, the interaction has a dynamic, conversational feel. That makes it different from most earlier chatbots, which treat each query in isolation. It could be the basis for a medical chatbot that answers patient questions about very specific symptoms, or serve as a personalized therapy bot.

Give the software a prompt — and it creates articles, even poetry. It writes code, too, and explains the code or corrects errors in it. GPT-3 came before it. Both are generative models: they are trained to predict the next word in a sentence, like a top-notch autocompletion tool. What separates ChatGPT from GPT-3 is that ChatGPT goes beyond predicting the next word to also follow the user's instructions. Training on examples of human conversations has made the experience with the bot feel more familiar to users.
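The "predict the next word" idea can be sketched with a toy bigram model. This is a deliberately simplified stand-in for the neural networks described above (the corpus and function names here are invented for illustration), but the core objective is the same: given the words so far, guess the most likely next word.

```python
# Toy next-word predictor: count which word most often follows each word
# in a tiny corpus, then predict by picking the most frequent follower.
# Real models like GPT-3 learn these statistics with neural networks
# trained on vast datasets, but the prediction objective is analogous.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate . "
    "the cat chased the dog ."
).split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat"
```

Chaining such predictions word by word is, at a very high level, how an autocomplete-style generator produces whole sentences; instruction-following systems like ChatGPT add further training on human feedback on top of that.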

ChatGPT is being used to rewrite literary classics, write a biblical song about ducks, compose a sonnet about string cheese, explain scientific concepts, explain how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, and write a story about a fictitious Ohio-Indiana war. The New York Times gushes, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.” Some tech observers predict it could one day replace Google.

But the software has limitations. It knows little about 2022 because it doesn’t “crawl” the web for new information the way Google does, and it can spit out “plausible-sounding but incorrect answers.” And while its creators have taken steps to avoid the racist, sexist and offensive outputs that have popped out of other chatbots, there are likely to be some hiccups in that process.

Some warn about its potential abuse—blurring the lines between original writing and plagiarism.

Mike Sharples, a U.K. professor, says such technology “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” 

Ars Technica reporter Benj Edwards writes:

"[I]t’s possible that OpenAI invented history’s most convincing, knowledgeable and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history." 

Decide for yourself whether we’re on the cusp of new creativity or massive fraud. Create a free account using your email here. Or try the Twitter bot if you’d prefer not to sign up.

Articles about ChatGPT: 

New AI chatbot is scary good – Axios

OpenAI’s new chatbot ChatGPT could be a game-changer for businesses – Tech Monitor  

Google is done. Here’s why OpenAI’s ChatGPT Will Be a Game Changer – Luca Petriconi

The College Essay Is Dead – The Atlantic

The Brilliance and Weirdness of ChatGPT – New York Times

ChatGPT Is Dumber Than You Think - The Atlantic

The Lovelace Effect – AI generated texts should lead us to re-value creativity in academic writing - London School of Economics

Hugging Face GPT-2 Output Detector

AI is finally good at stuff, and that’s a problem - Vox

ChatGPT: How Does It Work Internally? - Toward AI

Your Creativity Won’t Save Your Job From AI - The Atlantic

Could your public photos be used in an AI deepfake? - Ars Technica

API access is expected in early 2023, so companies can build products on top of the software. Rumor has it that OpenAI will introduce an even more capable model, GPT-4, later in the year.

Why are some people compelled to cheat?

The fear of losing something appears to be a greater motivator to cheat than the lure of a gain.

Kerry Ritchie, who researches how to improve teaching at the University of Guelph in Ontario, Canada, says the majority of academic cheating is committed by high-achieving students (60% of offenders earned grades of 80% or higher). While cheating in education is not the same as cheating during play, one similarity is that those at the top feel pressure to maintain their status. Players are more likely to behave dishonestly if they can say that it benefits other people as well as themselves.

William Park writing in BBC Future

Few people can detect a liar

In daily life, without the particular pressures of politics, people find it hard to spot liars. Tim Levine of the University of Alabama, Birmingham, has spent decades running tests that allow participants (apparently unobserved) to cheat. He then asks them on camera if they have played fair. He asks others to look at the recordings and decide who is being forthright about cheating and who is covering it up. In 300 such tests people got it wrong about half of the time, no better than a random coin toss. Few people can detect a liar. Even those whose job is to conduct interviews to dig out hidden truths, such as police officers or intelligence agents, are no better than ordinary folk.

The Economist