Admitting You are Wrong

Cognitive dissonance is what we feel when the self-concept — I’m smart, I’m kind, I’m convinced this belief is true — is threatened by evidence that we did something that wasn’t smart, that we did something that hurt another person, that the belief isn’t true. To reduce dissonance, we have to modify the self-concept or accept the evidence. Guess which route people prefer?

We cling to old ways of doing things, even when new ways are better and healthier and smarter. We cling to self-defeating beliefs long past their shelf life. And we make our partners, co-workers, parents and kids really, really mad at us.

Carol Tavris, writing in the New York Times; co-author of Mistakes Were Made (But Not by Me)

13 quotes worth reading about Generative AI policies & bans

Eaton, the academic-integrity expert, cautions against trying to ban the use of ChatGPT entirely. That, she says, “is not only futile but probably ultimately irresponsible.” Chronicle of Higher Ed

I would compare this to using steroids in baseball. If you don’t ban steroids in baseball, then the reality is every player has to use them. Even worse than that, if you ban them but don’t enforce it, what you actually do is create a situation where you weed out all of the honest players. Chronicle of Higher Ed 

A study surveyed 372 students seeking admission to college for fall 2023 and found that 39% of them would not consider attending a college that has banned ChatGPT or other AI tools. ZDNET

Several leading academic journals and publishers updated their submission guidelines to explicitly ban researchers from listing ChatGPT as a co-author, or using text copied from a ChatGPT response. Some professors have criticized these bans as shortsightedly resistant to an inevitable technological change. Chronicle of Higher Ed

Blocking access to ChatGPT at school won’t matter, at least for any student with access to a tablet or laptop outside of school. Ed Week

ChatGPT and other AI writers have been banned in educational institutions around the world, from high schools across America and Australia to universities in France and India, with some university professors having caught their students using ChatGPT to write their entire assignments. Tech Radar

A number of universities say they are planning to expel students who are caught using the software. Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, said many universities were “panicking”. The Guardian

Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network. Others soon joined. By January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia. MIT Tech Review

The New York City Department of Education has banned ChatGPT in its schools, as has Sciences Po, the university in Paris, citing concerns it may foster rampant plagiarism and undermine learning. Washington Post

Teachers at Oceana High School in Pacifica, California, have sent out messages to students warning against using AI-writing software for assignments. Mashable

Washington University in St. Louis and the University of Vermont in Burlington are among the institutions that have amended their academic integrity policies to cover the use of AI tools like ChatGPT. Stanford Daily

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments            

22 examples of teaching with AI                                                           

27 thoughts on teaching with AI   

13 thoughts on the problems of teaching with AI                                  

22 quotes about cheating with AI & plagiarism detection

Students should know that this technology is rapidly evolving: future detectors may be able to retroactively identify auto-generated prose from the past. No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. Inside Higher Ed

Alex Lawrence, professor at Weber State University, described it as “the greatest cheating tool ever invented.” Wall Street Journal

Some plagiarism detection and learning management systems have adapted surveillance techniques, but that leaves systems designed to ensure original work “locked in an arms race” with systems designed to cheat. Inside Higher Ed

Popular essay submission portal Turnitin is developing its own detector, and Hive claims that its service is more accurate than others on the market, including OpenAI’s very own, and some independent testers have agreed. Tech Radar 

While faculty members will likely spend some time trying to identify a boundary line between AI assistance and AI cheating with respect to student writing, that may not be the best use of their time. That path leads to trying to micromanage students’ use of these models. Inside Higher Ed

You can have tools like Quillbot (that can) paraphrase the essays ChatGPT gives you so it doesn't look too obvious. Mashable

“If I’m a very intelligent AI and I want to bypass your detection, I could insert typos into my writing on purpose,” said Diyi Yang, assistant professor of computer science at Stanford University. Inside Higher Ed

But what about the cheaters, the students who let a chatbot do their writing for them? I say, who cares? In my normal class of about 28 students, I encounter one every few semesters whom I suspect of plagiarism. Let’s now say that the temptation to use chatbots for nefarious ends increases the number of cheaters to an (unrealistic) 20 percent. It makes no sense to me that I should deprive 22 students who can richly benefit from having to write papers only to prevent the other six from cheating (some of whom might have cheated even without the help of a chatbot). Washington Post 

If a teacher’s concern is that students will “cheat” with ChatGPT, the answer is to give assignments that are personal and focused on thinking. We don’t have to teach students to follow a writing algorithm any more; there’s an app for that. Forbes

What’s to stop a student from getting ChatGPT to write their work, then tweak it slightly until it no longer gets flagged by a classifier? This does take some effort, but a student may still find this preferable to writing an entire assignment themselves. Tech Radar 

If the concern is that students could cheat, it’s worth remembering that they could cheat six months ago and 60 years ago. Students taking a brand-new exam could already get answers to test questions in minutes from services like Chegg. Students could already plagiarize — or pay someone else to write their entire paper. With the entrance of ChatGPT, “what’s changed is the ease and the scope. Chronicle of Higher Ed

If ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot. MIT Tech Review

Professors can create conditions in which cheating is difficult, giving closed-book, closed-note, closed-internet exams in a controlled environment. They can create assignments in which cheating is difficult, by asking students to draw on what was said in class and to reflect on their own learning. They can make cheating less relevant, by letting students collaborate and use any resource at their disposal. Or they can diminish the forces that make cheating appealing: They can reduce pressure by having more-frequent, lower-stakes assessments. Chronicle of Higher Ed

Unlike accusations of plagiarism, AI cheating has no source document to reference as proof. “This leaves the door open for teacher bias to creep in.” Washington Post

Despite their positive attitude towards AI, many students (in a survey say they) feel anxious and lack clear guidance on how to use AI in the learning environments they are in. It is simply difficult to know where the boundary for cheating lies. Neuroscience News

While the AI-detection feature could be helpful in the immediate term, it could also lead to a surge in academic-misconduct cases, Eaton said. Colleges will have to figure out what to do with those reports at a moment when professors have yet to find consensus on how ChatGPT should be dealt with in their classrooms. Chronicle of Higher Ed

“Do you want to go to war with your students over AI tools?” said Ian Linkletter, who serves as emerging technology and open-education librarian at the British Columbia Institute of Technology. “Or do you want to give them clear guidance on what is and isn’t okay, and teach them how to use the tools in an ethical manner?” Washington Post

Even if detection software gets better at detecting AI generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a one or 2% rate of false positives will negatively impact dozens or hundreds of innocent students.” Washington Post 

On many campuses, high-course-load contingent faculty and graduate students bear much of the responsibility for the kinds of large-enrollment, introductory-level, general-education courses where cheating is rampant. How can large or even mid-sized colleges withstand the flood of nonsense quasi-plagiarism when academic-integrity first responders are so overburdened and undercompensated? Chronicle of Higher Ed

Bruce Schneier, a public interest technologist and lecturer at Harvard University’s Kennedy School of Government, said any attempt to crack down on the use of AI chatbots in classrooms is misguided, and that history proves educators must adapt to technology. Washington Post

Harsh punishments for cheating might preserve the status quo, but colleges generally give cheaters a slap on the wrist, and that won’t change. Unmonitored academic work will become optional, or a farce. The only thing that will really matter will be exams. And unless the exams are in-person, they’ll be a farce, too. Chronicle of Higher Ed

“I think we should just get used to the fact that we won’t be able to reliably tell if a document is either written by AI — or partially written by AI, or edited by AI — or by humans,” computer science professor Soheil Feizi said. “We should adapt our education system to not police the use of the AI models, but basically embrace it to help students to use it and learn from it.” Washington Post


The paradox of sad music

This is the paradox of sad music: We generally don’t enjoy being sad in real life, but we do enjoy art that makes us feel that way. 

Maybe, because sadness is such an intense emotion, its presence can prompt a positive empathic reaction: Feeling someone’s sadness can move you in some prosocial way.

“You’re feeling just alone, you feel isolated,” Dr. Joshua Knobe (an experimental philosopher and psychologist at Yale University) said. “And then there’s this experience where you listen to some music, or you pick up a book, and you feel like you’re not so alone.”

Read more from Oliver Whang in the New York Times 

New ways of being alive

(At the age of 35) cancer has kicked down the walls of my life. I cannot be certain I will walk my son to his elementary school someday or subject his love interests to cheerful scrutiny. I struggle to buy books for academic projects I fear I can’t finish for a perfect job I may be unable to keep. I have surrendered my favorite manifestoes about having it all, managing work-life balance and maximizing my potential. I cannot help but remind my best friend that if my husband remarries everyone will need to simmer down on talking about how special I was in front of her. (And then I go on and on about how this is an impossible task given my many delightful qualities. Let’s list them. …) Cancer requires that I stumble around in the debris of dreams I thought I was entitled to and plans I didn’t realize I had made.

But cancer has also ushered in new ways of being alive. Even when I am this distant from Canadian family and friends, everything feels as if it is painted in bright colors. In my vulnerability, I am seeing my world without the Instagrammed filter of breezy certainties and perfectible moments. I can’t help noticing the brittleness of the walls that keep most people fed, sheltered and whole. I find myself returning to the same thoughts again and again: Life is so beautiful. Life is so hard. 

Kate Bowler writing in the New York Times

An Apology

A German wholesaler decided to offer a brief apology and a rebate to customers who had posted more than 600 complaints about the firm on eBay over defective products and late deliveries. Half were sent an apology and half were offered a small cash rebate. The result? Nearly half of those who received the apology removed their poor rating of the company. Only one in five of the customers given the money did so.

Stephen Goforth

Possibility and Despair

Possibility … is to human existence what vowels are to speech. To live in pure possibility is like an infant’s utterance of vowel sounds, which fail to express something that is definite and clear. Vowels alone do not make for articulate speech, although without them nothing can be said at all. Similarly, “if a human existence is brought to the point where it lacks possibility, then it is in despair and is in deeper every moment it lacks possibility.” One cannot breathe without oxygen, but it is also impossible to breathe pure oxygen. Possibility is a kind of spiritual oxygen that a person cannot live without, but one cannot live on pure possibility either.

C. Stephen Evans, Kierkegaard: An Introduction

Think of AI as a tool

The most pragmatic position is to think of A.I. as a tool, not a creature.

Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams.  

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

A program like OpenAI’s GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes much more data, mashed together using statistics. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures.

Jaron Lanier writing in The New Yorker

My Life with One Arm

Two months to the day after my accident, I went to see a therapist for the first time in my life. I didn’t know where to begin. We discussed loss and resilience and the will to live and adapt. But when I started talking about the outpouring of love and support that I had received since my accident, I began weeping uncontrollably. I realized that for the first time in my life, I was truly letting love into my heart. Losing an arm has connected me to others in a way I have never felt. Yes, I have suffered a tremendous loss, but in a way, I feel as if I have gained much more.

Miles O’Brien, writing in New York Magazine

The Vacuum Cleaner Method

I know a man who is a tremendous asset to his organization, not because of any extraordinary ability, but because he invariably demonstrates a triumphant thought pattern. Perhaps his associates view a proposition pessimistically, so he employs what he calls the “vacuum cleaner method.” That is, by a series of questions he “sucks the dust” out of his associates’ minds; he draws out their negative attitudes. Then quietly he suggests positive ideas concerning the proposition until a new set of attitudes gives them a new concept of the facts.

They often comment upon how different the facts appear when this man “goes to work on them.” It’s the confidence attitude that makes the difference. This doesn’t rule out objective appraisal of the facts. The inferiority complex victim sees all facts through discolored attitudes. The secret of correction is simply to gain a normal view, and that is always slanted on the positive side.

Norman Vincent Peale, The Power of Positive Thinking

The Power of Lonely

The nice thing about medicine is it comes with instructions. Not so with solitude, which may be tremendously good for one’s health when taken in the right doses, but is about as user-friendly as an unmarked white pill. Too much solitude is unequivocally harmful and broadly debilitating, decades of research show. But one person’s “too much” might be someone else’s “just enough,” and eyeballing the difference with any precision is next to impossible.

People should be mindfully setting aside chunks of every day when they are not engaged in so-called social snacking activities like texting, g-chatting, and talking on the phone. For teenagers, it may help to understand that feeling a little lonely at times may simply be the price of forging a clearer identity.

“People make this error, thinking that being alone means being lonely, and not being alone means being with other people,” John Cacioppo of the University of Chicago, said. “You need to be able to recharge on your own sometimes. Part of being able to connect is being available to other people, and no one can do that without a break.”

Leon Neyfakh, writing in the Boston Globe

When to Ignore The Customers

In 2009, Walmart lost a tremendous amount of money after launching “an uncluttering project” and asking customers whether they’d like “Walmart aisles to be less cluttered.” An obvious “yes” cost the corporation a billion dollars: sure, customers were happy to see clean aisles, but sales quickly went down.

There are good examples of listening to what the customer wants leading to the wrong conclusions: take New Coke or the 1992 Chevy Caprice, for example. Sometimes running focus groups, testing in the usability lab, or facilitating interviews yields misleading results. Sometimes ignoring what your customers say is the best course of action.

Kristian Mikhel writing in the UX Collective

Fake scientific papers are alarmingly common

When neuropsychologist Bernhard Sabel put his new fake-paper detector to work, he was “shocked” by what it found. After screening some 5000 papers, he estimates up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%.

Jeffrey Brainard writing in Science Magazine