Businesses Blaming the AI

Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place. 

Ted Chiang writing in The New Yorker

A Work Performance Predictor

Last summer, Gallup released the results of a survey that asked employees whether they had a "best friend" at work. Only 2 out of 10 employees said they "strongly agree" that they do in fact have one.

The idea is "controversial," according to Gallup. "But one stubborn fact about this element of engagement cannot be denied: It predicts performance." 

Specifically, Gallup says answering yes to the "best friend at work" question can help predict other specific areas of employee engagement, including whether employees reported liking their coworkers in general, being recognized for success, and even just whether they "had a lot of enjoyment" at work on a given day.

Bill Murphy writing in his newsletter Understandably

14 quotes worth reading about AI use in academic papers

Science, Elsevier, and Nature were quick to react, updating their respective editorial and publishing policies, stating unconditionally that ChatGPT can’t be listed as an author on an academic paper. It is very hard to define exactly how GPT is used in a particular study as some publishers demand, the same way it is near impossible for authors to detail how they used Google as part of their research. Scholarly Kitchen

An app I have found useful every day is Perplexity. I am most taken with the auto-embedded citations of sources in the response, much like we do in research papers. This is most useful for deeper digging into topics. Inside Higher Ed 

Tools such as Grammarly, Writefull, and even Microsoft grammar checker are relied upon heavily by authors. If an author is using GPT for language purposes, why would that need to be declared and other tools not? What if authors get their ideas for new research from ChatGPT or have GPT analyze their results but write it up in their own words; might that be ok because the author is technically doing the writing? I believe that self-respecting researchers won’t use GPT as a primary source the same way they don’t use Wikipedia in that manner. However, they can use it in a myriad of other ways including brainstorming, sentence construction, data crunching, and more. The onus of responsibility for the veracity of information still falls on the researcher, but that doesn’t mean we should rush to ban it because some might use it as a way to cut corners. Scholarly Kitchen

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”. What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT. The Guardian

An application that holds great potential to those of us in higher ed is ChatPDF! It is what you might imagine, a tool that allows you to load a PDF of up to 120 pages in length. You can then apply the now-familiar ChatGPT analysis approach to the document itself. Ask for a summary. Dig into specifics. This will be a useful tool for reviewing research and efficiently understanding complex rulings and other legal documents. Inside Higher Ed

If you’ve used ChatGPT or other AI tools in your research, (for APA) describe (in your academic paper) how you used the tool in your Method section or in a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any portion of the relevant text that was generated in response. You may also put the full text of long responses from ChatGPT in an appendix of your paper or in online supplemental materials, so readers have access to the exact text that was generated. If you create appendices or supplemental materials, remember that each should be called out at least once in the body of your APA Style paper. APA Style 

Outside of the most empirical subjects, the determinants of academic status will be uniquely human — networking and sheer charisma — making it a great time to reread Dale Carnegie’s How to Win Friends and Influence People. Chronicle of Higher Ed 

The US journal Science announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author. Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author. The Guardian

A chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas — questions to which nobody yet has solid answers. Our suspicion here is that ChatGPT is particularly strong at taking a set of external texts and connecting them (the essence of a research idea), or taking easily identifiable sections from one document and adjusting them (an example is the data summary — an easily identifiable “text chunk” in most research studies). A relative weakness of the platform became apparent when the task was more complex, when there were too many stages to the conceptual process. The Conversation

Already some researchers are using the technology. Among only the small sample of my work colleagues, I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture. Chronicle of Higher Ed 

For most professors, writing — even bad first drafts or outlines — requires our labor (and sometimes strain) to develop an original thought. If the goal is to write a paper that introduces boundary-breaking new ideas, AI tools might reduce some of the intellectual effort needed to make that happen. Some will see that as a smart use of time, not evidence of intellectual laziness. Chronicle of Higher Ed

The quality of scientific research will erode if academic publishers can't find ways to detect fake AI-generated images in papers. In the best-case scenario, this form of academic fraud will be limited to just paper mill schemes that don't receive much attention anyway. In the worst-case scenario, it will impact even the most reputable journals and scientists with good intentions will waste time and money chasing false ideas they believe to be true. The Register 

Many journals’ new policies require that authors disclose use of text-generating tools and ban listing a large language model such as ChatGPT as a co-author, to underscore the human author’s responsibility for ensuring the text’s accuracy. That is the case for Nature and all Springer Nature journals, the JAMA Network, and groups that advise on best practices in publishing, such as the Committee on Publication Ethics and the World Association of Medical Editors. Science

Just as publishers begin to get a grip on manual image manipulation, another threat is emerging. Some researchers may be tempted to use generative AI models to create brand-new fake data rather than altering existing photos and scans. In fact, there is evidence to suggest that sham scientists may be doing this already. A spokesperson for Uncle Sam's defense research agency confirmed it has spotted fake medical images in published science papers that appear to be generated using AI. The Register

Also:

21 quotes about cheating with AI & plagiarism detection

13 quotes worth reading about Generative AI policies & bans

20 quotes worth reading about students using AI

27 quotes about AI & writing assignments

27 thoughts on teaching with AI

22 quotes about cheating with AI & plagiarism detection

13 quotes worth reading about AI’s impact on College Administrators & Faculty

17 articles about AI & Academic Scholarship

13 thoughts on the problems of teaching with AI

There is a reason why educational video games are not as engaging as regular video games. There is a reason why AI-generated educational videos will never be as engaging as regular videos. Brenda Laurel pointed to the ‘chocolate-covered broccoli’ problem over 20 years ago … her point still stands. EdSurge

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Jenna Lyle, a spokesperson for the New York City Department of Education. Mashable

This tech is being primarily pitched as a money-saving device—so it will be taken up by school authorities that are looking to save money. As soon as a cash-strapped administrator has decided that they’re happy to let technology drive a whole lesson, then they no longer need a highly-paid professional teacher in the room—they just need someone to trouble-shoot any glitches and keep an eye on the students. EdSurge 

Some commentators are urging teachers to introduce ChatGPT into the curriculum as early as possible (a valuable revenue stream and data source). Students, they argue, must begin to develop new skills such as prompt engineering. What these (often well-intentioned) techno-enthusiasts forget is that they have decades of writing solo under their belts. Just as drivers who turn the wheel over to flawed autopilot systems surrender their judgment to an over-hyped technology, so a future generation raised on language models could end up, in effect, never learning to drive. Public Books

Some professors have leapt out front, producing newsletters, creating explainer videos, and crowdsourcing resources and classroom policies. The one thing that academics can’t afford to do, teaching and tech experts say, is ignore what’s happening. Sooner or later, the technology will catch up with them, whether they encounter a student at the end of the semester who may have used it inappropriately, or realize that it’s shaping their discipline and their students’ futures in unstoppable ways. Chronicle of Higher Ed

(There is a) notion that college students (can) learn to write by using chatbots to generate a synthetic first draft, which they afterwards revise, overlooks the fundamentals of a complex process. Since text generators do a good job with syntax, but suffer from simplistic, derivative, or inaccurate content, requiring students to work from this shallow foundation is hardly the best way to empower their thinking, hone their technique, or even help them develop a solid grasp of an LLM’s limitations. The purpose of a college research essay is not to teach students how to fact-check and gussy up pre-digested pablum. It is to enable them to develop and substantiate their own robust propositions and truth claims. Public Books  

If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself.  The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.” Chronicle of Higher Ed

We are dangerously close to creating two strata of students: those whom we deem smart and insightful and deeply thoughtful, if sometimes guilty of a typo, and those who seem less engaged with the material, or less able to have serious thoughts about it. Inside Higher Ed

The challenge here is in communicating to students that AI isn’t a replacement for real thinking or critical analysis, and that heavy reliance on such platforms can lead away from genuine learning. Also, because AI platforms like ChatGPT retrieve information from multiple unknown sources, and the accuracy of the information cannot be guaranteed, students need to be wary about using the chatbot’s content. The Straits Times 

It seems futile for faculty members to spend their energies figuring out what a current version can’t do. Chronicle of Higher Ed

It is important to be aware that ChatGPT’s potential sharing of personal information with third parties may raise serious privacy concerns for your students and perhaps in particular for students from marginalized backgrounds. Barnard College

How might chatting with AI systems affect vulnerable students, including those with depression, anxiety, and other mental-health challenges? Chronicle of Higher Ed 

Students need considerable support to make sure ChatGPT promotes learning rather than getting in the way of it. Some students find it harder to move beyond the tool’s output and make it their own. “It needs to be a jumping-off point rather than a crutch.” MIT Tech Review

Also:

21 quotes about cheating with AI & plagiarism detection

13 quotes worth reading about Generative AI policies & bans

20 quotes worth reading about students using AI

27 quotes about AI & writing assignments

27 thoughts on teaching with AI

22 quotes about cheating with AI & plagiarism detection

The Wrong People

Stop spending time with the wrong people.  Life is too short to spend time with people who suck the happiness out of you. If someone wants you in their life, they’ll make room for you. You shouldn’t have to fight for a spot. Never, ever insist yourself to someone who continuously overlooks your worth. And remember, it’s not the people that stand by your side when you’re at your best, but the ones who stand beside you when you’re at your worst that are your true friends.

Marc Chernoff

The Past vs. Possibilities

When I encounter a $2.89 cup of coffee, it’s all too easy for me to recall what I paid for coffee the day before and not so easy for me to imagine all the other things I might buy with my money.

Because it is so much easier for me to remember the past than to generate new possibilities, I will tend to compare the present with the past even when I ought to be comparing it with the possible.

And that is indeed what I ought to be doing because it really doesn’t matter what coffee cost the day before, the week before, or at any time during the Hoover administration. Right now I have absolute dollars to spend and the only question I need to answer is how to spend them in order to maximize my satisfaction. If an international bean embargo suddenly caused the price of coffee to skyrocket to $10,000 per cup, then the only question I would need to ask myself is:

“What else can I do with ten thousand dollars, and will it bring me more or less satisfaction than a cup of coffee?”

Daniel Gilbert, Stumbling on Happiness

20 quotes worth reading about students using AI

For students who do not self-identify as writers, for those who struggle with writer’s block or for underrepresented students seeking to find their voices, it can provide a meaningful assist during initial stages of the writing process. Inside Higher Ed

Let’s be honest. Ideas are more important than how they are written. So, I use ChatGPT to help me organize my ideas better and make them sound more professional. The Tech Insider

Students could (use AI to) look for where the writing took a predictable turn or identify places where the prose is inconsistent. Students could then work to make the prose more intellectually stimulating for humans. Inside Higher Ed

If you’re a college student preparing for life in an A.I. world, you need to ask yourself: Which classes will give me the skills that machines will not replicate, making me more distinctly human? A.I. often churns out the kind of impersonal bureaucratic prose that is found in corporate communications or academic journals. You’ll want to develop a voice as distinct as those of George Orwell, Joan Didion, Tom Wolfe and James Baldwin, so take classes in which you are reading distinctive and flamboyant voices so you can craft your own. New York Times

Imagine if the platform extracted campus-specific information about gen ed and major requirements. It could then provide quality academic advice to students that current chat bots can’t. Inside Higher Ed

ChatGPT may be able to help with more basic functions, such as assisting with writing in English for those who do not speak it natively. Tech Radar

What if the platform had access to real-time local or regional job market data and trends and data about the efficacy of various skills certificates? It could then serve as initial-tier career counseling. Inside Higher Ed

On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems. New York Times

The student who is using it because they lack the expertise is exactly the student who is not ready to assess what it’s doing critically. Some argue that it’s not worth the time spent ferreting out a few cheaters and would rather focus their energy on students who are there to learn. Others say they can’t afford to look the other way. Chronicle of Higher Ed

It used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content. Inside Higher Ed

Don’t fixate on how much evidence you have but on how much evidence will persuade your intended audience. ChatGPT distills everything on the internet through its filter and dumps it on the reader; your flawed and beautiful mind, by contrast, makes its mark on your subject by choosing the right evidence, not all the evidence. Find the six feet that your reader needs, and put the rest of your estate up for auction. Chronicle of Higher Ed

A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. New York Times 

We surpass the AI by standing on its shoulders. Boris Steipe, associate professor of molecular genetics at the University of Toronto, for example, encourages students to engage in a Socratic debate with ChatGPT as a way of thinking through a question and articulating an argument. “You will get the plain vanilla answer—what everybody thinks—from ChatGPT,” Steipe said. “That’s where you need to start to think. That’s where you need to ask, ‘How is it possibly incomplete?’” Inside Higher Ed

Students can leverage ChatGPT as a tutor or homework supplement, especially if they need to catch up. ChatGPT’s ability to make curated responses is unparalleled, so if a student needs a scientific explanation for a sixth-grade reading level, ChatGPT can adapt. New York Magazine

The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. The more effective, and increasingly popular, strategy is to tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better. Chronicle of Higher Ed

Marc Watkins, lecturer in composition and rhetoric at the University of Mississippi: “Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology.” Inside Higher Ed

These tools can function like personal assistants: Ask ChatGPT to create a study schedule, simplify a complex idea, or suggest topics for a research paper, and it can do that. That could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts. Chronicle of Higher Ed

Students who lack confidence in their ability to learn might allow the products of these AI tools to replace their own voices or ideas.  Chronicle of Higher Ed

Students describe using OpenAI’s tool as well as others for much more than generating essays. They are asking the bots to create workout plans, give relationship advice, suggest characters for a short story, make a joke and provide recipes for the random things left in their refrigerators. Washington Post

Basak-Odisio will use it only, he said, if he has procrastinated too much and is facing an impossible deadline. “If it is the day or night before, and I want to finish something as quickly as possible — ” he said, trailing off. “But,” he added, “I want to be better than that.” Washington Post

Also:

21 quotes about cheating with AI & plagiarism detection

13 quotes worth reading about Generative AI policies & bans

27 quotes about AI & writing assignments

22 examples of teaching with AI

27 thoughts on teaching with AI

22 quotes about cheating with AI & plagiarism detection

Admitting You are Wrong

Cognitive dissonance is what we feel when the self-concept — I’m smart, I’m kind, I’m convinced this belief is true — is threatened by evidence that we did something that wasn’t smart, that we did something that hurt another person, that the belief isn’t true. To reduce dissonance, we have to modify the self-concept or accept the evidence. Guess which route people prefer?

We cling to old ways of doing things, even when new ways are better and healthier and smarter. We cling to self-defeating beliefs long past their shelf life. And we make our partners, co-workers, parents and kids really, really mad at us.

Carol Tavris, co-author of the book Mistakes Were Made (But Not by Me), quoted in the New York Times

13 quotes worth reading about Generative AI policies & bans

Eaton, the academic-integrity expert, cautions against trying to ban the use of ChatGPT entirely. That, she says, “is not only futile but probably ultimately irresponsible.” Chronicle of Higher Ed

I would compare this to using steroids in baseball. If you don’t ban steroids in baseball, then the reality is every player has to use them. Even worse than that, if you ban them but don’t enforce it, what you actually do is create a situation where you weed out all of the honest players. Chronicle of Higher Ed 

A study surveyed 372 students seeking admission to college for fall 2023 and found that 39% of those students would not consider attending a college that has banned ChatGPT or other AI tools. ZDNET

Several leading academic journals and publishers updated their submission guidelines to explicitly ban researchers from listing ChatGPT as a co-author, or using text copied from a ChatGPT response. Some professors have criticized these bans as shortsightedly resistant to an inevitable technological change. Chronicle of Higher Ed

Blocking access to ChatGPT at school won’t matter, at least for any student with access to a tablet or laptop outside of school. Ed Week

ChatGPT and other AI writers have been banned in educational institutions around the world, from high schools across America and Australia to universities in France and India, with some university professors having caught their students using ChatGPT to write their entire assignments. Tech Radar

A number of universities say they are planning to expel students who are caught using the software. Thomas Lancaster, a computer scientist and expert on contract cheating at Imperial College London, said many universities were “panicking”. The Guardian

Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network. Others soon joined. By January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia. MIT Tech Review

The New York City Department of Education has banned ChatGPT in its schools, as has Sciences Po, the university in Paris, citing concerns it may foster rampant plagiarism and undermine learning. Washington Post

Teachers at Oceana High School in Pacifica, California have sent out messages to students warning against using AI-writing software for assignments. Mashable

Washington University in St. Louis and University of Vermont in Burlington are among the institutions that have amended their academic integrity policies to include the usage of AI tools like ChatGPT. Stanford Daily 

Also:

21 quotes about cheating with AI & plagiarism detection

13 quotes worth reading about Generative AI policies & bans

20 quotes worth reading about students using AI

27 quotes about AI & writing assignments

22 examples of teaching with AI

27 thoughts on teaching with AI

13 thoughts on the problems of teaching with AI

The paradox of sad music

This is the paradox of sad music: We generally don’t enjoy being sad in real life, but we do enjoy art that makes us feel that way. 

Maybe, because sadness is such an intense emotion, its presence can prompt a positive empathic reaction: Feeling someone’s sadness can move you in some prosocial way.

“You’re feeling just alone, you feel isolated,” Dr. Joshua Knobe (an experimental philosopher and psychologist at Yale University) said. “And then there’s this experience where you listen to some music, or you pick up a book, and you feel like you’re not so alone.”

Read more from Oliver Whang in the New York Times 

New ways of being alive

(At the age of 35) cancer has kicked down the walls of my life. I cannot be certain I will walk my son to his elementary school someday or subject his love interests to cheerful scrutiny. I struggle to buy books for academic projects I fear I can’t finish for a perfect job I may be unable to keep. I have surrendered my favorite manifestoes about having it all, managing work-life balance and maximizing my potential. I cannot help but remind my best friend that if my husband remarries everyone will need to simmer down on talking about how special I was in front of her. (And then I go on and on about how this is an impossible task given my many delightful qualities. Let’s list them. …) Cancer requires that I stumble around in the debris of dreams I thought I was entitled to and plans I didn’t realize I had made.

But cancer has also ushered in new ways of being alive. Even when I am this distant from Canadian family and friends, everything feels as if it is painted in bright colors. In my vulnerability, I am seeing my world without the Instagrammed filter of breezy certainties and perfectible moments. I can’t help noticing the brittleness of the walls that keep most people fed, sheltered and whole. I find myself returning to the same thoughts again and again: Life is so beautiful. Life is so hard. 

Kate Bowler writing in the New York Times

An Apology

A German wholesaler decided to make amends to customers who had posted more than 600 complaints about the firm on eBay; defective products and late deliveries had plagued the firm. Half of the customers were sent a brief apology and half were offered a small cash rebate. The result? Nearly half of those who received the apology removed their poor rating of the company. Only one out of five of the customers given the money did so.

Stephen Goforth