Choosing Evil
No man chooses evil because it is evil; he only mistakes it for happiness, the good he seeks. -Mary Wollstonecraft
We judge ourselves by our intentions and others by their behavior. -Stephen Covey
"Imagine AI as a force multiplier. It will genuinely improve the writing quality and speed of a mediocre student. But that writing will still be below average because the mediocre student cannot recognise the gap between what AI gave them and what excellent output actually looks like. AI amplifies what you bring to it." - Times Higher Ed
1. Learn about generative AI dangers related to biases, privacy concerns, and the impact of AI on vulnerable students. Consider how chatting with AI systems affects vulnerable students, including those with depression, anxiety, and other mental health challenges. (handout: Dangers of AI)
2. Experiment with AI to see if it can enhance your teaching methods and plans. Consider how AI could be ethically used in education and where you draw the line in how you use it to do your work.
3. Talk with students about your expectations regarding the use of generative AI in class. College faculty should include a syllabus statement offering clear guidance on expectations for generative AI use in the classroom and have open, frank discussions with students about those expectations. (handout: AI use cases)
4. Explain to students what counts as AI-enabled plagiarism and when its use is appropriate, especially considering that it is being integrated into many commonly used tools (meaning a blanket ban on its use is nearly impossible). Consider that the answer to this question will change depending on the assignment, the subject, and the learning outcomes.
5. Avoid depending on AI detectors due to their limitations (false positives and legal issues). Rather than focusing on catching cheaters, faculty should focus on developing new pedagogy to address the evolving technology (similar to the rise of the internet).
6. Get students to wrestle with AI along with you.
7. Help students learn to fact check AI-generated writing outputs. They need a healthy skepticism.
8. Talk about AI transparency, providing examples.
9. Develop pedagogical options for controlling the use of AI: Pen & paper, Blue Books, oral exams, in-class presentations, the use of Google Docs or other writing tools that track writing history, personalization, concept-mapping, scaffolding assignments, etc. Decide what are the cognitive tasks that students need to perform without AI assistance.
10. Develop new rubrics and assignment descriptions taking generative AI into account. Some assignments should be AI-free by design. Others should actively engage AI, teaching students to evaluate, direct and improve its outputs.
11. Learn AI and double down on what makes you human. It’s never all one-sided; avoid the extreme positions of all-in or all-out. Go down both roads. Learn how to use AI skeptically, understanding both what it can do and its limits, knowing this is an ongoing chore. At the same time, focus resources on the other side of the equation: helping students set themselves apart, moving beyond simply being good at using AI to developing the skills that will become rare and valuable because of AI limitations (including communication, creativity, and flexibility). Help students develop a healthy and ethical use of generative AI as you do this yourself.
12. Prepare students for their careers. They will enter a world where AI usage is expected. Keep in mind that this expectation is that AI will allow employees to do more work faster.
Never forget that only dead fish swim with the stream -Malcolm Muggeridge
As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda. -Benj Edwards writing in ArsTechnica
Causal AI – The application of causal inference principles to AI to uncover connections between data points. The goal is to find cause-and-effect relationships. Causal AI uses methods like A/B testing to gauge the impact of changes on user behavior by manipulating specific factors. The result is more precise insights for decision-making, especially when real-time forecasting is needed. In contrast, predictive AI is focused on finding patterns, considering, for instance, users' preferences based on past behavior and user characteristics. Predictive AI finds correlations and trends, but it doesn’t get at the “why” of results.
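The A/B-testing idea behind causal AI can be sketched in a few lines of Python. This is a minimal illustration with made-up data, not any real system: because users are randomly assigned to groups, the difference in outcomes estimates the causal effect of the change rather than a mere correlation.

```python
import random

random.seed(42)

def simulate_group(n, conversion_rate):
    """Simulate n randomly assigned users, each converting with the given probability."""
    return [1 if random.random() < conversion_rate else 0 for _ in range(n)]

# Hypothetical experiment: control sees the old page, treatment sees the new one.
control = simulate_group(10_000, 0.10)    # assumed baseline: 10% convert
treatment = simulate_group(10_000, 0.12)  # assumed true effect: 12% convert

# Random assignment is what lets us read this difference causally:
# user traits are balanced across groups, so only the change differs.
lift = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Estimated causal lift: {lift:.3f}")
```

A purely predictive model fed the same logs without randomization could only report that conversion correlates with seeing the new page, not that the page caused it.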
The medical AI revolution requires rethinking health care’s architecture – Stat
Should you really trust health advice from an AI chatbot? – BBC
AI Startup Has Helped Reverse Thousands of Denied Health Insurance Claims - Bloomberg
The Algorithm Will See You Now: Viz.ai saves critical time in stroke care and helps catch other diseases earlier. – Wall Street Journal
Dozens of AI disease-prediction models were trained on dubious data – Nature
An ‘AI doctor’? An experiment in Utah raises urgent questions. – Washington Post
The ChatGPT Symptom Spiral: Be careful asking chatbots about your health. – The Atlantic
Doctors Couldn’t Help Them. They Rolled the Dice With A.I. – New York Times
Why so many Americans are using AI for health guidance – PBS
How to create “humble” AI – MIT
Health AI and the law: Could your chatbot doc testify against you? - Mashable
An Amish Avatar and an A.I. Monk Are Pitching Supplements on Social Media - New York Times
In 5 Doctors Now Use AI In Their Practices, AMA Survey Says – Forbes
Microsoft’s New AI Health Tool Can Read Your Medical Records and Give Advice – Wall Street Journal
Making a 'digital twin' of yourself could revolutionize future surgeries, making medical procedures much more personal – Live Science
I’m a doctor. Here’s what opened my mind about the future of medical care. - Washington Post
AI's big biosecurity blind spot - Axios
How doctors use AI scribes to cut paperwork and focus on patients – Scientific American
Deepfake X-rays are so real even doctors can’t tell the difference – Science Daily
How AI is transforming health care and what it means for the future – CBS News
A.I. Chatbots Want Your Health Records. Tread Carefully. – New York Times
The AI push in health care is deepening medicine’s trust crisis – Stat
AI ethics in Catholic health – Boston College
A “fixed mindset” assumes that our character, intelligence, and creative ability are static givens which we can’t change in any meaningful way, and success is the affirmation of that inherent intelligence, an assessment of how those givens measure up against an equally fixed standard; striving for success and avoiding failure at all costs become a way of maintaining the sense of being smart or skilled.
A “growth mindset,” on the other hand, thrives on challenge and sees failure not as evidence of unintelligence but as a heartening springboard for growth and for stretching our existing abilities. Out of these two mindsets, which we manifest from a very early age, springs a great deal of our behavior, our relationship with success and failure in both professional and personal contexts, and ultimately our capacity for happiness.
The “growth mindset” creates a passion for learning rather than a hunger for approval. Its hallmark is the conviction that human qualities like intelligence and creativity, and even relational capacities like love and friendship, can be cultivated through effort and deliberate practice. Not only are people with this mindset not discouraged by failure, but they don’t actually see themselves as failing in those situations — they see themselves as learning.
Maria Popova writing in BrainPickings
A data scientist at a software company said he and his co-workers used to have to write code for every new feature. Now they just come up with the idea and the A.I. writes the code and runs the analysis. His company’s interview process, which was once dominated by questions about coding and rewarded socially awkward nerds, now focuses on whether job candidates can identify good ideas and seem capable of persuading colleagues to back them, he said. -New York Times
Algorithms - Direct, specific instructions for computers created by a human through coding that tell the computer how to perform a task. Like a cooking recipe, this set of rules has a finite number of steps. More specifically, it is code that follows the algorithmic logic of “if”, “then”, and “else.” An example of an algorithm would be: IF the customer orders size 13 shoes, THEN display the message ‘Sold out, Sasquatch!’; ELSE ask for a color preference.
My life would be complete if, before I die, I…
The overlooked way AI could speed hiring and support workers - Washington Post
How ‘Jagged Intelligence’ Can Reframe the A.I. Debate – New York Times
What "Jagged Intelligence" Could Mean for STEM Careers - Techoly
That Meeting You Hate May Keep A.I. From Stealing Your Job – New York Times
New AI jobs risk paper posits less doom and gloom - Axios
ProPublica journalists walk off the job in first U.S. newsroom strike over AI – Harvard’s Nieman Lab
The Workers Opting to Retire Instead of Taking On AI – Wall Street Journal
MIT study challenges AI job apocalypse narrative – Axios
Take my job, AI! - Jeff Zych
What to do if your employer is requiring you to use AI – Fast Company
Women are getting less recognition than men for using AI - Axios
How AI Damages Work Relationships—and Where It Can Actually Help – Harvard Business Review
Why Gen Z wants more office work - Axios
New AI tool predicts cancer spread with surprising accuracy – Science Daily
Why You Should Stop Worrying About AI Taking Data Science Jobs – Towards Data Science
The AI employment dilemma that impacts every worker – Axios (video)
Imagine Losing Your Job to the Mere Possibility of AI - The Atlantic
Jobs least and most vulnerable to AI – Washington Post
This is the fastest-growing job for young workers, LinkedIn says – CBS News
AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet – 404 Media
Generative AI changes how employees spend their time – MIT
Job Cuts Driven by A.I. Are Rising on Wall Street - New York Times
AI Washing - This refers to a company’s misleading claims about its use of AI. It’s a marketing tactic that exaggerates the amount of AI technology used in its products so they appear more advanced than they actually are. AI washing takes its name from greenwashing, where companies make false or misleading claims about the positive impact they have on the environment. The SEC has leveled fraud charges against companies for misleading investors about their use of AI.
The risk of skills atrophy is very real. People of my generation who had to learn to do things the hard way are benefiting the most from these tools. If you’re a grad student now and you’re trying to decide whether to read your data-methods textbook or just ask ChatGPT to run this regression for you, that’s a very tempting thing. - Alexander Kustov, a political scientist at the University of Notre Dame in the Chronicle of Higher Ed
One of the most memorable scenes in the movie Jerry Maguire climaxes with the main character telling his estranged wife, “You complete me.” Many people understand the line to mean "I'm not a whole person without you." As if a person is like a machine missing a critical part until the "right one" comes along. But you could also hear it as a statement of realization that "I finally see how we fit together." Like pieces of a jigsaw puzzle. Or better yet, like two great works of art. The paintings, sculptures or rugs are beautiful on their own, yet woven together they create a new, compelling and intricate tapestry of vibrant colors.
Stephen Goforth
Where Does Publishing’s A.I. Problem Leave Authors and Readers? – New York Times
Dozens of AI disease-prediction models were trained on dubious data – Nature
Frontiers issues AI guidance spanning full publishing lifecycle – Research Information
Tackle ‘AI slop’ in education research ‘or lose teacher trust’ – Times Higher Ed
Plagiarised research passed automated tests, and I detected it – but only because it copied my work – Conversation
If a Large Language Model can replicate your scientific contribution, the problem is not the LLM – Nature
Bloodhound code sniffs out copied-and-pasted numerical data – Retraction Watch
Scientists Invented a Fake Disease Caused by Blue Light—Now It's in Medical Papers - Inc
AI Is a Better Researcher Than You: That claim got a political scientist denounced. Is it true? – Chronicle of Higher Ed
Cite unseen: when AI hallucinates scientific articles – Science.org
Hallucinated citations are polluting the scientific literature. What can be done? - Nature
Anonymisation in research must be overhauled for AI era – Research Professional News
What is p-hacking, is it bad, and can you get AI to do it for you? – Towards Data Science
Policies Permitting LLM Use for Polishing Peer Reviews Are Currently Not Enforceable – ArXiv
A citation alert led researchers to a network of fake articles. But who is benefiting? – Retraction Watch
More AI will not beat the Red Queen - Wonkhe
STM Plants a Flag About Responsible Use of Research Content in GenAI – Scholarly Kitchen
Prompt injection in manuscripts: exploiting loopholes or crossing ethical lines? – Springer
Seeing Is Believing? Scientific Misconduct and the Detection of Problematic Images – International Anesthesia Research Society
How to build an AI scientist: first peer-reviewed paper spills the secrets - Nature
Major conference catches illicit AI use — and rejects hundreds of papers - Nature
An AI-authored paper just passed peer review. The scientific community isn’t ready – Scientific American
Wikipedia Bans AI-Generated Content – 404 Media
The European Research Council sets out firm line on use of AI in peer review – Research Professional News
AI models fail to accurately pick out which social science studies could be replicated - OSF
Restoring Trust in Science: Storytelling, AI, and Integrity in Scholarly Publishing – ISMPP (webinar recording)
Temperature - A setting within some generative AI models that determines the randomness of the output. Temperature helps balance the model’s outputs between predictability and creativity. The higher the temperature, the more creativity is produced, along with more randomness and hallucinations. The lower the setting, the more predictability—but with less creativity.
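Under the hood, temperature typically works by dividing the model's raw scores (logits) by the temperature before they are turned into probabilities. Here is a minimal sketch of that mechanism with made-up numbers; the scores below are not from any real model.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate words

low = softmax_with_temperature(logits, 0.5)   # low temperature: sharper, more predictable
high = softmax_with_temperature(logits, 2.0)  # high temperature: flatter, more random

print(f"T=0.5 probabilities: {[round(p, 3) for p in low]}")
print(f"T=2.0 probabilities: {[round(p, 3) for p in high]}")
```

At low temperature the top-scoring word dominates (predictable output); at high temperature the probabilities flatten out, so less likely words get sampled more often, which is where the extra creativity and the extra risk of hallucination both come from.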
As A.I. makes the production of knowledge work more and more efficient, the job of presenting, debating, lobbying, arm-twisting, reassuring or just plain selling the work appears to be rising in importance. And the need for those sometimes messy human tasks may limit the number of people A.I. displaces. -New York Times
My life is my message – Gandhi
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved