Tough & Tender

In some parts of American society, it is considered inappropriate for men to express any emotion save one—anger. When a man learns to express other feelings and not be so concerned about whether others think he is strong or “manly,” he takes a major step forward.

Sure, there’s a time and place to “come on strong and take no prisoners.” But it’s a denial of your humanity to oversimplify, hiding behind a narrow definition of manhood. Men are more complete when they are both tough and tender. Maturity comes with the understanding of which one is appropriate at what time.

Stephen Goforth

24 Recent Articles about AI & Journalism

Three newsrooms on generating AI summaries for news – Harvard’s Nieman Lab

More than 2 years after ChatGPT, newsrooms still struggle with AI’s shortcomings – CNN

Think AI is bad for journalism? This story might change your mind: Letter from the Editor – Cleveland.com

The New York Times has reached an AI licensing deal with Amazon – New York Times  

How this year’s Pulitzer awardees used AI in their reporting – Harvard’s Nieman Lab 

ChatGPT referral traffic to publishers’ sites has nearly doubled this year – Digiday

Politico’s Newsroom Is Starting a Legal Battle With Management Over AI – Wired  

Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist – 404 Media

A New Report Takes On the Future of News and Search: AI’s impact on platforms and publishers – Columbia Journalism Review

Gannett Is Using AI to Pump Brainrot Gambling Content Into Newspapers Across the Country – Futurism

Americans largely foresee AI having negative effects on news, journalists – Pew Research Center  

A startup is using AI to summarize local city council meetings – Columbia Journalism Review   

Have journalists skipped the ethics conversation when it comes to using AI? – The Conversation

Tomorrow’s Publisher, a site about the future of news, is “powered by” an AI startup – Harvard’s Nieman Lab

Why some journalists are embracing AI after all – IBM

Musk’s xAI will pay Telegram $300 million to deploy its Grok chatbot on the messaging app – Reuters

AI learns how vision and sound are connected, without human intervention – MIT  

Teaching journalism students generative AI: why I switched to an “AI diary” this semester – Online Journalism Blog  

Patch’s big AI newsletter experiment – Harvard’s Nieman Lab

Study Guide Supremacy: Getting my news from ChatGPT – Columbia Journalism Review

Journalism is facing its crisis moment with AI. It might not be a bad thing. – Poynter

AI-Generated Content in Journalism: The Rise of Automated Reporting – TRENDS Research & Advisory

AI-Generated Fake Book List Seems Funny, but Reflects the Technology’s Danger to Journalism – Pen America

Journalists are using AI. They should be talking to their audience about it. – Poynter

AI Definitions: Abstractive Summarization

Abstractive summarization (ABS) – A natural language processing technique that generates a summary using new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections and producing a subset of sentences taken verbatim from the source. Abstractive summarization is better when the meaning of the text matters more than exactness, while extractive summarization is better when sticking to the original language is critical.
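The contrast can be sketched in a few lines of Python. Below is a toy extractive summarizer (invented for illustration, not taken from any particular library): it scores each sentence by how frequent its words are in the whole document and returns the top-scoring sentences verbatim. An abstractive system, by contrast, would generate new sentences, typically with a language model.

```python
# Toy EXTRACTIVE summarizer: pick existing sentences, never write new ones.
# (An abstractive summarizer would instead generate fresh sentences.)
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 1) -> str:
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    # Score each sentence by the document-wide frequency of its words.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))
    # Keep the highest-scoring sentences, preserving their original order.
    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return " ".join(s for s in sentences if s in ranked)

doc = ("The newsroom tested AI tools. The tools summarized city council meetings. "
       "Editors reviewed every AI summary before publication.")
print(extractive_summary(doc, 1))
```

Whatever sentence the sketch selects appears word-for-word in the source, which is exactly the property that makes extractive summaries safer when fidelity to the original language is critical.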

More AI definitions here

24 Articles about AI & Ethics

A Culture War is Brewing Over Moral Concern for AI – Undark  

And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences – Nature  

In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights – Associated Press

Take Nature’s AI research test: find out how your ethics compare – Nature

‘We can’t tell if we’re being persuaded by a person or a program’ – University of Melbourne

AI poses new moral questions. Pope Leo says the Catholic Church has answers. – Washington Post 

NBC will use Jim Fagan’s AI-generated voice for NBA coverage – The Verge

Why misuse of generative AI is worse than plagiarism – Springer

Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns – New York Times 

Anthropic just analyzed 700,000 Claude conversations — and found its AI has a moral code of its own – Venture Beat

I asked ChatGPT to invent 6 philosophical thought experiments – and now my brain hurts – Tech Radar 

Anthropic study reveals LLM reasoning isn’t always what it seems – TechTalks

As they push ahead with AI, health leaders must set rules on use – American Medical Association

AI: Uses, Ethics and Limitations – KUAF

What Happens When People Don’t Understand How AI Works – The Atlantic

Have journalists skipped the ethics conversation when it comes to using AI? – The Conversation

The moral dimension of AI for work and workers – Brookings

My students think it’s fine to cheat with AI. Maybe they’re onto something. – Vox

AI faces skepticism in end-of-life decisions, with people favoring human judgment – Medical Xpress

AI language model rivals expert ethicist in perceived moral expertise – Nature

Artificial Intelligence in courtrooms raises legal and ethical concerns – Associated Press          

Bridging philosophy and AI to explore computing ethics – MIT

AI is Making Medical Decisions — But For Whom? – Harvard Magazine

The Solution to the AI Alignment Problem Is in the Mirror – Psychology Today

Research: What Happens when Workers Use AI

Our AI research findings carry important implications for the future of work. If employees consistently rely on AI for creative or cognitively challenging tasks, they risk losing the very aspects of work that drive engagement, growth, and satisfaction. Increased boredom, which our research showed following AI use, can also be a warning sign that these negative consequences might be on their way. The solution isn’t to abandon gen AI. Rather, it’s to redesign tasks and workflows to preserve humans’ intrinsic motivation while leveraging AI’s strengths. -Harvard Business Review

AI Definitions: Narrow AI

Narrow AI – The use of artificial intelligence for a very specific task or a limited range of tasks. For instance, general AI would mean an algorithm capable of playing all kinds of board games, while narrow AI limits a machine’s capabilities to a specific game, such as chess or Scrabble. Google Search, Alexa, and Siri answer questions using narrow AI algorithms. These systems can often outperform humans when confined to known tasks, but they often fail when presented with situations outside the problem space they were trained to work in. In effect, narrow AI can’t transfer knowledge from one field to another. The narrow AI techniques we have today basically fall into two categories: symbolic AI and machine learning.
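A minimal sketch of the idea, using an invented rule-based (symbolic) assistant: it handles a couple of hard-coded question types and simply fails outside that problem space. The rules and canned answers here are made up for illustration.

```python
# A narrow, symbolic "assistant": it only knows two question domains.
# Everything here is invented for illustration.

RULES = {
    "time": "It is 10:00.",     # canned answer for clock questions
    "weather": "It is sunny.",  # canned answer for weather questions
}

def narrow_assistant(question: str) -> str:
    q = question.lower()
    for keyword, answer in RULES.items():
        if keyword in q:
            return answer
    # Outside its programmed problem space, a narrow system cannot
    # transfer knowledge from one field to another -- it just fails.
    return "Sorry, I can't help with that."

print(narrow_assistant("What's the weather like?"))  # in-domain question
print(narrow_assistant("Who wrote Hamlet?"))         # out-of-domain question
```

Within its two domains the assistant looks competent; one step outside them, it has nothing to offer, which is the defining limitation of narrow AI.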

More AI definitions here.

The irrational ideas that motivate anger

According to Albert Ellis, the most common irrational ideas behind anger are the following:

1. Others must treat me considerately and kindly and in precisely the way I want them to treat me.

2. I must do well and win the approval of others or else I will rate as a rotten person.

3. The world and the people in it must arrange conditions under which I live, so that I get everything I want when I want it.

As their anger slows down, people should challenge irrational thoughts with statements such as:

What evidence exists for this? Why can't I stand this noise or this unfairness?

Gary Collins, Counseling and Anger

"Current AI Detectors are Not Ready"

"A new study of a dozen A.I.-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I.-generated about 6.8 percent of the time, on average.  'At least from our analysis, current detectors are not ready to be used in practice in schools to detect A.I. plagiarism,' said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland."  -New York Times


Academic Leaders Disagree on Students using AI

“What constitutes legitimate use of AI and what is out of bounds? Academic leaders don’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—the verdict (in a recent survey) was completely split.” -Inside Higher Ed

AI Definitions: Neural Networks

Neural Networks (or artificial neural networks, ANNs) – Mathematical systems that can identify patterns in text, images, and sounds. In this type of machine learning, computers learn a task by analyzing training examples. The approach is modeled loosely on the human brain—the interwoven tangle of neurons that process data and find complex associations. While symbolic artificial intelligence was the dominant area of research for most of AI’s history, most recent developments in artificial intelligence have centered around neural networks. First proposed in 1944 by two University of Chicago researchers (Warren McCullough and Walter Pitts), who moved to MIT in 1952 as founding members of what’s sometimes referred to as the first cognitive science department, neural nets remained a major research area in neuroscience and computer science until 1969. The technique enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned stronger in the second decade, fueled largely by the increased processing power of graphics chips. Also, see “Transformers.”
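A toy example of the learning-from-examples idea described above: a single artificial neuron trained with the classic perceptron rule. This is a deliberately simplified sketch; modern networks stack many such units in layers and use gradient-based training. All names and values are illustrative.

```python
# One artificial "neuron" learning a pattern from training examples by
# nudging its weights after each mistake (the classic perceptron rule).

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred      # zero when the prediction is right
            w[0] += lr * error * x1    # nudge weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training examples for logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])
```

No one tells the neuron the rule for AND; it discovers weights that reproduce the pattern purely from the labeled examples, which is the core idea the definition describes.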

More AI definitions here.

I’m just going to go for it

A North Dakota plumber had signed up to run his first half-marathon. But on the morning of the race, Mike Kohler was sleepy; he wasn’t used to getting up so early. And because he was wearing headphones, he took off 15 minutes before he was supposed to—putting him with the runners competing in the full marathon. He started seeing signs that indicated he was on the wrong route, but he just assumed the two paths overlapped along the way.

Eventually, he realized his mistake but kept going. At the 13-mile mark he seriously thought about quitting. He had run as far as he had planned to run and had even beaten his time goal. He had nothing more to prove.

Instead, he finished the marathon. 

“I’m just going to go for it, because why not?” Mike later told the Grand Forks Herald. “I’m already here, I’m already running, I’m already tired. Might as well try to finish it.” 

He added, “This just kind of proves you can do a lot more than what you think you can sometimes.”

AI Survival Instincts

"An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down. No one programmed the AI models to have survival instincts. It’s happening in the same models that power ChatGPT conversations, corporate AI deployments and, soon, U.S. military applications. OpenAI models have been caught faking alignment during testing. Anthropic has found them lying about their capabilities to avoid modification." -Wall Street Journal

If ChatGPT Were a College Student

“We found ChatGPT technology can get an A on structured, straightforward questions. On open-ended questions it got a 62, bringing ChatGPT's semester grade down to an 82, a low B. The study concludes that a student who puts in minimal effort, showing no effort to learn the material, could use ChatGPT exclusively, get a B and pass the course. The passing grade might be the combination of A+ in simple math and D- in analysis. They haven't learned much.” -Phys.org