What if the AI Prompts You?

You can prompt these tools to ask you questions, to get you thinking, to prompt you to start writing. The instinct is to say, 'Oh, this thing just writes for us.' But it can also ask me questions. It can also get me thinking and shape my ideas. What if instead of you being a prompt engineer, you see what it can prompt out of you? The AI can be a nonjudgmental collaborator that helps pull out these great, unique insights from you. -Stew Fortier, founder of Type.ai

AI Definitions: Training Data

Training data – This is the data initially provided to an AI model so it can create a map of relationships, which it then uses to make predictions. Giving the AI a wide range of data means more options and may lead to more creative results. However, it can also make the model more vulnerable to the insertion of poisoned data by hackers and more susceptible to hallucinations. Using more curated, locked-down data sets makes AI models less vulnerable and more predictable, but also less creative.
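
To make the fit-then-predict relationship above concrete, here is a minimal Python sketch using scikit-learn; the tiny labeled dataset is invented purely for illustration, and a real model would be trained on vastly more data.

```python
# Minimal illustration of training data: the model builds a map of
# word-to-label relationships from labeled examples, then uses that map
# to make a prediction on text it has never seen. (Toy, invented data.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "city council approves new budget",
    "quarterback traded to rival team",
    "mayor announces transit plan",
    "home team clinches playoff spot",
]
train_labels = ["politics", "sports", "politics", "sports"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)          # learn from the training data

print(model.predict(["governor signs transit bill"]))  # apply the learned map
```

A wider, messier training set would give the model more to draw on, which is the trade-off the definition describes: more range and creativity, but also more surface area for poisoned or low-quality examples.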

More AI definitions here

Is Artificial General Intelligence Around the Corner?

In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today’s technology were unlikely to lead to A.G.I. Scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of A.G.I.’s imminent arrival are based on statistical extrapolations — and wishful thinking. -New York Times

Tough & Tender

In some parts of American society, it is considered inappropriate for men to express any emotion save one—anger. When a man learns to express other feelings and not be so concerned about whether others think he is strong or “manly,” he takes a major step forward.

Sure, there’s a time and place to "come on strong and take no prisoners." But it's a denial of your humanity to oversimplify, hiding behind a narrow definition of manhood. Men are more complete when they are both tough and tender. Maturity comes with the understanding of which one is appropriate at what time. 

Stephen Goforth

24 Recent Articles about AI & Journalism

Three newsrooms on generating AI summaries for news - Harvard’s Nieman Lab

More than 2 years after ChatGPT, newsrooms still struggle with AI’s shortcomings – CNN

Think AI is bad for journalism? This story might change your mind: Letter from the Editor – Cleveland.com

The New York Times has reached an AI licensing deal with Amazon – New York Times  

How this year’s Pulitzer awardees used AI in their reporting – Harvard’s Nieman Lab 

ChatGPT referral traffic to publishers’ sites has nearly doubled this year – Digiday

Politico’s Newsroom Is Starting a Legal Battle With Management Over AI – Wired  

Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist – 404 Media

A New Report Takes On the Future of News and Search: AI’s impact on platforms and publishers - Columbia Journalism Review   

Gannett Is Using AI to Pump Brainrot Gambling Content Into Newspapers Across the Country – Futurism

Americans largely foresee AI having negative effects on news, journalists – Pew Research Center  

A startup is using AI to summarize local city council meetings – Columbia Journalism Review   

Have journalists skipped the ethics conversation when it comes to using AI? – The Conversation

Tomorrow’s Publisher, a site about the future of news, is “powered by” an AI startup - Harvard’s Nieman Lab  

Why some journalists are embracing AI after all - IBM

Musk's xAI "will pay Telegram $300 million to deploy its Grok chatbot on the messaging app. – Reuters

AI learns how vision and sound are connected, without human intervention – MIT  

Teaching journalism students generative AI: why I switched to an “AI diary” this semester – Online Journalism Blog  

Patch’s big AI newsletter experiment - Harvard’s Nieman Lab 

Study Guide Supremacy: Getting my news from ChatGPT – Columbia Journalism Review

Journalism is facing its crisis moment with AI. It might not be a bad thing. – Poynter

AI-Generated Content in Journalism: The Rise of Automated Reporting - TRENDS Research & Advisory

AI-Generated Fake Book List Seems Funny, but Reflects the Technology’s Danger to Journalism – Pen America

Journalists are using AI. They should be talking to their audience about it. – Poynter

AI Definitions: Abstractive Summarization

Abstractive summarization (ABS) – A natural language processing technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the most important sections and producing a summary made up of sentences taken verbatim from the source. Abstractive summarization is better when the meaning of the text matters more than exactness, while extractive summarization is better when sticking to the original language is critical.
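
As a rough sketch of the difference, the snippet below uses the Hugging Face transformers library (and its publicly available facebook/bart-large-cnn model) for the abstractive half; the extractive half is a deliberately simple frequency-scoring heuristic written for illustration, not a standard method, and the sample passage is invented.

```python
# Abstractive vs. extractive summarization on a short invented passage.
# Requires: pip install transformers torch
from collections import Counter
from transformers import pipeline

text = ("The city council met for six hours on Tuesday. Members debated a new "
        "transit levy and heard public comment. The levy vote was postponed "
        "until next month after staff flagged a budgeting error.")

# Abstractive: the model may generate wording that never appears in the source.
abstractive = pipeline("summarization", model="facebook/bart-large-cnn")
print(abstractive(text, max_length=40, min_length=10)[0]["summary_text"])

# Extractive (toy heuristic): score each original sentence by word frequency
# and keep the highest-scoring sentence verbatim; no new language is generated.
sentences = [s.strip() for s in text.split(".") if s.strip()]
word_counts = Counter(text.lower().split())
best = max(sentences, key=lambda s: sum(word_counts[w] for w in s.lower().split()))
print(best + ".")
```

The abstractive output may paraphrase or compress; the extractive output is always a sentence the author actually wrote, which is why newsrooms that prize exact wording tend to prefer it.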

More AI definitions here

24 Articles about AI & Ethics

A Culture War is Brewing Over Moral Concern for AI – Undark  

And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences – Nature  

In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights – Associated Press

Take Nature’s AI research test: find out how your ethics compare – Nature

‘We can’t tell if we’re being persuaded by a person or a program’ – University of Melbourne

AI poses new moral questions. Pope Leo says the Catholic Church has answers. – Washington Post 

NBC will use Jim Fagan’s AI-generated voice for NBA coverage – The Verge

Why misuse of generative AI is worse than plagiarism – Springer

Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns – New York Times 

Anthropic just analyzed 700,000 Claude conversations — and found its AI has a moral code of its own – Venture Beat

I asked ChatGPT to invent 6 philosophical thought experiments – and now my brain hurts – Tech Radar 

Anthropic study reveals LLM reasoning isn’t always what it seems – TechTalks

As they push ahead with AI, health leaders must set rules on use – American Medical Association

AI: Uses, Ethics and Limitations – KUAF

What Happens When People Don’t Understand How AI Works – The Atlantic

Have journalists skipped the ethics conversation when it comes to using AI? – The Conversation

The moral dimension of AI for work and workers – Brookings

My students think it’s fine to cheat with AI. Maybe they’re onto something. – Vox

AI faces skepticism in end-of-life decisions, with people favoring human judgment – Medical Xpress

AI language model rivals expert ethicist in perceived moral expertise – Nature

Artificial Intelligence in courtrooms raises legal and ethical concerns – Associated Press          

Bridging philosophy and AI to explore computing ethics - MIT

AI is Making Medical Decisions — But For Whom? – Harvard Magazine

The Solution to the AI Alignment Problem Is in the Mirror – Psychology Today

Research: What Happens When Workers Use AI

Our AI research findings carry important implications for the future of work. If employees consistently rely on AI for creative or cognitively challenging tasks, they risk losing the very aspects of work that drive engagement, growth, and satisfaction. Increased boredom, which our research showed following AI use, can also be a warning sign that these negative consequences might be on their way. The solution isn’t to abandon gen AI. Rather, it’s to redesign tasks and workflows to preserve humans’ intrinsic motivation while leveraging AI’s strengths. -Harvard Business Review

AI Definitions: Narrow AI

Narrow AI – This is the use of artificial intelligence for a very specific task or a limited range of tasks. For instance, general AI would mean an algorithm capable of playing all kinds of board games, while narrow AI limits a machine’s capabilities to a specific game such as chess or Scrabble. Google Search, Alexa, and Siri answer questions using narrow AI algorithms. These systems can often outperform humans when confined to known tasks, but they frequently fail when presented with situations outside the problem space they were trained for. In effect, narrow AI can’t transfer knowledge from one field to another. The narrow AI techniques we have today basically fall into two categories: symbolic AI and machine learning.
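
As a loose, self-contained illustration of how narrow this can be (a sketch added here, not drawn from any particular product), below is a brute-force minimax player that makes perfect moves at tic-tac-toe and nothing else; all of its "intelligence" is locked inside that one game.

```python
# Narrow AI in miniature: a brute-force minimax player for tic-tac-toe.
# It plays this one game perfectly but cannot transfer that skill anywhere else.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    """Return (score, move): X tries to maximize the score, O to minimize it."""
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    if " " not in board:
        return 0, None  # draw
    options = []
    for i, cell in enumerate(board):
        if cell == " ":
            child = board[:i] + player + board[i + 1:]
            score, _ = minimax(child, "O" if player == "X" else "X")
            options.append((score, i))
    return (max if player == "X" else min)(options)

board = "X O      "  # nine cells, row-major; X and O have each moved once
score, move = minimax(board, "X")
print("best move for X:", move)
```

Swap in a different game and every line above has to be rewritten; that inability to generalize is what separates narrow AI from the hypothetical general AI mentioned in the definition.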

More AI definitions here.

The irrational ideas that motivate anger

According to Albert Ellis, the most common irrational ideas behind anger are the following:

1. Others must treat me considerately and kindly and in precisely the way I want them to treat me.

2. I must do well and win the approval of others or else I will rate as a rotten person.

3. The world and the people in it must arrange conditions under which I live, so that I get everything I want when I want it.

As their anger slows down, people should challenge irrational thoughts with statements such as:

What evidence exists for this? Why can't I stand this noise or this unfairness?

Gary Collins, Counseling and Anger

"Current AI Detectors are Not Ready"

"A new study of a dozen A.I.-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I.-generated about 6.8 percent of the time, on average.  'At least from our analysis, current detectors are not ready to be used in practice in schools to detect A.I. plagiarism,' said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland."  -New York Times


Academic Leaders Disagree on Students Using AI

“What constitutes legitimate use of AI and what is out of bounds? Academic leaders don’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—the verdict (in a recent survey) was completely split.” -Inside Higher Ed