Bumper Sticker Catch Phrases

We need to be careful about staking the important ethical decisions in our lives on bumper sticker catch phrases. The problem is that the ideas expressed in these bite-sized pronouncements have broader implications.

While the ethical aspect that is explicit in the bumper sticker may look good at first glance, other ideas that follow from it may not be so attractive. Most of us have heard or used the cliché “When in Rome, do as the Romans do,” and it can sound like worthwhile advice. But what if the standard practices of the “Romans” stand in direct conflict with your moral or religious convictions? That is why we need to get behind the cliché itself.

Before we commit ourselves to any bumper sticker, we want to make certain that we can accept all that is implied in the slogan.

Steve Wilkens, Beyond Bumper Sticker Ethics

10 Webinars about AI, Media & Journalism

Thu, May 30 - AI and Visual Journalism: Ethics, tech, copyright and more considerations for newsrooms and photojournalists

What: A high-level discussion about what newsroom leaders and visual journalists need to know about AI technologies. This virtual session will cover the considerations you need to take into account and how to talk about AI and visuals in your newsroom.

Who: Tony Elkins, Faculty, Poynter; Alicia Wagner Calzada, Deputy General Counsel, National Press Photographers Association; Sandra M. Stevenson, Deputy Director of Photography, The Washington Post

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Thu, May 30 - Community engagement tools to inform election coverage

What: Research and strategies for engaged election coverage this year, including new findings on how Americans view national and local election news, and simple structures and tools for authentic community engagement that can inform and build trust in your election reporting.

Who: Kevin Loker, Director of Strategic Partnerships and Research, American Press Institute

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsors: American Press Institute, Associated Press, New England Newspaper & Press Association

More Info

 

Thu, May 30 - Community Asset Mapping 101

What: By the end of the session, you will have a foundational starting point for your asset map and clear next steps for completing and refining it. This skill is invaluable for any journalist, regardless of their role in the newsroom.

Who: Letrell Deshan Crittenden, Director of Inclusion and Audience Growth, American Press Institute

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: New England Equity Reporting Fellowship

More Info

 

Tue, June 4 - Introduction to AI for Nonprofits

What: Learn how to enhance your nonprofit’s website and basic marketing strategies using AI.

Who: Tareq Monuar, Web Developer; Jon Hill, Tapp Network

When: 12 noon

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Tue, June 4 - Ask Nikita Roy All your Newsroom AI Questions

What: Drawing from her conversations with over 50 industry leaders on the Newsroom Robots podcast, Nikita is here to help you with your questions on everything from selecting tools to training models to maintaining journalistic integrity. Come with your questions—no matter how big or small—and let’s dive into a lively discussion on making AI work for your newsroom.

Who: Nikita Roy is a data scientist, journalist, and Harvard-recognized AI futurist.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Tue, June 4 - Literary Theory for Robots: How Computers Learned to Write

What: Looking at the shared pasts of literature and computer science, this talk will provide context for recent developments in artificial intelligence. Yi Tenen draws on labor history, technology, and philosophy to examine why he views AI as a reflection of the long-standing cooperation between authors and engineers.

Who: Former Microsoft engineer and professor of comparative literature Dennis Yi Tenen

When: 6:45 pm, Eastern

Where: Zoom

Cost: $25 for nonmembers

Sponsor: Smithsonian

More Info

 

Wed, June 5 - How to Use AI Responsibly

What: Hear from government and industry leaders about the attributes associated with responsible AI and how to use it effectively at your organization.

Who: Beth Noveck, Chief AI Strategist, State of New Jersey; David Larrimore, Chief Technology Officer, Office of the Chief Information Officer, Department of Homeland Security

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: GovLoop

More Info

 

Wed, June 5 – Mini Lab: Prompt-Writing

What: Join us for a quick, facilitated session where you’ll get to experiment with various prompts for ChatGPT, Gemini and Claude.ai. Participants will have a chance to show some of their work and discuss ways it can be used.

Who: Mike Reilley, Senior Lecturer, University of Illinois-Chicago

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Wed, June 5 – Executive Directors Chat: Leveraging AI for Diversity, Equity, and Inclusion

What: Learn how to leverage AI to advance your nonprofit’s diversity, equity, and inclusion (DEI) initiatives.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Fri, June 7 - Media & The Law: What journalists need to know about copyright & defamation suits

What: Copyright and libel law essentials for today’s media environment with an opportunity for questions to help journalists and freelancers understand their rights and how to follow the law.

Who: Chad R. Bowman, whose practice focuses on working with new and legacy media organizations such as The Associated Press, The New Yorker, and The Washington Post.

When: 11:30 am, Eastern

Where: Zoom

Cost: Free

Sponsor: The National Press Club’s Journalism Institute

More Info

Get in a creative mood by visiting large spaces

If you’re in a cramped space, say your office is a little cubicle, your visual attention can’t spread out. It’s focused in this narrow space. Just as your visual attention is constricted, your conceptual attention becomes narrow and focused, and your thinking is more likely to be analytical.

But if you’re in a large space – a big office, with high ceilings, or outside — your visual attention expands to fill the space, and your conceptual attention expands.

That’s why a lot of creative figures like to be outdoors, to take long walks in nature, and they get their inspiration from being in the wide, open spaces. If you can see far and wide, then you can think far and wide.

Brigid Schulte writing in the Washington Post

18 articles about AI Fakes

These ISIS news anchors are AI fakes. Their propaganda is real. – Washington Post

Generative AI poses Threat to election security, intelligence agencies warn – CBS News

Bank of Italy warns against AI-powered fake videos – Reuters

Google's AI Watermarks Will Identify Deepfakes – Dark Reading

In novel case, U.S. charges man with making child sex abuse images with AI – Washington Post

Voice-cloning technology bringing a key Supreme Court moment to 'life' – Associated Press

Flood of Fake Science Forces Multiple Journal Closures – Wall Street Journal

New UK law targets “despicable individuals” who create AI sex deepfakes – Ars Technica

She was accused of faking an incriminating video but nothing was fake after all – The Guardian

TikTok’s AI watermarks could help curb deepfakes, but it’s no panacea – Semafor

OpenAI Releases ‘Deepfake’ Detector to Disinformation Researchers – New York Times 

Microsoft and OpenAI launch $2M fund to counter election deepfakes – Tech Crunch  

OpenAI Says It Can Now Detect Images Spawned by Its Software—Most of the Time – Wall Street Journal

How AI-generated disinformation might impact this year’s elections and how journalists should report on it – Reuters Institute  

How Generative AI Is Helping Fact-Checkers Flag Election Disinformation, But Is Less Useful in the Global South – Global Investigative Journalism Network  

In Arizona, election workers trained with deepfakes to prepare for 2024 – Washington Post

Excessive use of words like ‘commendable’ and ‘meticulous’ suggests ChatGPT has been used in thousands of scientific studies – EL PAÍS English

Fooled by AI? These firms sell deepfake detection – Washington Post

Tech created a global village — and puts us at each other’s throats

As we get additional information about others, we place greater stress on the ways those people differ from us than on the ways they resemble us, and this inclination to emphasize dissimilarities over similarities strengthens as the amount of information accumulates. On average, we like strangers best when we know the least about them.

The effect intensifies in the virtual world, where everyone is in everyone else’s business. Social networks like Facebook and messaging apps like Snapchat encourage constant self-disclosure. Because status is measured quantitatively online, in numbers of followers, friends, and likes, people are rewarded for broadcasting endless details about their lives and thoughts through messages and photographs. To shut up, even briefly, is to disappear. One study found that people share four times as much information about themselves when they converse through computers as when they talk in person.

Progress toward a more amicable world will require not technological magic but concrete, painstaking, and altogether human measures: negotiation and compromise, a renewed emphasis on civics and reasoned debate, a citizenry able to appreciate contrary perspectives. At a personal level, we may need less self-expression and more self-examination.

Technology is an amplifier. It magnifies our best traits, and it magnifies our worst.

Nicholas Carr writing in the Boston Globe

Technology that makes us less human

Like an episode out of Black Mirror, the machines have arrived to teach us how to be human even as they strip us of our humanity. Artificial intelligence could significantly diminish humanity, even if machines never ascend to superintelligence, by sapping the ability of human beings to do human things. “We’re seeing a general trend of selling AI as ‘empowering,’ a way to extend your ability to do something, whether that’s writing, making investments, or dating,” AI expert Leif Weatherby explained. “But what really happens is that we become so reliant on algorithmic decisions that we lose oversight over our own thought processes and even social relationships.” What makes many applications of artificial intelligence so disturbing is that they don’t expand our mind’s capacity to think, but outsource it. - Tyler Austin Harper writing in The Atlantic

Performance Ratings Don’t Tell Us What You Think They Do

A significant body of research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance. The effect that ruins our ability to rate others has a name: the Idiosyncratic Rater Effect, which tells us that my rating of you on a quality such as “potential” is driven not by who you are, but instead by my own idiosyncrasies—how I define “potential,” how much of it I think I have, how tough a rater I usually am. This effect is resilient — no amount of training seems able to lessen it. And it is large — on average, 61% of my rating of you is a reflection of me. In other words, when I rate you, on anything, my rating reveals to the world far more about me than it does about you.