Face to Face
People will do anything, no matter how absurd, in order to avoid facing their own soul. -Carl Jung
What: Want to learn everything you need to know about service journalism? This workshop is about finding, reporting, writing, and pitching better service stories.
Who: Tim Herrera, former editor of the NYT's service desk Smarter Living, leading a wide-ranging discussion on all things service journalism.
When: 2 pm, Pacific
Where: Zoom
Cost: Free
Sponsor: Freelancing With Tim
What: The discussion will look at how news organizations with journalists from many different countries share and enforce basic rules of journalism ethics that we can all agree to.
Who: Fred Brown, chair of the SPJ Professional Standards and Ethics Committee and former Denver Post editor and columnist; Kathy English, former Toronto Star public editor/ombudsman and current chair of the Canadian Journalism Foundation; Steven Springer, former editor for standards and best practices at Voice of America; and Eric Wishart, Standards and Ethics Editor for AFP.
When: 7pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Society of Professional Journalists, Washington, DC chapter
What: A discussion of Adobe’s suite of creative software incorporating AI, the company’s work to tackle misinformation, and the balance between innovation and risk with the advent of new technologies.
Who: Shantanu Narayen, chair and CEO of Adobe
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Washington Post
Who: Tapp Network’s Jon Hill and web developer Tareq Monaur.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: TechSoup
What: We'll cover creating a presentation from scratch or from a document; editing a presentation; getting help from Copilot, including how to do a task in PowerPoint, summarizing a presentation, and finding content; best practices for getting the best results, including prompt engineering; and what Copilot can't do (at least not yet) and how to give Microsoft feedback.
Who: Ellen Finkelstein, President, Ellen Finkelstein Inc.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network, Presentation Guild
What: The nuances of color grading, demonstrating how this essential process can transform the visual appeal and emotional impact of your digital content.
Who: Gabriela Fialova, Coordinator of Digital Media; David Ziegler, Media and Post-Production Specialist.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center Kutztown University of Pennsylvania
What: An immersive session exploring how Artificial Intelligence is revolutionizing the landscape of content creation. Discover cutting-edge tools and techniques that leverage AI to streamline video and audio production processes. We will guide you through hands-on demonstrations, showcasing the seamless integration of AI in video and audio projects. Learn how AI can enhance creativity, automate repetitive tasks, and open new frontiers in storytelling. Whether you’re a content creator, a podcaster, or an entrepreneur looking for a creative boost, this webinar is tailored to help you harness the power of AI for captivating and innovative projects.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center Kutztown University of Pennsylvania
What: An off-the-record virtual panel discussion explaining the PRESS Act, and why reporters and editorial boards should cover it. The panel will be followed by an on-the-record Q&A session with audience members.
Who: Alex Bertschi Wrigley, legislative assistant for Sen. Ron Wyden; Fred Brown, chair of the ethics committee of the Society of Professional Journalists and former journalist for The Denver Post; Caitlin Vogus, deputy director of advocacy at Freedom of the Press Foundation; Larry Wilson, member of the Southern California News Group editorial board.
When: 12 noon, Eastern
Where: Zoom
Cost: Free
Sponsor: Freedom of the Press Foundation and the Society of Professional Journalists
What: From doxing to hacking, journalists around the world are subject to online harassment and abuse every day. While some of these attacks have been aimed at political journalists and those working to hold power to account, all reporters need to be on the alert for potential online harassment and attacks. This webinar will offer instruction on how to minimize your personal risk of online harassment and protect your private information.
Who: David Huerta, senior digital security instructor for the Freedom of the Press Foundation.
When: 11:30 am
Where: Zoom
Cost: Free
Sponsor: National Press Club Journalism Institute
Think left and think right and think low and think high. Oh, the thinks you can think up if only you try! -Dr. Seuss (born: March 2, 1904)
SEC Investigating Whether OpenAI Investors Were Misled – Wall Street Journal
Google pauses Gemini’s ability to generate AI images of people after diversity errors – The Verge
Tech companies go dark about AI advances. That’s a problem for innovation. - Semafor
OpenAI Develops Web Search Product in Challenge to Google – The Information
Google’s AI now goes by a new name: Gemini - The Verge
OpenAI is set to hit $2 billion in revenue — and fast - Quartz
AI companies agree to limit election ‘deepfakes’ but fall short of ban – Washington Post
The AI Industry Is Stuck on One Very Specific Way to Use a Chatbot: Travel Plans – The Atlantic
Amazon AGI team say their AI is showing "emergent abilities" – Futurism
Nvidia Declares AI a ‘Whole New Industry’—and Investors Agree – Wall Street Journal
Google Is Giving Away Some of the A.I. That Powers Chatbots Like Meta – New York Times
Expecting the best means that you put your whole heart (i.e., the central essence of your personality) into what you want to accomplish. People are defeated in life not because of lack of ability, but for lack of wholeheartedness. They do not wholeheartedly expect to succeed. Their heart isn’t in it, which is to say they themselves are not fully given. Results do not yield themselves to the person who refuses to give himself to the desired results.
A major key to success in this life, to attaining that which you deeply desire, is to be completely released and throw all there is of yourself into your job or any project in which you are engaged. In other words, whatever you are doing, give it all you’ve got.
A famous Canadian athletic coach, Ace Percival, says that most people, athletes as well as non-athletes, are “holdouts,” that is to say, they are always keeping something in reserve. They do not invest themselves 100 percent in competition. Because of that fact, they never achieve the highest of which they are capable.
Norman Vincent Peale, The Power of Positive Thinking
Generative AI can improve -- not replace -- predictive analytics
China is building its own Starlink—even as questions surround Musk's constellation
Understanding transformers, how they've advanced LLMs—and what may replace them
Scale AI to set the Pentagon’s path for testing and evaluating large language models
How to solve binary classification problems using Bayesian methods in Python
A quick rundown of the impact AI will have on data roles across the organization
Some of the top R packages every data scientist should be familiar with
Python Libraries for Geospatial Data Visualization: Transform Your Maps into Stories
A list of premier YouTube channels exploring large language models
Python code commenting as a data scientist
New intelligence related to Russia’s attempts to develop a space-based antisatellite nuclear weapon
10 Prominent Data Science Predictions 2024
Tech Companies turned Ukraine into an AI War Lab
The pace of innovation in the space sector is picking up thanks in part to AI and machine learning
What a data scientist looks like in 2032 is likely to be starkly different than today
What an AI-powered future of data science looks like
Sony AI’s tech predictions for the year ahead
10 emerging data science trends
An empirical analysis of whether ML models make more mistakes when making predictions on outliers
Research reveals that the more people think they know about a topic in general, the more likely they are to allege knowledge of completely made-up information and false facts, a phenomenon known as "overclaiming." The findings are published in Psychological Science, a journal of the Association for Psychological Science.
In one set of experiments, the researchers tested whether individuals who perceived themselves to be experts in personal finance would be more likely to claim knowledge of fake financial terms.
As expected, people who saw themselves as financial wizards were most likely to claim expertise of the bogus finance terms.
"The more people believed they knew about finances in general, the more likely they were to overclaim knowledge of the fictitious financial terms," psychological scientist Stav Atir of Cornell University, first author on the study, says. "The same pattern emerged for other domains, including biology, literature, philosophy, and geography."
"For instance," Atir explains, "people's assessment of how much they know about a particular biological term will depend in part on how much they think they know about biology in general."
In another experiment, the researchers warned one set of 49 participants that some of the terms in a list would be made up. Even after receiving the warning, the self-proclaimed experts were more likely to confidently claim familiarity with fake terms.
from Science Daily
Judge Blasts Law Firm for using ChatGPT to Estimate Legal Costs – Futurism
AI Use in Law Practice Needs Common Sense, Not More Court Rules – Bloomberg
How Generative AI's Growing Memory Affects Lawyers – Law 360
China court says AI broke copyright law in apparent world first – Semafor
Generative AI in the legal industry: The 3 waves set to change how the business works – Reuters
Harvard Law Expert Explains How AI May Transform the Legal Profession in 2024 – Harvard Law School
How Artificial Intelligence is making its way into the legal system – The Marshall Project
AI Will Soon Streamline Litigation Practice for Patent Attorneys – Bloomberg
Chief Justice Roberts casts a wary eye on artificial intelligence in the courts - NPR
AI’s Billion-Dollar Copyright Battle Starts With a Font Designer – Bloomberg
Boom in A.I. Prompts a Test of Copyright Law – New York Times
The New York Times’s OpenAI lawsuit could put a damper on AI’s 2024 ambitions – Fast Company
OpenAI Pleads That It Can’t Make Money Without Using Copyrighted Materials for Free – Futurism
What If We Held ChatGPT to the Same Standard as Claudine Gay? The problem with generative AI is plagiarism, not copyright – The Atlantic
The New York Times’ Copyright Lawsuit Against OpenAI Threatens the Future of AI and Fair Use – Data Innovation
We Asked A.I. to Create the Joker. It Generated a Copyrighted Image. – New York Times
Expect the best, plan for the worst, and prepare to be surprised. -Denis Waitley
As we mark the 20th anniversary of Facebook, we’re finding that social media usage is changing in a fundamental way. The platforms are evolving:
from places to display personal information publicly (“Here’s where I went on vacation”; “This is the food I ate at a fancy restaurant.”)
to places to watch and listen to curated content (often resembling TV and streaming in short form)
Curated & Closed
Instead of status updates, there are algorithmically curated videos. Many of the users who were creating and posting are now just consuming—at least, in the public sphere. This is particularly pronounced among first-gen social media users, that is, millennials between the ages of 27 and 42. This is why Instagram has seen the most growth in the last five years in DMs and Stories limited to friends. The type of content they used to share in public posts is moving into private messaging and closed groups.
The advantages of closed groups include:
Greater privacy
Less sensationalism
Improved mental health of users
The downsides of closed groups include:
The lack of moderation
The spread of misinformation
Slower spread of new ideas
Weaker support for news outlets
Social media is becoming less social. There is less emphasis on connections and a greater focus on individual consumption of media produced by content creators. This emphasis on engagement amplifies extreme content, which (among other things) hinders the sharing of actual news content and accurate information.
Read more:
The end of the social network – The Economist
People are posting a lot less on public social media – Fortune
First-Gen Social Media Users Have Nowhere to Go – Wired
Why the Internet isn’t Fun Anymore – The New Yorker
Chuck Close said, “Inspiration is for amateurs. Us professionals, we just go to work in the morning.” One thing I really love about that quote is that it relieves you of a lot of pressure. It’s not about waiting for hours for that moment when inspiration strikes. It’s just about showing up and getting started. All that matters is that you enable the chance for something amazing to happen.
Christoph Niemann
Could AI Disrupt Peer Review? Publishers’ policies lag technological advances - Spectrum
The Use of Artificial Intelligence in Writing Scientific Review Articles - Springer
‘Obviously ChatGPT’ — how reviewers accused me of scientific fraud - Nature
AI could accelerate scientific fraud as well as progress - Economist
Researchers plan to release guidelines for the use of AI in publishing - Chemical & Engineering News
ChatGPT use shows that the grant-application system is broken - Nature
Detecting fraud in scientific publications: the perils and promise of AI - Science Pod
The Science family of journals is adopting the use of Proofig, an artificial intelligence (AI)–powered image-analysis tool - Science Magazine
Can ChatGPT and Other AI Bots Serve as Peer Reviewers? - ACS Publishing
AI Use in Manuscript Preparation for Academic Journals - Cornell University
As scientists face a flood of papers, AI developers aim to help. New tools show promise, but technical and legal barriers may hinder widespread use - Science Magazine
Is AI leading to a reproducibility crisis in science? – Nature
Affiliation Bias in Peer Review of Abstracts by a Large Language Model - JAMA
AI copilots and robo-labs turbocharge research - Axios
Editing companies are stealing unpublished research to train their AI - Times Higher Ed
How journals are fighting back against a wave of questionable images - Nature
Can ChatGPT evaluate research quality? - Cornell University
The JSTOR Daily Sleuth - JSTOR
The greatest happiness of life is the conviction that we are loved -loved for ourselves, or rather, loved in spite of ourselves. -Victor Hugo (born Feb. 26, 1802)
Two journalists talk to the bots — who talk back — about the pros and pitfalls of AI - Nieman Labs
What will be the impact of generative AI on journalism? – Reuters
TikTok dominates media outlets as news source for Gen Z - Axios
Vice Media to Stop Publishing on Vice.com, Plans to Cut Hundreds of Jobs – Wall Street Journal
How OpenAI’s new text-to-video tool, Sora, could harm journalism and society - Poynter
Semafor reporters are going to curate the news with AI – The Verge
AI and Journalism Need Each Other – WSJ
How less, not more, data could help journalism – Semafor
News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare - WSJ
AI may be news reporting’s future. So far, it’s been an embarrassment. - Washington Post
Can news outlets build a ‘trustworthy’ AI chatbot? - The Verge
How to report on AI in elections - International Journalists' Network - International Center For Journalists
How Reuters, Newsquest and BBC experiment with generative AI – Journalism.co
Google News Is Boosting Garbage AI-Generated Articles – 404 Media
Experts Warn Congress of Dangers AI Poses to Journalism - TIME
The New York Times is building a team to explore AI in the newsroom - The Verge
New York Times Sues Microsoft and OpenAI, Alleging Copyright Infringement – WSJ
I created an AI tool to help investigative journalists find stories in audit reports - Reuters
The AI Revolution in Journalism: A New Era of Enhanced Reporting - Hackernoon
How The Generative AI Boom Proves We Need Journalism - AdExchanger
AI is a big opportunity for the news media. Let’s not blow it. - Columbia Journalism Review
Little Lies
Small, self-serving lies are likely to progress to bigger falsehoods, and over time, the brain appears to adapt to the dishonesty, according to a new study.
The finding, the researchers said, provides evidence for the “slippery slope” sometimes described by wayward politicians, corrupt financiers, unfaithful spouses and others in explaining their misconduct.
“They usually tell a story where they started small and got larger and larger, and then they suddenly found themselves committing quite severe acts,” said Tali Sharot, an associate professor of cognitive neuroscience at University College London. She was a senior author of the study.
Erica Goode writing in the New York Times
Happiness is a perfume you cannot pour on others without getting a few drops on yourself. - Og Mandino
Top AI Companies Join Government Effort to Set Safety Standards - TIME
Medical AI Tools Can Make Dangerous Mistakes. Can the Government Help Prevent Them? – Wall Street Journal
Regulate AI? Here’s What That Might Mean in the US – Washington Post
How AI is quietly changing everyday life, and what Washington is doing about it behind the scenes - Politico
As gen AI advances, regulators—and risk functions—rush to keep pace – McKinsey
AI lobbying spikes 185% as calls for regulation surge – CNBC
It’s Time for the Government to Regulate AI. Here’s How. - Politico
White House vies for global leadership on AI governance - Washington Post
AI Voice Robocalls Banned by Federal Communications Enforcer – Bloomberg
Biden’s Elusive AI Whisperer Finally Goes On the Record. Here’s His Warning. - Politico
Biden signs AI executive order, the most expansive regulatory attempt yet – Washington Post
AI can stop government from growing, and that’s a good thing – The Hill
U.S. Government Uses for Artificial Intelligence – Investopedia
New Laws to Regulate AI Would Be Premature - Washington Post
In 1974, Elizabeth Loftus at the University of Washington conducted a study in which people watched films of car crashes. She then asked the participants to estimate how fast the cars were going, but she divided the people into groups and asked the question differently for each.
The word changes included: smashed, collided, bumped, hit, and contacted.
Just by changing the wording, Loftus altered the subjects’ memories. She then raised the ante by asking the same people if they remembered the broken glass in the film. There was no broken glass, but, sure enough, the people who were given the word “smashed” in their question were twice as likely to remember seeing it.
Since then, hundreds of experiments into the misinformation effect have been conducted, and people have been convinced of all sorts of things. Screwdrivers become wrenches, white men become black men and experiences involving other people get traded back and forth.
Memory is imperfect, but also constantly changing. Not only do you filter your past through your present, but your memory is easily infected by social contagion. You incorporate the memories of others into your own head all the time. Studies suggest your memory is permeable, malleable, and evolving. It isn’t fixed and permanent, but more like a dream that pulls information about what you are thinking about during the day and adds new details to the narrative.
David McRaney, You are Not so Smart
Generation GPT: What Gen Z really thinks about ‘world-changing’ AI – Washington Post
9 AI Tools For College Students That’ll Make Your Life So Much Easier – Her Campus
My 5 favorite AI tools for school: Class is in session, and generative AI can help – ZDnet
Nearly half of college students are using AI tools this fall, but fewer than a quarter of faculty members use them – Inside Higher Ed
Artificial Intelligence: A Graduate-Student User’s Guide – Chronicle of Higher Ed
Applying to College? Here’s How A.I. Tools Might Hurt, or Help. – New York Times
Turns out that students, not teachers, are the bigger skeptics when it comes to using ChatGPT - Ed Week
Students can quote ChatGPT in essays as long as they do not pass the work off as their own, international qualification body says – Business Insider
AI bots can seem sentient. Students need guardrails - Inside Higher Ed
Cheating Fears Over Chatbots Were Overblown, New Research Suggests - New York Times
Can ChatGPT get into Harvard? We tested its admissions essay - Washington Post
Your classmate could be an AI student at this Michigan university – Futurism
Surprise! AI chatbots don't increase student cheating after all, new research finds - ZDnet
Survey: College students' thoughts on AI and careers – Inside Higher Ed
What Students Are Saying About Learning to Write in the Age of A.I. - New York Times
The little unremembered acts of kindness and love are the best parts of a person's life. - William Wordsworth
Becoming is a service of Goforth Solutions, LLC / Copyright ©2025 All Rights Reserved