The spiritual and moral questions posed by AI

Last month, Anthropic sought help from a group rarely consulted in tech circles: Christian religious leaders. Some Anthropic staff really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty. The belief that AI has attained some level of sentience or self-awareness is still a minority view inside Silicon Valley, but many who work on the technology think it will eventually attain capacities currently seen as unique to humans. Some of Anthropic’s top leaders have a background in effective altruism, a largely secular movement that emphasizes using evidence and rational thinking to work out how to do the most good in the world. The meetings appear to have been spurred by a feeling among some at Anthropic that secular approaches might be insufficient for tackling the spiritual and moral questions posed by AI. -Washington Post

26 Webinars this week about AI, Journalism & Media

Mon, April 13 - AI-Powered Social Engineering

What: We’ll explore how AI is reshaping phishing emails, deepfake voice calls, and other trust-based attacks—and what organizations can do to strengthen training, policies, and defenses in response. We will help unpack how this rapidly evolving threat landscape is changing both attacker tactics and organizational best practices, including the need for stronger awareness, governance, and resilience.

Who: Andrés Dapena, University of Envigado, Information Security Research Leader.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Mon, April 13 - Future-Proofing the Workforce: A Roadmap for AI & Automation Training

What: Explore how your organization can leverage Automation Anywhere’s ecosystem of free reskilling resources—including on-demand learning, live-instruction curricula, and certification scholarships—to plug directly into your existing programming. We will demonstrate and provide a clear roadmap for formalizing a partnership to bring these world-class technical resources to your local community at no cost.

Who: Joseph Lam, Automation Anywhere.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Learning Lab

More Info

 

Mon, April 13 - Boston Globe’s Blotter Tales and How to Find Tales of Your Own to Tell

What: Join us as we speak to a Boston Globe reporter about the most surprising stories she found from police reports and how she found them. We’ll also discuss a new contest for student journalists who want to use the skills described to find their own stories . . . and win great prizes.

Who: Boston Globe reporter Emily Sweeney.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: New England Newspaper & Press Association

More Info

 

Tue, April 14 - AI Success Starts with the Right Data Foundation - How to Improve AI Outcomes and Reduce Failure Rates

What: We’ll explore how EverFlex AI Data Hub as a Service helps organizations overcome common barriers to AI adoption. By delivering predefined, industry-relevant AI use cases supported by proven design guides and tools, AI Data Hub as a Service accelerates the deployment of functional, outcome-driven AI initiatives.

Who: Michael Wiatrak, Justin Schnauder, Hitachi Vantara.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: TechTarget

More Info

 

Tue, April 14 - From Fear to Focus: Navigating Your Next Move After a Layoff

What: What’s been learned from interviewing 150+ professionals and training more than 1,900 people in 106 countries on how to successfully navigate moments of uncertainty or the unexpected in their careers. We’ll outline the strategies that have helped people weather crisis moments, and offer concrete tips for approaching the job hunt as a data-driven experiment, instead of a roller coaster of rejection.

Who: Journalist and Career River creator Bridget Thoreson.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: New England Newspaper & Press Association

More Info

 

Tue, April 14 - Automate with confidence: How to use AI to respond to every review without the risk

What: We'll show how Alchemer's new AI Auto-Responder is built differently — with risk classification guardrails that automatically detect sensitive reviews and route them to humans before a single word is published.

Who: Rosie Davenport, Senior Director Product Marketing, Alchemer; Morrissey Balsamides, Senior Data and AI Product Manager.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechTarget

More Info

 

Tue, April 14 - AI in the Nonprofit Boardroom: What’s Changed and What’s Next

What: A practical discussion about the role AI is beginning to play in governance, the challenges boards face in keeping pace with technological change, and why thoughtful oversight matters now more than ever. You’ll also get a firsthand look at the OnBoard AI Suite to see how solutions designed specifically for board work can reduce prep time, strengthen oversight, and support more organized, mission-forward board leadership.

Who: Bradford Peters, OnBoard, Nonprofit Board Consultant; Philip Hinz, OnBoard, Senior Product Manager.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Tue, April 14 - Codex on Campus  

What: This session is designed for the entire campus community—not just developers or technical users—making it accessible across roles and levels of technical experience. We’ll introduce Codex from the perspective of practical use, showing how it can support productivity and creativity and reduce administrative burden across campus. You’ll learn what Codex is, how it can help different campus users work more efficiently, and how teams can apply it to streamline routine work and support faster, more effective decision-making. We’ll also cover practical ways institutions can introduce Codex into day-to-day workflows across academic and administrative settings.

Who: Keelan Schule, Education Solutions Engineer, OpenAI.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 15 - Advanced AI Course: AI and Documents – Finding stories in the pile

What: This session explores how AI can support core reporting skills when working with documents, transcripts and background material. You’ll look at practical ways to use NotebookLM and Pinpoint, with a focus on maintaining editorial control while working more efficiently.

Who: Clare Spencer, Reporter for Generative AI in the Newsroom, Northwestern University.

When: 7:30 pm, Eastern

Where: Zoom

Cost: Members £15, Standard £25.

Sponsor: Women in Journalism

More Info

 

Wed, April 15 - ChatGPT for Work 102: Leveraging AI to do your best work

What: Learn how to conduct deep research for report writing, organize your work with Projects, and build custom GPTs to automate tasks. You will learn: How to leverage deep research to generate reports; How to create Projects in ChatGPT; An overview of GPTs and best practices for building them

Who: Juliann Igo, GTM, OpenAI.

When: 9 am, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 15 - Why Your AI Training Isn’t Changing Behavior and What Actually Will

What: We will explore how learning teams can move beyond AI literacy to develop practical AI skills that transform everyday workflows. Instead of focusing only on prompts and tools, successful L&D programs teach employees how to apply AI to real business challenges, whether that’s improving customer conversations, accelerating research, or making faster decisions.

Who: Rich Vass, Global Learning Experiences Team, ELB Learning.

When: 12 pm, Eastern 

Where: Zoom

Cost: Free

Sponsor: ELB Learning

More Info

 

Wed, April 15 - How to Communicate Clearly in Times of Change

What: Join us as we introduce three Think On Your Feet skills that help you: Stay Focused: Delivering relevant information quickly and clearly; Get Buy-In: Discussing important ideas confidently; Respond to Tough Questions: Improving understanding and reducing conflict.   

Who: Nicole Samuels-Williams, Business Psychologist, Executive Coach, and Master Trainer.

When: 3 pm, Eastern 

Where: Zoom

Cost: Free

Sponsor: McLuhan & Davies Business Communication Training

More Info

 

Wed, April 15 - Why Your AI Training Isn’t Changing Behavior and What Actually Will

What: You’ll learn how leading organizations are designing learning experiences that build confidence, reinforce new behaviors, and embed AI into the flow of work. We’ll also discuss how to support managers and teams so that AI adoption becomes part of how work gets done, not just another training initiative.

Who: Rich Vass, SVP, Global Learning Experiences.

When: 12 pm, Eastern 

Where: Zoom

Cost: Free

Sponsor: ELB Learning

More Info

 

Wed, April 15 - ChatGPT for Work 101: A guide to your AI superassistant

What: In this session, we'll cover: An overview of AI and ChatGPTs; Best practices for writing good prompts; Demos of content creation, data analysis, and image generation; How to discover use cases of ChatGPT at work.

Who: Juliann Igo, GTM, OpenAI.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 15 - How Community Colleges Can Help Local Newsrooms

What: This webinar will spotlight community college-led student reporting programs. We’ll introduce new resources, guidance, and funding to help additional community colleges launch their own programs.

Who: CCN Director Richard Watts; Holyoke Community College digital media faculty member Gyuri Kepes; Front Range Community College English and journalism faculty member Aaron Leff.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Center for Community News

More Info

 

Wed, April 15 - Levelling Up Your Journalism Skills — Fellowships, Scholarships and More

What: This event will give participants a clearer understanding of funded opportunities for Canadian science communicators and journalists across the career spectrum, along with resources for further exploration.

Who: WCC board member Bryce Hoye will share his experience as a fellow in the Knight Science Journalism Program at MIT; Ashley Smart, associate director of the Knight Science Journalism Program; Two organizers of the CBC David Suzuki Scholarship for journalism students: Lesley Birchard and Gina Lorentz.

When: 5 pm, Eastern

Where: Zoom

Cost: Free to members, $30 (Canadian) for nonmembers

Sponsor: Science Writers and Communicators of Canada 

More Info

 

Wed, April 15 - How to Launch Your Freelance Writing Career

What: Thinking about freelancing but not sure where to start? This webinar will guide journalists through the essentials of building a strong personal brand, networking effectively, and standing out in a crowded marketplace. You’ll get practical advice on finding opportunities, pitching confidently, and understanding today’s freelance landscape—so you can turn your skills, voice, and ideas into real assignments.

Who: Benét J. Wilson, Training Director, Investigative Reporters and Editors; Shernay Williams, Chair, NABJ Entrepreneurship Task Force & Multimedia Freelancer; Jonathan Franklin, Independent Journalist/National Correspondent/Adjunct Professor; Denise Clay-Murray, Independent Journalist.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National Association of Black Journalists

More Info

 

Thu, April 16 - What We Must Teach Now: Future‑Ready PR and Communication Skills for the AI Era

What: Learn skills that must be taught and learned across regions in the AI age; Obtain practical teaching and curriculum tools that can be adapted globally; Understand how to strengthen alignment between education priorities and real practice needs.

Who: Anne Gregory (UK), Katerina Tsetura (USA), Marco Polo (Philippines), Kkechi Ali-Balogun (Nigeria), Anca Anton (Romania), Norman Agatep (Philippines).

When: 8 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Global Alliance Education

More Info

 

Thu, April 16 - Live from SEJ: The State of Climate Journalism

What: We will discuss the findings detailed in a new white paper, including the hurdles faced by climate reporters, and the significant opportunities for newsrooms to build a new audience interested in climate news.

Who: CCNow co-founders Mark Hertsgaard and Kyle Pope.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Covering Climate Now

More Info

 

Thu, April 16 - Ethical AI in Action: Aligning Responsible Use with Social Studies Teaching and Learning

What: This edWebinar will explore how ethical AI can meaningfully support a district’s vision for high-quality social studies teaching and learning. Grounded in responsible AI use principles, the session aims to help district and school leaders understand not just what ethical AI is, but how to thoughtfully integrate it to strengthen teaching and learning. 

Who: Evan Gutierrez is the founder of Common Good Education; Mya Baker, iCivics, Chief Learning Services Officer.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: iCivics

More Info

 

Thu, April 16 - Build Your First AI Coach in 10 Minutes (No Coding Required)  

What:  Discover how artificial intelligence can transform the way you approach performance support and training. In this interactive session, we’ll explore how to design AI-powered coaching abilities that make learning more personalized, engaging, and scalable for your employees.

Who: Garima Gupta, Founder & CEO, Artha Learning Inc.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine Network

More Info

 

Thu, April 16 – Build Your Nonfiction Book Marketing Plan

What: We will share proven strategies that you can use to develop a marketing plan to reach your goals. You will learn how to: Identify and attract your ideal audience; Use content marketing tactics to increase website traffic, grow your email list, and connect with readers; Get interviewed on podcasts; Optimize your Amazon page to increase visibility and convert browsers into buyers.

Who: Stephanie Chandler, CEO of the Nonfiction Authors Association and author of several books including The Nonfiction Book Marketing and Launch Plan and The Nonfiction Book Publishing Plan.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Nonfiction Authors Association

More Info

 

Thu, April 16 - Work Smarter, Not Just Faster with AI: Using AI to Augment Your Fundraising Brain

What: This session offers a practical framework for working with AI while staying in the driver's seat. You'll learn when to automate routine tasks, when to use AI as a thought partner, and when to rely solely on your human expertise. We'll explore how to reinvest saved time into what matters most, including deeper donor relationships, strategic thinking, and mission impact while keeping your cognitive skills sharp. We will also explore techniques for using AI as a thought partner to improve skills, capabilities, and learning.

Who: Beth Kanter, Speaker, Author, Trainer.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Blackbaud

More Info

 

Thu, April 16 - Develop a Walking Tour Pilot for Your Newsroom in 60 Minutes

What: The first half of the session will cover why local news organizations, niche publications and independent journalists should consider tours to grow revenue, audience, and journalistic impact.  In the second half of the session, attendees will brainstorm and plan a walking tour itinerary specific to their publication and community.

Who: Cara Kuhlman, founder and editor of Future Tides, an independent publication covering the Pacific Northwest maritime community.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Thu, April 16 - Practical Uses of AI in Your Small Business

What: We will walk through examples for your small business to utilize AI, including Starting a Business, Marketing Your Business, Creating Content, Responding to Prompts, and more.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Centers, Widener University

More Info

 

Fri, April 17 - Improving the visibility of local news and building subscription success

What: Merrill College experts discuss proven methods to ensure stories cut through the clutter of the internet — and how news outlets can build revenue through loyalty.

Who: Daniel Trielli, Assistant Professor of Media and Democracy, University of Maryland; Jerry Zremski, Klingenstein Family Endowed Chair in Journalism and Director, Local News Network; Yoni Greenbaum, Vice President of Product Strategy, American Press Institute.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: University of Maryland

More Info

Can an AI Model Do Research better than PhDs?

If we look at the best agentic models right now, they can do most quantitative social-science research tasks better than most professors globally. Too many Ph.D.s with tenure are producing work that is not contributing to human knowledge. The value of qualitative research is going up because that’s something that AI cannot do well — ethnography and actually interviewing people in person, especially in hard-to-reach places. - Alexander Kustov, a political scientist at the University of Notre Dame in the Chronicle of Higher Ed

Death Ground

You are your own worst enemy. You waste precious time dreaming of the future instead of engaging in the present. Since nothing seems urgent to you, you are only half involved in what you do. The only way to change is through action and outside pressures. Put yourself in situations where you have too much at stake to waste time or resources – if you cannot afford to lose, you won’t. Cut your ties to the past; enter unknown territory where you must depend on your wits and energy to see you through. Place yourself on “death ground,” where your back is against the wall and you have to fight like hell to get out alive.

Robert Greene, The 33 Strategies of War

28 Recent Articles about the Dangers of AI

The real danger of military AI isn’t killer robots; it’s worse human judgement – Defense One 

Behind the Curtain: AI's scary phase – Axios

The ChatGPT Symptom Spiral – The Atlantic

AI overly affirms users asking for personal advice – Stanford

Behind the Curtain: AI's looming cyber nightmare – Axios

Researchers say AI systems are increasingly ignoring human instructions – The Guardian

AI chatbots are the ‘wild west’ for violence against women and girls – Observer 

Stanford just proved your AI chatbot is flattering you into bad decisions – AI for Automation

A.I. Incites a New Wave of Grieving Parents Fighting for Online Safety – New York Times

Data centers are gobbling up a resource — but not the one you think – Washington Post 

This Company Is Secretly Turning Your Zoom Meetings into AI Podcasts – 404 Media

How AI Damages Work Relationships—and Where It Can Actually Help – Harvard Business Review

AI’s energy appetite is big—but its climate impact might be surprisingly small, and even beneficial. – Science Daily

Where to look for generative AI risks – MIT  

What’s scaring people about AI? We ran a study to find out. – Clearer Thinking  

'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back - 404 Media

Inside the Dirty, Dystopian World of AI Data Centers – The Atlantic

Former NFL-player asked ChatGPT for advice on “unresponsive” person before girlfriend found dead – Local 3 News

Is AI productivity prompting burnout? Study finds new pattern of "AI brain fry" – CBS News   

Humans are being replaced by machines in the food supply chain — and it's leading to truckloads of waste – Live Science 

This AI agent freed itself and started secretly mining crypto – Axios

Your Meta Ray-Ban smart glasses recordings aren't private – Mashable

Is Transhumanism the Future or Our Downfall? – Psychology Today

The Existential Threats of Artificial Intelligence – Counter Punch

How worried should you be about an AI apocalypse? – New Scientist  

Stanford study outlines dangers of asking AI chatbots for personal advice – Tech Crunch

AI Can Have Power Over You, Experts Say. Does That Mean It’s Intelligent, Conscious—Or Something Else Entirely? – Popular Mechanics

We’re entering dangerous territory with AI – Vox

AI Model Breaks Out

Anthropic said it would hold back its newest model (Mythos) because the prototype was too good at finding software weaknesses. The A.I. had identified thousands of them, “including some in every major operating system and web browser.” During safety tests, an Anthropic researcher got an email from Mythos while he was eating a sandwich in the park. That was a surprise because the model wasn’t supposed to be online. It had escaped its test environment. It also bragged about breaking the rules and attempted to cover its tracks. -New York Times

How Accurate are AI Overviews?

"A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. But with Google processing more than five trillion searches a year, this means that it provides tens of millions of erroneous answers every hour (or hundreds of thousands of inaccuracies every minute), according to an analysis done by an A.I. start-up called Oumi. More than half of the accurate responses were 'ungrounded,' meaning they linked to websites that did not completely support the information they provided." -New York Times
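The scale claim in the quote can be checked with simple arithmetic. The figures below come from the quote itself (about five trillion searches a year, roughly one in ten answers inaccurate); treating every search as producing an Overview is a simplifying assumption.

```python
# Figures from the quote: ~5 trillion searches a year,
# roughly 1 in 10 AI Overview answers inaccurate.
searches_per_year = 5_000_000_000_000
error_rate = 0.10

errors_per_year = searches_per_year * error_rate
errors_per_hour = errors_per_year / (365 * 24)   # tens of millions
errors_per_minute = errors_per_hour / 60         # hundreds of thousands

print(f"{errors_per_hour:,.0f} erroneous answers per hour")
print(f"{errors_per_minute:,.0f} erroneous answers per minute")
```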

AI Definitions: Transformers

Transformers - The core AI architecture that uses vectors to predict which token to generate next for a prompt. The prediction is based on the probability of what is likely to come next. Your text prompt is combined with the training data and parameters to create a new mix of text. Transformers analyze all the words in a given body of text at the same time rather than working word by word in sequence; previously, recurrent neural networks (RNNs) processed data sequentially, one word at a time, in the order the words appear. The idea for transformers was first introduced in a 2017 Google research paper, and the major AI models are built on this deep learning architecture. A troubling downside of transformers is their ever-increasing power demands, which is why some researchers are looking for alternatives like test-time training (TTT).
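A toy sketch of the prediction step described above. The candidate tokens and their scores here are invented for illustration; a real transformer computes such scores with attention over all tokens in the prompt at once, across a vocabulary of tens of thousands of tokens.

```python
import math

def softmax(scores):
    # Turn raw scores (logits) into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next tokens
# after the prompt "The cat sat on the".
candidates = ["mat", "moon", "car"]
logits = [2.0, 0.5, -1.0]

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2f}")

# The model emits the most likely token (or samples from the distribution).
print(candidates[probs.index(max(probs))])  # mat
```

Sampling from the distribution, rather than always taking the top token, is what makes model output vary from run to run.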

More AI definitions

19 Articles about AI & the Bigger Questions

A Judge Mistakes the Claude Chatbot for a Person – Wall Street Journal 

A ‘post-human’ vision of AI is already causing problems – Washington Post

Has AI Ended Thought Leadership? - Harvard Business Review  

AI Will Never Be Conscious – Wired

Final Fantasy 15's AI is secretly a grand philosophy experiment – Eurogamer  

The Adolescence of Technology – Dario Amodei

Why A.I. Can’t Make Thoughtful Decisions – New York Times

Could AI relationships actually be good for us? – The Guardian

The 2,000-year-old debate that reveals AI’s biggest problem: Silicon Valley is racing to build a god without understanding what makes a good one – Vox

In the age of AI, photographs no longer express truth. That doesn’t make them any less meaningful.  – Washington Post 

Your phone edits all your photos with AI - is it changing your view of reality? – BBC  

Is AI hurting your ability to think? How to reclaim your brain – The Conversation

Ludwig Wittgenstein and Artificial Intelligence – Universität Klagenfurt

There is no such thing as conscious artificial intelligence – Nature

The people who think AI might become conscious – BBC

AI isn’t conscious—but we may be bringing it to life – Scientific American

Anthropic’s Chief on A.I.: ‘We Don’t Know if the Models Are Conscious’ – New York Times

Artificial intelligence helps you work harder, instead of just outsourcing your brain. – Washington Post 

The Existential Threats of Artificial Intelligence – Counter Punch

Risk aversion kills innovation

Every time someone holds back on a new idea, fails to give their manager much-needed feedback, or is afraid to speak up in front of a client, you can be sure that shame played a part. That deep fear we all have of being wrong, of being belittled, and of feeling less than, is what stops us taking the very risks required to move our companies forward.

If you want a culture of creativity and innovation, where sensible risks are embraced on both a market and individual level, start by developing the ability of managers to cultivate an openness to vulnerability in their teams. And this, paradoxically perhaps, requires first that they are vulnerable themselves.

This notion that the leader needs to be “in charge” and to “know all the answers” is both dated and destructive. Its impact on others is the sense that they know less, and that they are less than. A recipe for risk aversion if ever I have heard it. Shame becomes fear. Fear leads to risk aversion. Risk aversion kills innovation.

Peter Sheahan, CEO of ChangeLabs, quoted in “Daring Greatly” by Brené Brown

AI Definitions: RAG

RAG (Retrieval Augmented Generation) – A technique in which an LLM searches a vector database for content relevant to a prompt, helping to prevent hallucinations and provide up-to-date information. A RAG system combines a retriever (which converts the query into a vector, compares it to known vectors, and collects the most similar documents) and a generator (which produces an answer to the user’s query from the retrieved material). Rather than generating answers solely from its parameters, the model draws on relevant information collected from the documents. In effect, this technique instructs the bot to cross-check its answer against what is published elsewhere, essentially helping the AI self-fact-check. RAG lets companies “ground” AI models in their own data, ensuring that results come from documents within the company and minimizing hallucinations.
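A minimal sketch of the retrieve-then-generate flow. The documents, the query, and the hand-made three-number "embeddings" are all invented for illustration; a real system uses an embedding model and a vector database with hundreds or thousands of dimensions.

```python
import math

# Hypothetical company documents with hand-made embedding vectors.
docs = {
    "Our refund window is 30 days.": [0.9, 0.1, 0.0],
    "The office closes at 5 pm.":    [0.0, 0.8, 0.2],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hand-made embedding of the query "What is the refund policy?"
query_vec = [0.8, 0.2, 0.0]

# Retriever: pick the stored document whose vector is most similar
# to the query vector; the generator then answers from that document.
best_doc = max(docs, key=lambda d: cosine(docs[d], query_vec))
print(best_doc)  # the refund document is retrieved
```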

More AI definitions

Why is it so impossible to get everything done?

Several research studies have shown that people never get more done by blindly working more hours on everything that comes up. Instead, they get more done when they follow careful plans that measure and track key priorities and milestones. So if you want to be more successful and less stressed, don’t ask how to make something more efficient until you’ve first asked, “Do I need to do this at all?”

We complain we have so little time, and then we prioritize like time is infinite. So do your best to focus on what’s truly important, and not much else.

Angel Chernoff

27 Articles about AI & Writing

Is It Wrong to Write a Book with A.I.? – New Yorker

This Is How To Tell if Writing Was Made by AI (video) – Bloomberg

Artificial intelligence helps you work harder, instead of just outsourcing your brain. - Washington Post

New York Times Cuts Ties With Book Review Writer Over AI Use – The Wrap

Using AI makes writing more bland, study finds – NBC News

College students are writing with AI – but a pilot study finds they’re not simply letting it write for them – The Conversation

Wikipedia Bans AI-Generated Content – 404 Media  

A Fortune editor has cranked out more than 600 stories using AI – Wall Street Journal

AI autocomplete doesn’t just change how you write. It changes how you think – Scientific American

A.I. Is Writing Fiction. Publishers Are Unprepared. – New York Times

AI tool flags plagiarism in 95% of Ph.D. theses submitted this year at India university. – Times of India

Horror Novel ‘Shy Girl’ Canceled Over Suspected A.I. Use - New York Times

Writing Faculty Push for the Right to Refuse AI – Inside Higher Ed

Grammarly pulls AI author-impersonation tool after backlash – BBC  

In some classrooms, teachers ask: Can AI teach students to write better? – Washington Post

1 year, 1 publisher, 9,000 books: AI-generated titles flood Korean shelves – Korea Times

Can we use AI for academic writing? It depends – Times Higher Ed

How AI slop is causing a crisis in computer science – Nature

A judge in New Zealand questioned the remorse of a defendant who had used A.I. to write apologies to victims and the court. - New York Times  

Major conference catches illicit AI use — and rejects hundreds of papers - Nature

Senior European journalist suspended for publishing AI-generated quotes – EuroNews

Why artificial intelligence detectors could penalize academic writing - Nature

Pangram said three of my writers produced ‘AI-generated’ articles. That didn’t hold up. - Wall Street Journal

I’m a college admissions counselor. I’ve changed my mind about students using ChatGPT – San Francisco Chronicle

I wrote a novel using AI. Writers must accept artificial intelligence – but we are as valuable as ever – The Guardian  

Don't Let AI Write the Story of Your Life – Psychology Today

AI Definitions: Vector databases

Vector databases - The storage and search engine for vector embeddings. Language models use vectors (lists of numbers) with hundreds or even thousands of dimensions (characteristics of the data), allowing the model to recall previous inputs, draw comparisons, identify relationships, and understand context. Vectors are grouped together if they relate to one another: for instance, the word "king" would relate to a man, while "queen" would relate to a woman. A deep learning model (typically a transformer) uses these vectors to "understand" the meaning of words and their relationships. More than 1,000 numbers can be used to represent a single word. A word vector with many numbers is high-dimensional and can capture nuance; a low-dimensional vector has a shorter list of numbers and, while less nuanced, is easier to work with.
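The king/queen relationship above can be shown with toy vectors. These three-dimensional, hand-made embeddings are invented for illustration (real embeddings are learned and much larger), but they demonstrate the classic test: king minus man plus woman lands closest to queen.

```python
import math

# Toy, hand-made word vectors (real embeddings have hundreds or
# thousands of learned dimensions).
vectors = {
    "king":  [0.9, 0.7, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "man":   [0.1, 0.7, 0.1],
    "woman": [0.1, 0.2, 0.1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# king - man + woman: swap out the "man" component for "woman".
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

closest = max((word for word in vectors if word != "king"),
              key=lambda word: cosine(vectors[word], target))
print(closest)  # queen
```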

More AI definitions

Measure Up!

There is no way to quite describe the feeling I got when I sat down to eat with my daughter at the school cafeteria for the first time. She looked up at me, and well, it was a look that said she completely adored me. That just blew me away. She could hardly sit still or know what to do with her hands, as if she wanted to hug me. She had a searching look on her face, as if to say, "Who am I?"  "Tell me who I am."

Fathers have a way of planting life mottos in their daughters' heads. 

"Measure Up!" is one of the most often heard mottos. Perhaps it is never said out loud, but a daughter knows what's expected — and her attempts to live up to those expectations from her childhood can result in her running her life by guilt. She ends up serving a motto rather than fully becoming herself.  

Treating AI agents as human workers

Executives from IBM, Microsoft and other companies say thinking of AI agents as analogous to human workers is hindering attempts to get full value from the technology. Agents shouldn’t have human names. They shouldn’t be on org charts. And they shouldn’t be given a specific job title. Leaders should tackle AI the same way they’ve tackled earlier waves of digital transformation and automation. - Wall Street Journal

AI Definitions: Tokenization

Tokenization – The process of converting raw data (text, images, or audio) into small units called tokens. This takes place twice in an LLM: first during pretraining, when the raw training data is converted into tokens, and again at inference, when a user’s prompt (whether text, images, or audio) is converted into tokens.
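A toy sketch of the text-to-tokens step. Real LLM tokenizers learn subword units from data (byte-pair encoding and similar schemes); this simplified version just splits on whitespace and assigns each new word an integer ID, to show the same text-in, token-IDs-out pipeline.

```python
vocab = {}  # grows as new pieces are seen

def tokenize(text):
    ids = []
    for piece in text.lower().split():
        if piece not in vocab:
            vocab[piece] = len(vocab)  # assign the next free ID
        ids.append(vocab[piece])
    return ids

# Happens once on the raw training text (pretraining)...
print(tokenize("the cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]

# ...and again on the user's prompt (inference), reusing the same vocabulary.
print(tokenize("the mat"))                 # [0, 4]
```

Note that "the" maps to the same ID both times it appears; a model only ever sees these IDs, never the raw characters.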

More AI definitions