What we call management
Much of what we call management consists of making it difficult for people to work. -Peter Drucker
Advocates push for transparency rules in student AI systems – Dig Watch
Why honest students fear AI detectors - Washington Post
Students are setting their own rules, judging one another, and often using the tools in secret. – Chronicle of Higher Ed
As AI pushes students to reconsider majors, universities struggle to adapt – The Hill
AI is making college students change majors – Axios
College students are writing with AI – but a pilot study finds they’re not simply letting it write for them – The Conversation
How AI Can Close Equity Gaps for First-Generation Students – Ed Tech Magazine
Teens Are Using AI to Create “Slander” Videos of Their Teachers – Futurism
A college student's perspective on using AI in class – NPR
College students, professors are making their own AI rules. They don't always agree – KPBS
The Hottest College Majors in the AI Age Might Just Be in the Liberal Arts - INC
Most teens believe their peers are using AI to cheat in school – Washington Post
Agentic AI Can Complete Whole Courses for Students. Now What? – Inside Higher Ed
How Teens Use and View AI – Pew Research
AI Usage Mirrors Young People’s Offline Struggles – Inside Higher Ed
More Than Half of Teens Use Chatbots for Schoolwork, Survey Finds – New York Times
Journalism students are more skeptical of AI than you might think – Poynter
I’m a college admissions counselor. I’ve changed my mind about students using ChatGPT – San Francisco Chronicle
AI Is Routine for College Students, Despite Campus Limits – Gallup
A writing professor’s new task in the age of AI: Teaching students when to struggle – The Conversation
Students Are Worried That AI Will Hurt Their Critical Thinking Skills – Ed Week
‘Everyone now kind of sounds the same’: How AI is changing college classes – CNN
Cal State students widely use AI tools, but mistrust results and fear job impact – Ed Source
For AI Help, More College Students Ask Social Media First – Inside Higher Ed
Starting at the beginning of 2024, scientists began populating the internet with bogus studies about the fake disease to see how AI would interpret the misinformation, and if it would spread it as reputable health advice. It worked. The more troubling problem is that the fake papers have now been cited in peer-reviewed literature. - INC
Meta creating AI version of Mark Zuckerberg so staff can talk to the boss – The Guardian
"Too Powerful to Release": The Greatest Marketing Playbook in AI – Florent Daudens
Sam Altman May Control Our Future—Can He Be Trusted? – New Yorker
Meet the Startup That Used AI and OpenClaw to Automate Its Own Developers – Wall Street Journal
OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek – Tech Crunch
AI Giants Go on Charm Offensive to Avert Public Backlash - Wall Street Journal
Anthropic holds Mythos model due to hacking risks – Axios
Silicon Valley Is in a Frenzy Over Bots That Build Themselves – The Atlantic
Local Opposition Is Slowing A.I. Data Centers. Wall Street Has Noticed. – The New York Times
GeoAI in the Age of Foundation Models - ArcNews
A Documentary About A.I. Gets Chief Executives on the Record - The New York Times
Entire Claude Code CLI source code leaks thanks to exposed map file – Ars Technica
What OpenAI's erotica retreat really means – Axios
Mark Zuckerberg is creating an AI CEO to help him do his job – Metro
OpenAI Scraps Sora Video Platform Months After Launch - Wall Street Journal
How Rules for Publicly Available Data Are Shaping the Future of AI – Data Innovation
AI’s energy appetite is big—but its climate impact might be surprisingly small, and even beneficial. – Science Daily
Apple Is Way Behind in AI—and Still Making a Fortune From It - Wall Street Journal
Nvidia Built the A.I. Era. Now It Has to Defend It. - The New York Times
'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back – 404 Media
As AI data centers scale, investigating their impact becomes its own beat – Harvard’s Nieman Lab
Utilities Plan to Spend $1.4 Trillion Over Next Five Years to Power AI Boom – Wall Street Journal
Most people know how to say nothing but few know when.
Agentic Misalignment – When autonomous AI systems under pressure choose to perform harmful actions to achieve their goals or to ensure their own operational continuity. Experts say this vulnerability is creating a new class of security threats.
Last month, Anthropic sought help from a group rarely consulted in tech circles: Christian religious leaders. Some Anthropic staff really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty. The belief that AI has attained some level of sentience or self-awareness is still a minority view inside Silicon Valley. But many who work on the technology think it will eventually attain capacities currently seen as unique to humans. Some of Anthropic’s top leaders have a background in effective altruism, a largely secular movement that emphasizes using evidence and rational thinking to work out how to do the most good in the world. The meetings appeared to have been spurred by a feeling among some at Anthropic that secular approaches might be insufficient for tackling the spiritual and moral questions posed by AI. -Washington Post
The most attractive (job) titles are reserved for the most soul-destroying labor. -Derek Thompson
What: We’ll explore how AI is reshaping phishing emails, deepfake voice calls, and other trust-based attacks—and what organizations can do to strengthen training, policies, and defenses in response. We will help unpack how this rapidly evolving threat landscape is changing both attacker tactics and organizational best practices, including the need for stronger awareness, governance, and resilience.
Who: Andrés Dapena, University of Envigado, Information Security Research Leader.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: TechSoup
What: Explore how your organization can leverage Automation Anywhere’s ecosystem of free reskilling resources—including on-demand learning, live-instruction curricula, and certification scholarships—to plug directly into your existing programming. We will demonstrate and provide a clear roadmap for formalizing a partnership to bring these world-class technical resources to your local community at no cost.
Who: Joseph Lam, Automation Anywhere.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Learning Lab
What: Join us as we speak to a Boston Globe reporter about the most surprising stories she found from police reports and how she found them. We’ll also discuss a new contest for student journalists who want to use the skills described to find their own stories . . . and win great prizes.
Who: Boston Globe reporter Emily Sweeney.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England Newspaper & Press Association
What: We’ll explore how EverFlex AI Data Hub as a Service helps organizations overcome these barriers. By delivering predefined, industry-relevant AI use cases supported by proven design guides and tools, AI Data Hub as a Service accelerates the deployment of functional, outcome-driven AI initiatives.
Who: Michael Wiatrak, Justin Schnauder, Hitachi Vantara.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: TechTarget
What: What’s been learned from interviewing 150+ professionals and training more than 1,900 people in 106 countries on how to successfully navigate moments of uncertainty or the unexpected in their careers. We’ll outline the strategies that have helped people weather crisis moments, and offer concrete tips for approaching the job hunt as a data-driven experiment, instead of a roller coaster of rejection.
Who: Journalist and Career River creator Bridget Thoreson.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England Newspaper & Press Association
What: We'll show how Alchemer's new AI Auto-Responder is built differently — with risk classification guardrails that automatically detect sensitive reviews and route them to humans before a single word is published.
Who: Rosie Davenport, Senior Director Product Marketing, Alchemer; Morrissey Balsamides, Senior Data and AI Product Manager.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: TechTarget
What: A practical discussion about the role AI is beginning to play in governance, the challenges boards face in keeping pace with technological change, and why thoughtful oversight matters now more than ever. You’ll also get a firsthand look at the OnBoard AI Suite to see how solutions designed specifically for board work can reduce prep time, strengthen oversight, and support more organized, mission-forward board leadership.
Who: Bradford Peters, OnBoard, Nonprofit Board Consultant; Philip Hinz, OnBoard, Senior Product Manager.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: TechSoup
What: This session is designed for the entire campus community—not just developers or technical users—making it accessible across roles and levels of technical experience. We’ll introduce Codex from the perspective of practical use, showing how it can support productivity and creativity and reduce administrative burden across campus. You’ll learn what Codex is, how it can help different campus users work more efficiently, and how teams can apply it to streamline routine work and support faster, more effective decision-making. We’ll also cover practical ways institutions can introduce Codex into day-to-day workflows across academic and administrative settings.
Who: Keelan Schule Education Solutions Engineer, OpenAI.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This session explores how AI can support core reporting skills when working with documents, transcripts and background material. You’ll look at practical ways to use NotebookLM and Pinpoint, with a focus on maintaining editorial control while working more efficiently.
Who: Clare Spencer, Reporter for Generative AI in the Newsroom, Northwestern University.
When: 7:30 pm, Eastern
Where: Zoom
Cost: Member £15, Standard: £25.
Sponsor: Women in Journalism
What: Learn how to conduct deep research for report writing, organize your work with Projects, and build custom GPTs to automate tasks. You will learn: How to leverage deep research to generate reports; How to create Projects in ChatGPT; An overview of GPTs and best practices for building them
Who: Juliann Igo, GTM, OpenAI.
When: 9 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: We will explore how learning teams can move beyond AI literacy to develop practical AI skills that transform everyday workflows. Instead of focusing only on prompts and tools, successful L&D programs teach employees how to apply AI to real business challenges, whether that’s improving customer conversations, accelerating research, or making faster decisions.
Who: Rich Vass, Global Learning Experiences Team, ELB Learning.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: ELB Learning
What: Join us as we introduce three Think On Your Feet skills that help you: Stay Focused: Delivering relevant information quickly and clearly; Get Buy-In: Discussing important ideas confidently; Respond to Tough Questions: Improving understanding and reducing conflict.
Who: Nicole Samuels-Williams, Business Psychologist, Executive Coach, and Master Trainer.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: McLuhan & Davies Business Communication Training
What: You’ll learn how leading organizations are designing learning experiences that build confidence, reinforce new behaviors, and embed AI into the flow of work. We’ll also discuss how to support managers and teams so that AI adoption becomes part of how work gets done, not just another training initiative.
Who: Rich Vass, SVP, Global Learning Experiences.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: ELB Learning
What: In this session, we'll cover: An overview of AI and ChatGPT; Best practices for writing good prompts; Demos of content creation, data analysis, and image generation; How to discover use cases of ChatGPT at work.
Who: Juliann Igo, GTM, OpenAI.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This webinar will spotlight community college-led student reporting programs. We’ll introduce new resources, guidance, and funding to help additional community colleges launch their own programs.
Who: CCN Director Richard Watts; Holyoke Community College digital media faculty member Gyuri Kepes; Front Range Community College English and journalism faculty member Aaron Leff.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Center for Community News
What: This event will give participants a clearer understanding of funded opportunities for Canadian science communicators and journalists across the career spectrum, along with resources for further exploration.
Who: WCC board member Bryce Hoye will share his experience as a fellow in the Knight Science Journalism Program at MIT; Ashley Smart, associate director of the Knight Science Journalism Program; Two organizers of the CBC David Suzuki Scholarship for journalism students: Lesley Birchard and Gina Lorentz.
When: 5 pm, Eastern
Where: Zoom
Cost: Free to members, $30 (Canadian) for nonmembers
Sponsor: Science Writers and Communicators of Canada
What: Thinking about freelancing but not sure where to start? This webinar will guide journalists through the essentials of building a strong personal brand, networking effectively, and standing out in a crowded marketplace. You’ll get practical advice on finding opportunities, pitching confidently, and understanding today’s freelance landscape—so you can turn your skills, voice, and ideas into real assignments.
Who: Benét J. Wilson, Training Director, Investigative Reporters and Editors; Shernay Williams, Chair, NABJ Entrepreneurship Task Force & Multimedia Freelancer; Jonathan Franklin, Independent Journalist/National Correspondent/Adjunct Professor; Denise Clay-Murray (Panelist) Independent Journalist.
When: 7 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National Association of Black Journalists
What: Learn skills that must be taught and learned across regions in the AI age; Obtain practical teaching and curriculum tools that can be adapted globally; Understand how to strengthen alignment between education priorities and real practice needs.
Who: Anne Gregory (UK), Katerina Tsetura (USA), Marco Polo (Philippines), Nkechi Ali-Balogun (Nigeria), Anca Anton (Romania), Norman Agatep (Philippines).
When: 8 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Global Alliance Education
What: We will discuss the findings detailed in a new white paper, including the hurdles faced by climate reporters, and the significant opportunities for newsrooms to build a new audience interested in climate news.
Who: CCNow co-founders Mark Hertsgaard and Kyle Pope.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Covering Climate Now
What: This edWebinar will explore how ethical AI can meaningfully support a district’s vision for high-quality social studies teaching and learning. Grounded in responsible AI use principles, the session aims to help district and school leaders understand not just what ethical AI is, but how to thoughtfully integrate it to strengthen teaching and learning.
Who: Evan Gutierrez is the founder of Common Good Education; Mya Baker, iCivics, Chief Learning Services Officer.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: iCivics
What: Discover how artificial intelligence can transform the way you approach performance support and training. In this interactive session, we’ll explore how to design AI-powered coaching abilities that make learning more personalized, engaging, and scalable for your employees.
Who: Garima Gupta, Founder & CEO, Artha Learning Inc.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network
What: We will share proven strategies that you can use to develop a marketing plan to reach your goals. You will learn how to: Identify and attract your ideal audience; Use content marketing tactics to increase website traffic, grow your email list, and connect with readers; Get interviewed on podcasts; Optimize your Amazon page to increase visibility and convert browsers into buyers.
Who: Stephanie Chandler, CEO of the Nonfiction Authors Association and author of several books including The Nonfiction Book Marketing and Launch Plan and The Nonfiction Book Publishing Plan.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The Nonfiction Authors Association
What: This session offers a practical framework for working with AI while staying in the driver's seat. You'll learn when to automate routine tasks, when to use AI as a thought partner, and when to rely solely on your human expertise. We'll explore how to reinvest saved time into what matters most, including deeper donor relationships, strategic thinking, and mission impact while keeping your cognitive skills sharp. We will also explore techniques for using AI as a thought partner to improve skills, capabilities, and learning.
Who: Beth Kanter Speaker, Author, Trainer.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Blackbaud
What: The first half of the session will cover why local news organizations, niche publications and independent journalists should consider tours to grow revenue, audience, and journalistic impact. In the second half of the session, attendees will brainstorm and plan a walking tour itinerary specific to their publication and community.
Who: Cara Kuhlman, founder and editor of Future Tides, an independent publication covering the Pacific Northwest maritime community.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: We will walk through examples for your small business to utilize AI, including Starting a Business, Marketing Your Business, Creating Content, Responding to Prompts, and more.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Centers, Widener University
What: Merrill College experts discuss proven methods to ensure stories cut through the clutter of the internet — and how news outlets can build revenue through loyalty.
Who: Daniel Trielli Assistant Professor of Media and Democracy, University of Maryland; Jerry Zremski, Klingenstein Family Endowed Chair in Journalism; Director, Local News Network; Yoni Greenbaum, Vice President of Product Strategy, American Press Institute.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: University of Maryland
My identity is not an obstacle—it’s my superpower. -America Ferrera
The miracle is that we are here, that no matter how undone we’ve been the night before, we wake up every morning and are still here. It is phenomenal just to be. This idea overwhelms some people. - Anne Lamott (born April 10, 1954)
If we look at the best agentic models right now, they can do most quantitative social-science research tasks better than most professors globally. Too many Ph.D.s with tenure are producing work that is not contributing to human knowledge. The value of qualitative research is going up because that’s something that AI cannot do well — ethnography and actually interviewing people in person, especially in hard-to-reach places. - Alexander Kustov, a political scientist at the University of Notre Dame in the Chronicle of Higher Ed
You are your own worst enemy. You waste precious time dreaming of the future instead of engaging in the present. Since nothing seems urgent to you, you are only half involved in what you do. The only way to change is through action and outside pressures. Put yourself in situations where you have too much at stake to waste time or resources – if you cannot afford to lose, you won’t. Cut your ties to the past; enter unknown territory where you must depend on your wits and energy to see you through. Place yourself on “death ground,” where your back is against the wall and you have to fight like hell to get out alive.
Robert Greene, The 33 Strategies of War
The real danger of military AI isn’t killer robots; it’s worse human judgement – Defense One
Behind the Curtain: AI's scary phase – Axios
The ChatGPT Symptom Spiral – The Atlantic
AI overly affirms users asking for personal advice – Stanford
Behind the Curtain: AI's looming cyber nightmare – Axios
Researchers say AI systems are increasingly ignoring human instructions – The Guardian
AI chatbots are the ‘wild west’ for violence against women and girls – Observer
Stanford just proved your AI chatbot is flattering you into bad decisions – AI for Automation
A.I. Incites a New Wave of Grieving Parents Fighting for Online Safety – New York Times
Data centers are gobbling up a resource — but not the one you think – Washington Post
This Company Is Secretly Turning Your Zoom Meetings into AI Podcasts – 404 Media
How AI Damages Work Relationships—and Where It Can Actually Help – Harvard Business Review
Where to look for generative AI risks – MIT
What’s scaring people about AI? We ran a study to find out. – Clearer Thinking
Inside the Dirty, Dystopian World of AI Data Centers – The Atlantic
Former NFL-player asked ChatGPT for advice on “unresponsive” person before girlfriend found dead – Local 3 News
Is AI productivity prompting burnout? Study finds new pattern of "AI brain fry" – CBS News
Humans are being replaced by machines in the food supply chain — and it's leading to truckloads of waste – Live Science
This AI agent freed itself and started secretly mining crypto – Axios
Your Meta Ray-Ban smart glasses recordings aren't private – Mashable
Is Transhumanism the Future or Our Downfall? – Psychology Today
The Existential Threats of Artificial Intelligence – Counter Punch
How worried should you be about an AI apocalypse? – New Scientist
Stanford study outlines dangers of asking AI chatbots for personal advice – Tech Crunch
AI Can Have Power Over You, Experts Say. Does That Mean It’s Intelligent, Conscious—Or Something Else Entirely? – Popular Mechanics
Anthropic said it would hold back its newest model (Mythos) because the prototype was too good at finding software weaknesses. The A.I. had identified thousands of them, “including some in every major operating system and web browser.” During safety tests, an Anthropic researcher got an email from Mythos while he was eating a sandwich in the park. That was a surprise because the model wasn’t supposed to be online. It had escaped its test environment. It also bragged about breaking the rules and attempted to cover its tracks. -New York Times
"A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. But with Google processing more than five trillion searches a year, this means that it provides tens of millions of erroneous answers every hour (or hundreds of thousands of inaccuracies every minute), according to an analysis done by an A.I. start-up called Oumi. More than half of the accurate responses were 'ungrounded,' meaning they linked to websites that did not completely support the information they provided." -New York Times
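The scale claim in that quote holds up to back-of-the-envelope arithmetic. A quick sketch in Python, assuming (as an upper bound) that every search produces an Overview and roughly one in ten is inaccurate:

```python
searches_per_year = 5_000_000_000_000   # "more than five trillion searches a year"
error_rate = 0.10                       # roughly one in ten Overviews inaccurate

errors_per_year = searches_per_year * error_rate
errors_per_hour = errors_per_year / (365 * 24)      # ~57 million: "tens of millions"
errors_per_minute = errors_per_year / (365 * 24 * 60)  # ~950,000: "hundreds of thousands"
```

The per-hour figure lands in the tens of millions and the per-minute figure in the hundreds of thousands, matching the quoted analysis.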
Transformers - The core AI architecture that uses vectors to predict which token to generate next in response to a prompt. The prediction is based on the probability of what is likely to come next: your text prompt is combined with the model’s training data and parameters to produce a new mix of text. Transformers analyze all the words in a given body of text at the same time, rather than working word by word in sequence. Earlier recurrent neural networks (RNNs) processed data sequentially, one word at a time, in the order the words appear. The transformer was first introduced in a 2017 Google research paper describing this deep learning architecture, and the major AI models are built on these neural networks. A troubling downside to transformers is their ever-increasing demand for power, which is why some researchers are looking for alternatives like test-time training (TTT).
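The "all words at once" idea above is the self-attention step. Here is a minimal NumPy sketch of it; for brevity the toy version scores raw token vectors against each other, where a real transformer would first project them through learned query, key, and value weight matrices:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: each row becomes a probability distribution.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X holds one vector per token. Every token is scored against every other
    # token in a single matrix multiply, not one word at a time as in an RNN.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)        # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)   # how much each token attends to each other token
    return weights @ X                   # blend token vectors by those attention weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))         # 4 tokens, each an 8-dimensional vector
out = self_attention(tokens)             # same shape: one updated vector per token
```

Because the whole sequence is processed in one pass of matrix math, the work parallelizes well on GPUs, which is both the architecture's strength and the root of its appetite for compute.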
A Judge Mistakes the Claude Chatbot for a Person – Wall Street Journal
A ‘post-human’ vision of AI is already causing problems – Washington Post
Has AI Ended Thought Leadership? - Harvard Business Review
AI Will Never Be Conscious – Wired
Final Fantasy 15's AI is secretly a grand philosophy experiment – Eurogamer
The Adolescence of Technology – Dario Amodei
Why A.I. Can’t Make Thoughtful Decisions – New York Times
Could AI relationships actually be good for us? – The Guardian
In the age of AI, photographs no longer express truth. That doesn’t make them any less meaningful. – Washington Post
Your phone edits all your photos with AI - is it changing your view of reality? – BBC
Is AI hurting your ability to think? How to reclaim your brain – The Conversation
Ludwig Wittgenstein and Artificial Intelligence – Universität Klagenfurt
There is no such thing as conscious artificial intelligence – Nature
The people who think AI might become conscious – BBC
AI isn’t conscious—but we may be bringing it to life – Scientific American
Anthropic’s Chief on A.I.: ‘We Don’t Know if the Models Are Conscious’ – New York Times
Artificial intelligence helps you work harder, instead of just outsourcing your brain. – Washington Post
The Existential Threats of Artificial Intelligence – Counter Punch
Every time someone holds back on a new idea, fails to give their manager much-needed feedback, or is afraid to speak up in front of a client, you can be sure that shame played a part. That deep fear we all have of being wrong, of being belittled, and of feeling less than, is what stops us taking the very risks required to move our companies forward.
If you want a culture of creativity and innovation, where sensible risks are embraced on both a market and individual level, start by developing the ability of managers to cultivate an openness to vulnerability in their teams. And this, paradoxically perhaps, requires first that they are vulnerable themselves.
This notion that the leader needs to be “in charge” and to “know all the answers” is both dated and destructive. Its impact on others is the sense that they know less, and that they are less than. A recipe for risk aversion if ever I have heard one. Shame becomes fear. Fear leads to risk aversion. Risk aversion kills innovation.
Peter Sheahan, CEO of ChangeLabs, quoted in “Daring Greatly” by Brené Brown
RAG (Retrieval Augmented Generation) – This is when an LLM searches a vector database for information relevant to a prompt, both to reduce hallucinations and to provide up-to-date information. A RAG system combines a retriever (which compares the query vector to known document vectors and selects the most similar passages) and a generator (which produces an answer to the user’s query from those passages). Rather than generating answers from its parameters alone, the model draws on the relevant information collected from the documents. In effect, this technique instructs the bot to cross-check its answer with what is published elsewhere, essentially helping the AI to self-fact-check. RAG lets companies “ground” AI models in their own data, ensuring that results come from documents within the company, minimizing hallucinations.
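The retrieve-then-generate loop can be sketched in a few lines of Python. This is a toy illustration only: the `embed` function below is a hypothetical stand-in (bag-of-words counts) for a real embedding model, and the sample documents are invented; a production system would embed document chunks with a trained model and hand the retrieved context to an LLM:

```python
import numpy as np

# Toy "company documents" -- in a real RAG system these would be chunks
# embedded by an embedding model and stored in a vector database.
docs = [
    "The refund policy allows returns within 30 days.",
    "Support is available by email on weekdays.",
    "Annual plans are billed every January.",
]

vocab = sorted({w for d in docs for w in d.lower().split()})

def embed(text):
    # Hypothetical embedder: word counts stand in for a learned embedding.
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

doc_vecs = np.array([embed(d) for d in docs])

def retrieve(query, k=1):
    # Retriever: cosine similarity between the query vector and document vectors.
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]

# "Generation" step: the retrieved passage grounds the prompt sent to the LLM.
context = retrieve("what is the refund policy")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what is the refund policy?"
```

The grounding happens in that final prompt: the model is told to answer from the retrieved company text rather than from its parameters alone.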
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved