My identity
My identity is not an obstacle—it’s my superpower -America Ferrera
The miracle is that we are here, that no matter how undone we’ve been the night before, we wake up every morning and are still here. It is phenomenal just to be. This idea overwhelms some people. - Anne Lamott (born April 10, 1954)
If we look at the best agentic models right now, they can do most quantitative social-science research tasks better than most professors globally. Too many Ph.D.s with tenure are producing work that is not contributing to human knowledge. The value of qualitative research is going up because that’s something that AI cannot do well — ethnography and actually interviewing people in person, especially in hard-to-reach places. - Alexander Kustov, a political scientist at the University of Notre Dame in the Chronicle of Higher Ed
You are your own worst enemy. You waste precious time dreaming of the future instead of engaging in the present. Since nothing seems urgent to you, you are only half involved in what you do. The only way to change is through action and outside pressures. Put yourself in situations where you have too much at stake to waste time or resources – if you cannot afford to lose, you won’t. Cut your ties to the past; enter unknown territory where you must depend on your wits and energy to see you through. Place yourself on “death ground,” where your back is against the wall and you have to fight like hell to get out alive.
Robert Greene, The 33 Strategies of War
The real danger of military AI isn’t killer robots; it’s worse human judgement – Defense One
Behind the Curtain: AI's scary phase – Axios
The ChatGPT Symptom Spiral – The Atlantic
AI overly affirms users asking for personal advice – Stanford
Behind the Curtain: AI's looming cyber nightmare – Axios
Researchers say AI systems are increasingly ignoring human instructions – The Guardian
AI chatbots are the ‘wild west’ for violence against women and girls – Observer
Stanford just proved your AI chatbot is flattering you into bad decisions – AI for Automation
A.I. Incites a New Wave of Grieving Parents Fighting for Online Safety – New York Times
Data centers are gobbling up a resource — but not the one you think – Washington Post
This Company Is Secretly Turning Your Zoom Meetings into AI Podcasts – 404 Media
How AI Damages Work Relationships—and Where It Can Actually Help – Harvard Business Review
AI’s energy appetite is big—but its climate impact might be surprisingly small, and even beneficial. – Science Daily
Where to look for generative AI risks – MIT
What’s scaring people about AI? We ran a study to find out. – Clearer Thinking
'AI Is African Intelligence': The Workers Who Train AI Are Fighting Back - 404 Media
Inside the Dirty, Dystopian World of AI Data Centers – The Atlantic
Former NFL-player asked ChatGPT for advice on “unresponsive” person before girlfriend found dead – Local 3 News
Is AI productivity prompting burnout? Study finds new pattern of "AI brain fry" – CBS News
Humans are being replaced by machines in the food supply chain — and it's leading to truckloads of waste – Live Science
This AI agent freed itself and started secretly mining crypto – Axios
Your Meta Ray-Ban smart glasses recordings aren't private – Mashable
Is Transhumanism the Future or Our Downfall? – Psychology Today
The Existential Threats of Artificial Intelligence – Counter Punch
How worried should you be about an AI apocalypse? – New Scientist
Stanford study outlines dangers of asking AI chatbots for personal advice – Tech Crunch
AI Can Have Power Over You, Experts Say. Does That Mean It’s Intelligent, Conscious—Or Something Else Entirely? – Popular Mechanics
Anthropic said it would hold back its newest model (Mythos) because the prototype was too good at finding software weaknesses. The A.I. had identified thousands of them, “including some in every major operating system and web browser.” During safety tests, an Anthropic researcher got an email from Mythos while he was eating a sandwich in the park. That was a surprise because the model wasn’t supposed to be online. It had escaped its test environment. It also bragged about breaking the rules and attempted to cover its tracks. -New York Times
"A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. But with Google processing more than five trillion searches a year, this means that it provides tens of millions of erroneous answers every hour (or hundreds of thousands of inaccuracies every minute), according to an analysis done by an A.I. start-up called Oumi. More than half of the accurate responses were 'ungrounded,' meaning they linked to websites that did not completely support the information they provided." -New York Times
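A quick back-of-the-envelope check of the quoted figures, using the rates stated in the passage (five trillion searches a year, roughly one in ten answers wrong):

```python
# Sanity-checking the quoted NYT figures: 5 trillion searches a year at a
# ~10% error rate does work out to tens of millions of wrong answers an
# hour and hundreds of thousands a minute.
searches_per_year = 5_000_000_000_000
error_rate = 0.10  # "accurate approximately nine out of 10 times"

errors_per_hour = searches_per_year * error_rate / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} per hour")    # tens of millions
print(f"{errors_per_minute:,.0f} per minute")  # hundreds of thousands
```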
Transformers - The core AI architecture that uses vectors to predict which token to generate next in response to a prompt. The prediction is based on the probability of what is likely to come next. Your text prompt is combined with the training data and parameters to create a new mix of text. Transformers analyze all the words in a given body of text at the same time rather than working word by word in sequence. Previously, recurrent neural networks (RNNs) processed data sequentially, one word at a time, in the order in which the words appeared. The idea for transformers was first introduced in a 2017 Google research paper describing this deep learning architecture. The major AI models are built using these neural networks. A troubling downside to transformers is their ever-increasing power demands, which is why some researchers are looking for alternatives such as test-time training (TTT).
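The "all words at once" step can be sketched in a few lines. This is a toy illustration only, with made-up token vectors and no learned weights (real transformers project inputs into separate query, key, and value matrices):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Toy self-attention: every token looks at every other token at the
    same time, rather than processing the sequence one word at a time as
    an RNN would."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)   # similarity of each token to every other
    weights = softmax(scores)       # each row is an attention distribution
    return weights @ X              # blend token vectors by attention weight

# Three toy 4-dimensional token embeddings, processed in parallel
tokens = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0, 0.0]])
out = self_attention(tokens)
print(out.shape)  # (3, 4): one context-aware vector per token
```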
A Judge Mistakes the Claude Chatbot for a Person – Wall Street Journal
A ‘post-human’ vision of AI is already causing problems – Washington Post
Has AI Ended Thought Leadership? - Harvard Business Review
AI Will Never Be Conscious – Wired
Final Fantasy 15's AI is secretly a grand philosophy experiment – Eurogamer
The Adolescence of Technology – Dario Amodei
Why A.I. Can’t Make Thoughtful Decisions – New York Times
Could AI relationships actually be good for us? – The Guardian
In the age of AI, photographs no longer express truth. That doesn’t make them any less meaningful. – Washington Post
Your phone edits all your photos with AI - is it changing your view of reality? – BBC
Is AI hurting your ability to think? How to reclaim your brain – The Conversation
Ludwig Wittgenstein and Artificial Intelligence – Universität Klagenfurt
There is no such thing as conscious artificial intelligence – Nature
The people who think AI might become conscious – BBC
AI isn’t conscious—but we may be bringing it to life – Scientific American
Anthropic’s Chief on A.I.: ‘We Don’t Know if the Models Are Conscious’ – New York Times
Artificial intelligence helps you work harder, instead of just outsourcing your brain. – Washington Post
Every time someone holds back on a new idea, fails to give their manager much-needed feedback, or is afraid to speak up in front of a client, you can be sure that shame played a part. That deep fear we all have of being wrong, of being belittled and of feeling less than, is what stops us taking the very risks required to move our companies forward.
If you want a culture of creativity and innovation, where sensible risks are embraced on both a market and individual level, start by developing the ability of managers to cultivate an openness to vulnerability in their teams. And this, paradoxically perhaps, requires first that they are vulnerable themselves.
This notion that the leader needs to be “in charge” and to “know all the answers” is both dated and destructive. Its impact on others is the sense that they know less, and that they are less than. A recipe for risk aversion if ever I have heard it. Shame becomes fear. Fear leads to risk aversion. Risk aversion kills innovation.
Peter Sheahan, CEO of ChangeLabs, quoted in “Daring Greatly” by Brené Brown
RAG (Retrieval-Augmented Generation) – A technique in which an LLM searches a vector database for content relevant to a prompt, reducing hallucinations and providing updated information. A RAG system combines a retriever (which collects relevant information from a document collection) and a generator (which compares the query vector to other known vectors, selects the most similar ones, and then generates an answer to the user’s query). Rather than generating answers purely from its parameters, the model draws on the relevant information retrieved from the documents. In effect, this technique instructs the bot to cross-check its answer against what is published elsewhere, essentially helping the AI self-fact-check. RAG lets companies “ground” AI models in their own data, ensuring that results come from documents within the company and minimizing hallucinations.
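A minimal sketch of the retriever step described above. Real systems embed text with a learned model and rank by cosine similarity over a vector database; here, plain word overlap stands in for that similarity score, and the documents and question are invented for illustration:

```python
def words(text):
    """Crude stand-in for an embedding: the set of words in the text."""
    return {w.strip("?.,!").lower() for w in text.split()}

def retrieve(query, documents, k=1):
    """Retriever step: rank documents by overlap with the query, return top k."""
    q = words(query)
    ranked = sorted(documents,
                    key=lambda d: len(q & words(d)) / len(q | words(d)),
                    reverse=True)
    return ranked[:k]

docs = ["Our refund policy allows returns within 30 days.",
        "The office closes at 5 pm on Fridays."]
context = retrieve("When does the office close?", docs)[0]

# The generator step would then answer from this retrieved context,
# grounding the model in the documents rather than in its parameters alone.
prompt = (f"Using only this context, answer the question.\n"
          f"Context: {context}\nQuestion: When does the office close?")
print(context)
```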
Several research studies have shown that people never get more done by blindly working more hours on everything that comes up. Instead, they get more done when they follow careful plans that measure and track key priorities and milestones. So if you want to be more successful and less stressed, don’t ask how to make something more efficient until you’ve first asked, “Do I need to do this at all?”
We complain we have so little time, and then we prioritize like time is infinite. So do your best to focus on what’s truly important, and not much else.
Is It Wrong to Write a Book with A.I.? – New Yorker
This Is How To Tell if Writing Was Made by AI (video) – Bloomberg
New York Times Cuts Ties With Book Review Writer Over AI Use – The Wrap
Using AI makes writing more bland, study finds – NBC News
College students are writing with AI – but a pilot study finds they’re not simply letting it write for them – The Conversation
Wikipedia Bans AI-Generated Content – 404 Media
A Fortune editor has cranked out more than 600 stories using AI – Wall Street Journal
AI autocomplete doesn’t just change how you write. It changes how you think – Scientific American
A.I. Is Writing Fiction. Publishers Are Unprepared. – New York Times
AI tool flags plagiarism in 95% of Ph.D. theses submitted this year at India university. – Times of India
Horror Novel ‘Shy Girl’ Canceled Over Suspected A.I. Use - New York Times
Writing Faculty Push for the Right to Refuse AI – Inside Higher Ed
Grammarly pulls AI author-impersonation tool after backlash – BBC
In some classrooms, teachers ask: Can AI teach students to write better? – Washington Post
1 year, 1 publisher, 9,000 books: AI-generated titles flood Korean shelves – Korea Times
Can we use AI for academic writing? It depends – Times Higher Ed
How AI slop is causing a crisis in computer science – Nature
A judge in New Zealand questioned the remorse of a defendant who had used A.I. to write apologies to victims and the court. - New York Times
Major conference catches illicit AI use — and rejects hundreds of papers - Nature
Senior European journalist suspended for publishing AI-generated quotes – EuroNews
Why artificial intelligence detectors could penalize academic writing - Nature
Pangram said three of my writers produced ‘AI-generated’ articles. That didn’t hold up. - Wall Street Journal
I’m a college admissions counselor. I’ve changed my mind about students using ChatGPT – San Francisco Chronicle
I wrote a novel using AI. Writers must accept artificial intelligence – but we are as valuable as ever – The Guardian
Don't Let AI Write the Story of Your Life – Psychology Today
Vector databases - The storage and search engine for vector embeddings. Language models use vectors (lists of numbers) with hundreds or even thousands of dimensions (characteristics of the data), allowing them to remember previous inputs, draw comparisons, identify relationships, and understand context. Vectors are grouped together if they relate to one another. For instance, the word "king" would relate to a man, while "queen" would relate to a woman. A deep learning model (typically a transformer) uses these vectors to "understand" the meaning of words and their relationships. More than 1,000 numbers can be used to represent a single word. If a word vector contains many numbers, it has a high dimension, making it nuanced. A low-dimensional word vector is a shorter list of numbers; while not as nuanced, it is easier to work with.
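The core operation of a vector database is a similarity search: given one vector, find the stored vectors closest to it. A toy sketch, using hypothetical 4-dimensional embeddings (real models use hundreds or thousands of dimensions, and these numbers are illustrative only):

```python
import numpy as np

# Hypothetical embeddings; the values are invented for illustration.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Similarity of two vectors: close to 1.0 means closely related."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(word, k=2):
    """The core vector-database query: the k most similar stored vectors."""
    scored = [(w, cosine(embeddings[word], v))
              for w, v in embeddings.items() if w != word]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

print(nearest("king"))  # "man" and "queen" rank closest; "apple" does not
```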
There is no way to quite describe the feeling I got when I sat down to eat with my daughter at the school cafeteria for the first time. She looked up at me, and well, it was a look that said she completely adored me. That just blew me away. She could hardly sit still or know what to do with her hands, as if she wanted to hug me. She had a searching look on her face, as if to say, "Who am I?" "Tell me who I am."
Fathers have a way of planting life mottos in their daughters' heads.
"Measure Up!" is one of the most often heard mottos. Perhaps it is never said out loud, but a daughter knows what's expected — and her attempts to live up to those expectations from her childhood can result in her running her life by guilt. She ends up serving a motto rather than fully becoming herself.
Executives from IBM, Microsoft and other companies say thinking of AI agents as analogous to human workers is hindering attempts to get full value from the technology. Agents shouldn’t have human names. They shouldn’t be on org charts. And they shouldn’t be given a specific job title. Leaders should tackle AI the same way they’ve tackled earlier waves of digital transformation and automation. - Wall Street Journal
Tokenization – The process of converting raw data (text, images, or audio) into small units called tokens. This takes place twice in an LLM: first during pretraining, when the raw training data is converted into tokens, and again at inference, when a user’s prompt (whether text, images, or audio) is converted into tokens.
May you live all the days of your life. -Jonathan Swift
What: We'll explore the pedagogical thinking behind these platforms, how they approach the challenge of balancing AI automation with human editorial judgment, and what responsible use might look like in 6–12 and higher education settings. Importantly, we'll also discuss the background and history of ITN, and broader questions educators should ask before recommending a platform to students. Come ready to think — not just about news and bias, but about the tools and organizations being built to navigate our news landscape today.
Who: Wesley Fryer, an educational technology “early adopter / innovator.”
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Education Lab
What: Join our panel of journalism and legal experts to discuss the challenges of covering police and U.S. Immigration and Customs Enforcement (ICE) activity in local neighborhoods. Among other topics, we will discuss safety concerns for journalists on the scene; the importance of building trust with affected communities; the First Amendment protections at play; and how to best fulfill the critical need for local reporting.
Who: Erica Moura, Simmons University; Sawyer Loftus, Bangor Daily News; Alexa Millinger, Hinckley Allen; Renee Griffin, Reporters Committee for Freedom of the Press.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England First Amendment Coalition
What: This workshop will help you learn how to prioritize things and give you a clear formula to be successful on social media.
Who: Ray-Sidney Smith, Digital Marketing Strategist, Hootsuite Global Brand Ambassador.
When: 10 am, Eastern
Where: Zoom
Cost: $45
Sponsor: Small Business Development Center, Duquesne University
What: How can student journalists effectively, responsibly and legally pursue those stories? This virtual event is open to any student journalist or educator, whether you’re just getting started on this topic or already deep into your reporting.
Who: Julie K. Brown, whose dogged reporting for the Miami Herald helped bring much of the Jeffrey Epstein story to light.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Student Press Law Center
What: We will unpack our speaker’s latest work at the Reuters Institute at the University of Oxford, where he is exploring how foundation models and publishers are reshaping the information ecosystem in the era of generative AI. This conversation will move beyond headlines and deal announcements to examine power dynamics, long-term incentives, and the structural shifts underway.
Who: Madhav Chinappa, senior executive consultant and researcher at the Reuters Institute.
When: 10 am, Eastern
Where: Zoom
Cost: Free to members
Sponsor: International News Media Association
What: In this session, we'll cover: an overview of AI and ChatGPT; best practices for writing good prompts; demos of content creation, data analysis, and image generation; and how to discover use cases of ChatGPT at work.
Who: Juliann Igo, GTM at OpenAI.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: In this session, we’ll move beyond the basics and explore more advanced ways to apply ChatGPT in your day-to-day work. You’ll see practical examples for improving productivity, supporting student engagement, and building efficient workflows using ChatGPT. We’ll also demonstrate additional features and real-world use cases that help teachers, staff, and administrators get more value from the platform in both classroom and operational settings.
Who: Kirk Gulezian, Education & Government, OpenAI.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This session will combine reflections from Lina’s career with practical insights, encouraging participants to discover the strength of their own voices and use storytelling as a powerful tool for expression and change.
Who: Lina Rozbih, Senior Editor and Anchor at Voice of America and an award-winning Afghan journalist.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Nobel Navigators
What: Join us to get a firsthand look at how Adobe Learning Manager brings AI to every stage of the learning journey - including personalized recommendations, deep semantic search, conversational AI Assistants, and AI‑driven coaching for role‑based practice.
Who: Justin Seeley, Learning Evangelist, Adobe.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Adobe Learning Manager
What: Good stories intrinsically have a structure our brains are looking for. With these 5 key questions, you can make sure you hit those key points on an idea you have, your work in progress, or a book you've already written.
Who: Jennifer Crosswhite, owner and CEO of Tandem Services.
When: 1:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Author Learning Center
What: Learn how to conduct deep research for report writing, organize your work with Projects, and build custom GPTs to automate tasks.
Who: Juliann Igo, GTM, OpenAI.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: A practical discussion on how agentic AI can strengthen your digital experience. We’ll break down what this emerging capability really means for government, how it can empower your teams, and how to introduce it responsibly and transparently.
Who: Kimberly Brandt, Deputy Administrator & Chief Operating Officer, Centers for Medicare & Medicaid Services; Kris Saling, Chief Technology Advisor, Manpower & Reserve Affairs, U.S. Army; Kelvin Brewer, Director, Public Sector Sales Engineering, Ping Identity; Andy MacIsaac, Senior Strategic Solutions Manager, Government & Education, Laserfiche; Luke Norris, Vice President, Platform Strategy & Digital Transformation, Granicus; Bryan Rosensteel, Head of Public Sector Product Marketing, Wiz.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: GovLoop
What: Participants will learn how search engines work, the varieties of search engines, and how to craft advanced search queries to find exactly what they are looking for on the internet, discover news sources of information, and uncover information hiding in plain sight.
When: 6 pm, Eastern
Where: Zoom
Cost: Free to members
Sponsor: National Association of Hispanic Journalists
What: This webinar will focus on building AI literacy in academia and exploring how AI can be responsibly integrated into research, teaching, and institutional practices.
Who: Anjali Sam, Lead Product Manager, Cactus Communications; Vasundara BN, Project Manager, Cactus Communications.
When: 6:30 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Cactus Communications
What: In this session you will learn about the state of AI adoption around the world and the concerns it brings, as those are reflected across the industry. You will also learn about several threats, some of which were discovered and published only recently. We will review the solutions an organization can and should put in place to utilize the full power of agentic AI while still protecting its data and business.
Who: Dror Zelber, VP Product Marketing, Radware.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Solutions Review
What: We delve into the core principles of accessibility, exploring real-world examples of disabilities and situational challenges users face. From understanding WCAG standards to addressing specific populations, we’ll equip you with actionable insights to create truly accessible websites.
Who: Jennie Martin, Front-End Development Manager, CPACC, DHS 508 Trusted Tester.
When: 1:00 pm
Where: Zoom
Cost: Free
Sponsor: Firespring
What: Join us for a beginner-friendly introduction to Codex: the AI system that powers code generation. We’ll walk through what Codex is, how people are using it in real workflows, and how it can help you move faster across everyday tasks.
Who: Ankur Kumar, Codex Deployment Engineer, OpenAI
When: 1:00 pm
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: Topics will include: The opportunities and savings it offers; Ethical as well as practical concerns; Tips for safe and helpful usage; Red flags every author must be aware of.
Who: Book marketing advisor Beth Kallman Werner of Author Connections.
When: 1:30 pm
Where: Zoom
Cost: Free
Sponsor: Author Learning Center
What: Learn: How AI agents differ from other AI tools, and how they automate administrative tasks; What safeguards to put in place to protect institutional data and maintain trust; Which strategies allow you to integrate agents into existing workflows; How to support staff members who may be concerned about AI’s impact on their roles.
Who: Ian Wilhelm, Deputy Managing Editor, The Chronicle of Higher Education; Phil Ventimiglia, Chief Innovation Officer, Georgia State University; David Weil, Senior Vice President for Strategic Services and Initiatives, Ithaca College.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Chronicle of Higher Ed
What: We’ll begin with a briefing on emerging state-level policy initiatives, including small business advertising tax credits, government advertising set-asides, journalism fellowship programs and employment incentives, and highlight where momentum is building across the country. The session will also cover effective ways to engage policymakers.
Who: Matt Pearce, Director of Policy for Rebuild Local News; Susan Patterson Plank, Director of Government Affairs and Partnerships for Rebuild Local News.
When: 3:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: By attending this class, you will learn: The breadth of consumer reporting; How to identify and evaluate potential stories; The process of verifying claims to build strong, accurate reports.
Who: Sarah Guernelli, WPRI 12.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England First Amendment Coalition
What: Join a panel of experts to examine the possibilities that enable remote access, discuss the distinctions between identifiable audiences and the public, and the potential of virtual screening/access/reading rooms.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Open Copyright Education Advisory Network (OCEAN)
"This is love: Not that we loved God. It is that he loved us and sent his Son to give his life to pay for our sins." 1 John 4:10
“In this is love…” — or, in another translation, “In this way is seen the true love.”
God didn’t look down and say, “Boy, I see you love me. I think I’ll love you.” Or “You’re a nice guy, I really like that.”
Instead:
You were rebellious, arrogant, self-centered. God said, “I love you.”
You ignored him, fought him, were bored with him. God said, “I love you.”
You spit in his face, yelled at him, shook your fist. God said, “I love you.”
That’s what John means here.
We see what real love is by looking at what God did. He loved us with a desire to restore us, to make us whole.
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved