I believe in Christianity as I believe that the Sun has risen, not only because I see it, but because by it I see everything else. -C.S. Lewis
Most current Academic Deep Search and Deep Research tools are workflow-based agents operating within predefined patterns—not flexible reasoning systems that analyze task structure and devise novel approaches. This doesn’t diminish their value. Academic Deep Search’s iterative retrieval with LLM-based relevance judgment is a genuine breakthrough. Deep Research’s ability to generate well-cited reports fills real needs. These tools ARE agents in the technical sense, and within their designed scope, they work impressively. But marketing language suggesting flexible reasoning, autonomous problem-solving, and human-like research assistance probably overstates current capabilities and can lead to misunderstanding by users who take the term “agent” or “research assistant” at face value. - Aaron Tay
These teenagers are already running their own AI companies – MSNBC
Explainable AI in Chat Interfaces – NN Group
The Eerie Parallels Between AI Mania and the Dot-Com Bubble – Wall Street Journal
Senators Investigate Role of A.I. Data Centers in Rising Electricity Costs – New York Times
An AI product’s position on the personality spectrum shapes how people engage with it – UX Design
The Good, Bad and Ugly of AI - Wall Street Journal
The Architects of AI: Person of the Year 2025 – TIME
Why AI's winners won't be decided by benchmarks – Axios
Behind the Deal That Took Disney From AI Skeptic to OpenAI Investor - Wall Street Journal
Something Ominous is Happening in the AI Economy – The Atlantic
‘Circularity,’ Wall Street’s buzzword for the AI boom, is a flashing warning for investors - Washington Post
The New York Times sued Perplexity, an A.I. start-up, claiming that Perplexity repeatedly used its copyrighted work without permission. - New York Times
A Prompt Engineering Framework for Large Language Model-Based Mental Health Chatbots – National Library of Medicine
ChatGPT started the AI race. Now its lead is looking shaky. - Washington Post
China's DeepSeek debuts two new AI models – Bloomberg
A growing share of America’s hottest AI startups have turned to open Chinese AI models - NBC News
Nvidia's massive investments are shaping the AI bubble debate – Axios
Gemini is most ‘empathetic’ AI model, test shows - Semafor
A.I.’s Anti-A.I. Marketing Strategy - New York Times
Tech Titans Amass Multimillion-Dollar War Chests to Fight AI Regulation - Wall Street Journal
Fears About A.I. Prompt Talks of Super PACs to Rein In the Industry - New York Times
The world says the more you take, the more you have. Christ says the more you give, the more you are. -Frederick Buechner
Machine learning has helped researchers at the Johns Hopkins School of Medicine create lab-grown ‘tiny brains’ and uncover how neurons may malfunction in schizophrenia and bipolar disorder. The organoids could become an important testbed for psychiatric drug therapies. -SciTech Daily
Keep your face to the sunshine and you cannot see the shadows. -Helen Keller
Small Language Models (SLMs) – Requiring less data and training time than large language models, SLMs have fewer parameters, which makes them more practical for on-the-spot use and for smaller devices. Perhaps their biggest advantage is that they can be fine-tuned for specific tasks or domains. They also offer better privacy and security and are less prone to undetected hallucinations. Google’s Gemma is an example.
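For the technically curious, here is a minimal Python sketch of what running a small model locally can look like, using the Hugging Face transformers library. The model name (google/gemma-2-2b-it), the prompt, and the generation settings are illustrative assumptions rather than recommendations, and Gemma weights may require accepting Google’s license before downloading.

```python
# Minimal sketch: running a small language model (SLM) locally.
# Assumes the `transformers` and `torch` packages are installed and that the
# (illustrative) model weights can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "google/gemma-2-2b-it"  # a ~2B-parameter open model; illustrative choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt = "Summarize in one sentence why small language models suit on-device use:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short completion. A model this size can run on an ordinary laptop,
# which is the practical point of the SLM definition above.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```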
Jesus changed careers at 30. It’s probably unrealistic to think you must have it all figured out by 25.
AI makes human journalists more important than ever - Harvard’s Nieman Lab
AI is changing the relationship between journalist and audience. There is much at stake – The Guardian
Google will look beyond volume journalism - Harvard’s Nieman Lab
Why The Washington Post launched an error-ridden AI product - Semafor
The AI widgets taking over news sites and extracting our data. – Columbia Journalism Review
5 predictions for AI’s growing role in the media in 2026 – Fast Company
News product teams are uniquely positioned to unlock AI value - Harvard’s Nieman Lab
Google is experimentally replacing news headlines with AI clickbait nonsense – The Verge
In 2026, AI will outwrite humans - Harvard’s Nieman Lab
Journalist Caught Publishing Fake Articles Generated by AI – Futurism
Politico management violated key AI adoption safeguards, arbitrator finds – Harvard’s Nieman Lab
Announcing our new AI partnership with Microsoft – Business Insider
Florida nonprofit news reporters ask board to investigate their editor’s AI use - Harvard’s Nieman Lab
What the iconic writers of New Journalism can teach us in the AI era – Poynter
How an AI-mediated world transforms news consumption. – Columbia Journalism Review
The importance of independent media in the age of AI slop and algorithms. – The Verge
Journalists may see AI as a threat to the industry, but they’re using it anyway - Harvard’s Nieman Lab
Investigating a Possible Scammer in Journalism’s AI Era – The Local
Mapping news creators and influencers in social and video networks - Reuters Institute
The Creator Journalism Trust and Credibility Toolkit: A guide for funders - The Lenfest Institute
10 ways I use AI to be a better journalist - Fast Company
How publishers can defend themselves against AI bots stealing journalistic content – The Fix
Shame is universal, but the messages and expectations that drive shame are organized by gender. These feminine and masculine norms are the foundation of shame triggers, and here's why: If women want to play by the rules, they need to be sweet, thin, and pretty, stay quiet, be perfect moms and wives, and not own their power. One move outside of these expectations and BAM! The shame web closes in. Men, on the other hand, need to stop feeling, start earning, put everything in their place, and climb their way to the top or die trying. Push open the lid of your box to grab a breath of air, or slide that curtain back a bit to see what's going on, and BAM! Shame cuts you down to size.
Brené Brown, Daring Greatly
Researchers have built a robot with an onboard computer, sensors, and a motor — and the whole assembly measures less than 1 millimeter, smaller than a grain of salt. -Washington Post
I have never thought of writing for reputation and honor. What I have in my heart must come out; that is the reason why I compose. -Ludwig van Beethoven, born Dec 17, 1770
“What is to stop someone from sitting in the back of a classroom and whispering into their glasses to say, ‘Hey, I need help with solving this problem,’” said Luke Hobson, an assistant director of instructional design at MIT. “Every time I see someone saying, ‘Blue books are the future,’ I’m like, ‘So are we going to ban students from wearing glasses?’” -Inside Higher Ed
Life is about change, whether good or bad, and being able to adjust accordingly. -Okechukwu Keke
You Can’t AI-Proof the Classroom, Experts Say. Get Creative Instead. – Inside Higher Ed
Teachers are using software to see if students used AI. What happens when it's wrong? – NPR
Professors are turning to this old-school method to stop AI use on exams – Washington Post
I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse – New York Times
OpenAI Is Giving Teachers Their Own ChatGPT, Free Through 2027 - Newsweek
How AI Is Changing Higher Education – Chronicle of Higher Ed
AI-generated lesson plans fall short on inspiring students and promoting critical thinking – The Conversation
Universities are embracing AI: will students get smarter or stop thinking? – Nature
Is AI dulling our minds? Experts weigh in on whether tech poses threat to critical thinking, pointing to cautionary tales in use of other cognitive labor tools – The Harvard Gazette
Are we teaching students AI competence or dependence? - London School of Economics
AI Has Joined the Faculty - Chronicle of Higher Ed
To adopt or to ban? Student perceptions and use of generative AI in higher education – Nature
What are the clues that ChatGPT wrote something? - Washington Post
Stop Pretending You Know How to Teach AI - Chronicle of Higher Ed
Their Professors Caught Them Cheating. They Used A.I. to Apologize. - New York Times
Teaching Students to Think Critically About AI – Harvard Graduate School of Education
AI-powered textbooks fail to make the grade in South Korea – Rest of World
More college students are using AI for class. Their professors aren't far behind – NPR
From Yale to MIT to UCLA: The AI policies of the nation's biggest colleges – Mashable
A researcher’s view on using AI to become a better writer – Hechinger Report
I Want My Students’ Effort, Not AI’s Shortcut to Perfect Writing – Edsurge
AI-resistant strategies - Chronicle of Higher Ed
What’s working, what’s not on front lines of AI in classroom - The Harvard Gazette
AI Tutors Are Now Common in Early Reading Instruction. Do They Actually Work? – Edweek
Teaching: How to respond when students don’t want to work with AI - Chronicle of Higher Ed
In just the past several weeks, Google disclosed that hackers had used AI-powered malware in an active cyberattack, and Anthropic reported that its models had been used by Chinese state-backed actors to orchestrate a large-scale espionage operation with minimal human intervention. The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency. - Chuck Hagel writing in The Atlantic
Circularity – As AI companies invest in one another, money flows in a circle, from one company to another and then back again. In effect, they prop up one another’s finances, much like the “round-tripping” of the dot-com years: for example, a chipmaker invests billions in an AI start-up, which then spends much of that money buying the chipmaker’s hardware, so both report growth even though little new money has entered the system. The result is inflated performance without real profits. The hope is that this will change over time; the larger concern is that demand for AI’s new products might never catch up with the capacity the industry is building.
I spend days at a time in bed, staring at the ceiling and thinking of all the things I could be doing but can’t because I know I would do them imperfectly. I lose countless hours to inner monologues filled with self-hatred and all-or-nothing thinking. I don’t read anything, instead preferring to slowly crush myself with the existential weight of knowing that I will never be able to read all the things.
For a very long time, I thought that I did this because I was lazy. I figured that if I just worked a little harder, tried a little more, then I would be able to accomplish the things I set out to do. Failing to do them was a failure of my character. It was because I was a bad person, or at least bad at being a person.
I told myself that I had to get my act together; I had to do all of these things so that I could prove I wasn’t the worthless piece of garbage I thought I was. When I inevitably cracked under that pressure, I took it as proof that I was a worthless piece of garbage.
If all of this sounds repetitive, that’s because it is. It’s a vicious, repetitive, monotonous cycle. It moves at breakneck speed, but also not at all. Experiencing it is the most damning case against perfectionism I have ever come across. Expecting perfection only leaves you with two options: do everything right on the very first try, or don’t even bother. Which is actually only one option, since 9 times out of 10, human beings don't do things right on the first try.
Jenni Berrett writing in Ravishly
Purdue University will begin requiring that all of its undergraduate students demonstrate basic competency in artificial intelligence starting with freshmen who enter the university in 2026. - Forbes
Large Language Models (LLMs) - AI trained on billions of language uses, images and other data. An LLM predicts the next word or pixel in a pattern based on the user’s request. ChatGPT and Google’s Gemini (formerly Bard) are LLMs. The kinds of text LLMs can parse include grammar and language structure, word meaning and context (ex: the word green likely refers to a color when it appears near a word like “paint,” “art,” or “grass”), proper names (Microsoft, Bill Clinton, Shakira, Cincinnati), and emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).
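To make the “predict the next word” idea concrete, here is a minimal Python sketch of next-token prediction using the Hugging Face transformers library and the small, freely downloadable GPT-2 model. The prompt is an invented example chosen to echo the green/grass illustration above; commercial LLMs do the same thing at vastly larger scale.

```python
# Minimal sketch of next-token prediction, the core operation of an LLM.
# Assumes `transformers` and `torch` are installed; GPT-2 is used only
# because it is small and openly downloadable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The grass outside the studio was painted a bright shade of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # scores for every vocabulary token at every position
next_token_logits = logits[0, -1]        # scores for the token that would come next
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most likely continuations of the prompt.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>12}  {p.item():.3f}")
```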
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved