The Educated Mind
It is the mark of an educated mind to be able to entertain a thought without accepting it. –Aristotle
AGI (Artificial General Intelligence) – A machine that has the capacity to understand or learn any intellectual task that a human being can. Rather than focusing on solving specific problems (like Deep Blue, which was good at chess), this type of AI has broader uses and may possess seemingly human-level intelligence to learn and adapt. Scientists have had difficulty defining human intelligence and disagree as to what would count as AGI. Regardless of where they draw the line, most experts say AGI is at least decades away. Scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Beyond AGI lies the more speculative goal of "sentient AI," where the programs become aware of their existence with feelings and desires.
More AI definitions here
False messages from childhood (not necessarily verbalized):
a. Measure up (you’re climbing a ladder to get ahead, and when you get there it’s already been moved 3 rungs up)
b. Don’t let your guard down. People won’t like you.
c. You can’t trust a man until he’s 6 feet under
d. Sex is dirty. So save it for the one you love.
e. Good Christians don’t show negative emotions
You must let go of false messages from your childhood and carry your OWN cross. Not someone else’s.
What mottos have you had to battle and what effect have they had on your life?
David Seamands
New AI battle: White House vs. Anthropic – Axios
The AI dilemma: To compete with China, the U.S. needs Chinese talent – Rest of World
China now leads the U.S. in this key part of the AI race – Washington Post
Just How Bad Would an AI Bubble Be? – The Atlantic
Public figures used to be off-limits in AI-generated video. A new Tow Center analysis shows how platforms are normalizing the practice. – Columbia Journalism Review
Morgan Stanley warns the AI boom may be running out of steam - Quartz
America is now one big bet on AI – The Financial Times
There Are Two Economies: A.I. and Everything Else – New York Times
AI is reshaping childhood in China – Rest of World
Google is blocking AI searches for Trump and dementia – The Verge
A.I. Is Driving a Stock Market Rally in China, Too - New York Times
Police are drowning in data. Could a chatbot help? – Washington Post
One Law Sets South Korea’s AI Policy—and One Weak Link Could Break It – ITIF
Chasing AGI could backfire for US, experts say – Semafor
How the Trump administration is using AI to ramp up immigration enforcement - CNN
Uncommon bonds: Cracking down on AI chatbots – Semafor
Inside Democrats' emerging AI playbook – Axios
Meta created its own super PAC to politically kneecap its AI rivals – The Verge
Albania appoints AI bot as minister to tackle corruption – Yahoo News
Regulators Are Digging Into A.I. Chatbots and Child Safety - New York Times
A short-term profit grab risks eroding America’s biggest advantage in the AI race. - Washington Post
Silicon Valley Launches Pro-AI PACs to Defend Industry in Midterm Elections – Wall Street Journal
Do AI Companies Actually Care About America? – The Atlantic
AI video app tops the download charts — horrifying many families of dead celebrities – Washington Post
I’m a Screenwriter. Is It All Right if I Use A.I.? – New York Times
ChatGPT resurrected my dead father – The Atlantic
A stunning scientific accomplishment: Computers can now design new viruses that can then be created in the lab - Washington Post
Hype and harm: Why we must ask harder questions about AI and its alignment with human values - Brookings
AI safety tool sparks student backlash after flagging art as porn, deleting emails - Washington Post
Turning “human in the loop” from a catchphrase into a design practice – Medium
People, not corporations, should set the rules that govern A.I. - New York Times
AI-generated medical data can sidestep usual ethics review, universities say – Nature
AI for Scientific Integrity: Detecting Ethical Breaches, Errors, and Misconduct in Manuscripts – Frontiers in AI
Chatbot Cheating in Ethics Class – Christianity Today
Politico’s recent AI experiments shouldn’t be subject to newsroom editorial standards, its editors testify – Nieman Lab
Does AI owe you for your small part in creating it? - Axios
Explainability in the age of large language models for healthcare – Nature
Responsible by Design – Why AI Must Be Human-First – Unite AI
Can You Choose an A.I. Model That Harms the Planet Less? - New York Times
ChatGPT isn’t great for the planet. Here’s how to use AI responsibly. - Washington Post
LLM-as-a-judge easily fooled by a single token, study finds – BD TechTalks
Ethical uses of generative AI in the practice of law – Reuters
Ethical Obligations to Inform Patients About Use of AI Tools – Stanford
The Ethical Problems With AI Sermons – Patheos
Can fake faces make AI training more ethical? – Science News
Education report calling for ethical AI use contains over 15 fake sources – ArsTechnica
Bringing AI to medicine requires philosophers, cognitive scientists, and ethicists – Stat News
Ethicists flirt with AI to review human research – Science.org
AI Supports Dishonesty in Humans, Making It Easier for Users to Cheat With an Accomplice – Discover
Even for those students committed to doing their own work, AI poses a threat that is quieter and harder to measure: that they will go off to college and find the experience of learning far more solitary, far lonelier, than ever before. That is the threat that AI increasingly poses to higher education today: not that it will steal our words, but that it will steal our ability to think and work together. - Chronicle of Higher Ed
Even though cartoons and skits over the last decade have made fun of exotic coffee drinks by suggesting it’s hard to just get a regular coffee these days, this has never happened. No one is being turned away from Starbucks for asking to buy a black coffee. So why is this scenario repeated as if regular coffee drinkers are being excluded? Jason Pargin explains:
This exaggeration is of a world that doesn’t exist. No one took his black coffee from him. All that happened is that the range of options for other people was expanded. He perceived that as persecution, as if his choice had been taken away. Most people are not satisfied to simply have the option to live life the way they want. They also want to feel normal. They want to walk around and see that most other people have made the same choice they have made. If they see that, over time, their preference has become less popular and, even worse, is seen as base or unsophisticated, they will perceive the mere existence of those other options as a criticism of them, even if they’ve never heard anyone voice that criticism. There is basic psychological comfort in knowing that you are conforming to what the world wants and in the reassurance that that world is not going to change.
It’s not about the coffee. It’s the fear that if everybody else stops drinking coffee the way I drink it then I will become an outcast. That is scary to someone who is suddenly remembering how they have always treated outcasts.
Fraud, AI slop and huge profits: is science publishing broken? (a podcast) – The Guardian
AI-generated ‘participants’ can lead social science experiments astray, study finds – Science
Fake microscopy images generated by AI are indistinguishable from the real thing. – Chemistry World
Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up to accelerate discoveries in physics, chemistry and other fields. – New York Times
Far more authors use AI to write science papers than admit it, publisher reports – Science
A stunning scientific accomplishment: Computers can now design new viruses that can then be created in the lab – Washington Post
The Machines Finding Life That Humans Can’t See – The Atlantic
AI models are using material from retracted scientific papers – MIT Tech Review
AI-generated scientific hypotheses lag human ones when put to the test - Science
AI for Scientific Integrity: Detecting Ethical Breaches, Errors, and Misconduct in Manuscripts - Frontiers
AI reveals unexpected new physics in dusty plasma - PhysOrg
AI will soon be able to audit all published research – what will that mean for public trust in science? – The Conversation
AI, peer review and the human activity of science – Nature
AI can’t learn from what researchers don’t share – Research Professional News
Researchers claim their AI ‘thinks’ like a human — after training on 160 psychology studies - Nature
Large language models to accelerate organic chemistry synthesis - Nature
AlphaGenome is an AI-powered platform aiming to predict how genetic code variants lead to different diseases – Stat News
AI, bounties and culture change, how scientists are taking on errors - Nature
Make all research data available for AI learning, scientists urge – Research Professional News
The rising danger of AI-generated images in nanomaterials science and what we can do about it - Nature
Suspense, in some form, is what keeps people watching anything longer than a TikTok clip, and it’s where A.I. flounders. A writer, uniquely, can juggle the big picture and the small one, shift between the 30,000-foot view and the three-foot view, build an emotional arc across multiple acts, plant premonitory details that pay off only much later and track what the audience knows against what the characters know. A recent study found that large language models simply couldn’t tell how suspenseful readers would find a piece of writing. -New York Times
Faith supplies staying power. It contains dynamic to keep one going when the going is hard. Anybody can keep going when the going is good, but some extra ingredient is needed to enable you to keep fighting when it seems that everything is against you.
You may counter, “But you don’t know my circumstances. I am in a different situation than anybody else and I am as far down as a human being can get.”
In that case you are fortunate, for if you are as far down as you can get there is no further down you can go. There is only one direction you can take from this position, and that is up. So your situation is quite encouraging. However, I caution you not to take the attitude that you are in a situation in which nobody has ever been before. There is no such situation.
Practically speaking, there are only a few human stories and they have all been enacted previously. This is a fact that you must never forget – there are people who have overcome every conceivable difficult situation, even the one in which you now find yourself and which to you seems utterly hopeless. So did it seem to some others, but they found an out, a way up, a path over, a pass through.
Norman Vincent Peale, The Power of Positive Thinking
Harvard Medical School licenses consumer health content to Microsoft – Reuters
AI maps how a new antibiotic targets gut bacteria – MIT
AI can design toxic proteins. They’re escaping through biosecurity cracks. – Washington Post
Doctors develop AI stethoscope that can detect major heart conditions in 15 seconds – The Guardian
A stunning scientific accomplishment: Computers can now design new viruses that can then be created in the lab - Washington Post
The rising danger of AI-generated images in nanomaterials science and what we can do about it – Nature
Study looks at how biomedical journal editors-in-chief feel about AI use in their journals. - Springer
AI-generated medical data can sidestep usual ethics review, universities say - Nature
Study: Google's Gemma model downplays women's health needs compared to men's – Technology Magazine
Are AI Tools Making Doctors Worse at Their Jobs? – New York Times
ChatGPT Convinced 37-Year-Old Psychologist His Sore Throat Was Fine; Biopsy Revealed Stage 4 Cancer – Mashable
AI designs antibiotics to fight drug-resistant superbugs – Semafor
Study: Some doctors lost skills after just a few months of using AI – Bloomberg
Using generative AI, researchers design compounds that can kill drug-resistant bacteria – MIT
Man develops rare condition after ChatGPT query over stopping eating salt – The Guardian
Ethical Obligations to Inform Patients About Use of AI Tools – Stanford Law
Study finds AI is better than experts at differentiating between human- and AI-written stroke papers – AHA/ASA Journals
Bringing AI to medicine requires philosophers, cognitive scientists, and ethicists – Stat News
How AI Is Transforming Kidney Care – MedScape
AI Reads Your Tongue Color to Reveal Hidden Diseases – Scientific American
A Chinese AI tool can manage chronic disease — could it revolutionize health care? – Nature
With therapy hard to get, people lean on AI for mental health. What are the risks? – NPR
A new AI model can forecast a person’s risk of diseases across their life - Economist
Failure is only a temporary change in direction to set you straight for your next success.
The real threats AI poses come not from AI itself but from the humans who wield it. As an extension of human intelligence, it is a reflection of our own selves. When AI produces hateful or violent outputs, it is not because it has malicious intent but because it has integrated human hatreds into its programming. If it generates destructive malware, it is because someone intentionally requested it. If it is misaligned with our goals, it is because we were not clear in our commands. For now, AI remains a tool, and we should focus on harnessing and constraining it effectively. -Eric Oliver writing in the Washington Post
You’re at the beginning of your life with the entire world in front of you. Whatever happened before reaching this point is done and unchangeable. What lies ahead is entirely up to you. Get the chip off your shoulder and walk on. Allow your past to be a source of strength and direction, not the thing that keeps you from moving on with your life.
Alex McDaniel
Bank of England warns of potential AI bubble - Semafor
Publishers with AI licensing deals have seven times the clickthrough rate – Press Gazette
Morgan Stanley warns the AI boom may be running out of steam – Quartz
Meta Will Begin Using AI Chatbot Conversations to Target Ads - WSJ
ChatGPT’s new parental controls failed my test in minutes - The Washington Post
Perplexity AI rolls out Comet browser for free worldwide – CNBC
Google is blocking AI searches for Trump and dementia – The Verge
OpenAI Launches Video Generator App to Rival TikTok and YouTube – WSJ
Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up to accelerate discoveries in physics, chemistry and other fields. – New York Times
OpenAI’s New Sora Video Generator to Require Copyright Holders to Opt Out - WSJ
‘All-of-the-above’ approach needed to power AI boom, Nvidia sustainability chief says - Semafor
Musk’s xAI accuses rival OpenAI of stealing trade secrets in lawsuit – Washington Post
Spending on AI Is at Epic Levels. Will It Ever Pay Off? – WSJ
Turning “human in the loop” from a catchphrase into a design practice – Medium
The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence – Smashing Magazine
Why Meta Thinks It Can Challenge Apple in Consumer AI Devices – WSJ
Record labels claim AI generator Suno illegally ripped their songs from YouTube – The Verge
Microsoft looks to build AI marketplace for publishers – Axios
Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions – Wired
OpenAI launches ChatGPT Pulse, a paid feature that generates personalized subject matter briefs for users overnight – Tech Crunch
Can’t figure out a complicated problem? Talk about it out loud or doodle on some paper. Psychologists in Spain say their tests show that processing information verbally or visually is more effective than remaining silent and still. They put students in separate rooms and gave them the same problems to solve. The students who talked to themselves or drew pictures to map out solutions finished first and scored higher. Psychologist Jose Luis Villegas Castellanos says he isn’t sure why it works this way, but believes verbal and visual problem-solving creates greater opportunities to discover the right answers.
Stephen Goforth
Anxiety is, in Kierkegaard’s words, the “dizziness of freedom”—the cost of doing the business of being fully alive. - Arthur C. Brooks
Fraud, AI slop and huge profits: is science publishing broken? (a podcast) – The Guardian
AI-generated ‘participants’ can lead social science experiments astray, study finds – Science
AI tools could reduce the appeal of predatory journals – Nature
Fake microscopy images generated by AI are indistinguishable from the real thing. – Chemistry World
The Machines Finding Life That Humans Can’t See – The Atlantic
Can researchers stop AI making up citations? - Nature
AI models are using material from retracted scientific papers – MIT Tech Review
AI tool detects LLM-generated text in research papers and peer reviews – Nature
Prestige over merit: An adapted audit of LLM bias in peer review – Cornell University arXiv
Far more authors use AI to write science papers than admit it, publisher reports – Science
What do researchers acknowledge ChatGPT for in their papers? – London School of Economics
The rising danger of AI-generated images in nanomaterials science and what we can do about it – Nature
ChatGPT Fails to Flag Retracted and Problematic Articles – The Scientist
Beyond ‘we used ChatGPT’: a new way to declare AI in research – Research Professional News
Study looks at how biomedical journal editors-in-chief feel about AI use in their journals. – Springer
AI-generated medical data can sidestep usual ethics review, universities say – Nature
AI could be used for a Research Excellence Framework, says Royal Society president – Research Professional News
Can Generative AI Restore Hope or Result in a Decline in the Quest for Academic Integrity? – Sage
When AI rejects your grant proposal: algorithms are helping to make funding decisions - Nature
We risk a deluge of AI-written ‘science’ pushing corporate interests – here’s what to do about it – The Conversation
Here’s a prompt pack covering competitive research, strategy, UX design, content creation, and data analysis from the makers of ChatGPT.
The shelves are packed with titles like The Science of Getting Rich and The 7 Habits of Highly Effective People. There is no section marked “Managing Your Professional Decline.” But some people have managed their declines well.
At some point, writing one more book will not add to my life satisfaction; it will merely stave off the end of my book-writing career. The canvas of my life will have another brushstroke that, if I am being forthright, others will barely notice, and will certainly not appreciate very much. The same will be true for most other markers of my success. What I need to do, in effect, is stop seeing my life as a canvas to fill, and start seeing it more as a block of marble to chip away at and shape something out of.
Arthur C. Brooks writing in The Atlantic
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved