A Tiny Robot
Researchers have built a robot with an onboard computer, sensors, and a motor — and the whole assembly measures less than 1 millimeter — smaller than a grain of salt. -Washington Post
I have never thought of writing for reputation and honor. What I have in my heart must come out; that is the reason why I compose. -Ludwig van Beethoven, born Dec 17, 1770
Librarians Dumbfounded as People Keep Asking for Materials That Don’t Exist - Futurism
Self-reflection enhances large language models towards substantial academic response – Nature
Who Owns the Knowledge? Copyright, GenAI, and the Future of Academic Publishing – Arxiv
AI reviewers are here — we are not ready – Nature
More A than I: Testing for Large Language Model Plagiarism in Political Science – Political Science Now
Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’ – The Guardian
A new preprint server welcomes papers written and reviewed by AI – Science.org
Springer Nature retracts, removes nearly 40 publications that trained neural networks on ‘bonkers’ dataset – The Transmitter
Research Integrity in an Era of AI and Massive Amounts of Data – Sensible-med
UK funding body “opens up grant proposal data to explore using AI to smooth peer review” – Chemistry World
AI “Research” Papers Are Complete Slop, Experts Say – Futurism
AI use widespread in research offices, global survey finds – Research Professional News
The Royal Society journal Philosophical Transactions uses cover art that is AI generated – Neurodojo
Will AI Write the Next "Chapter" in Literature Reviews? – JMIR
Researchers say their AI system can “deliver rigorous and constructive feedback on scientific manuscripts” – The Scientist
How generative AI could make scientific publishing fairer, and more competitive – Science Business
An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart. – Wall Street Journal
Scientific Writing in the Age of Artificial Intelligence – JAMA
Elsevier unveils AI-assisted research workspace to speed up R&D – Fierce Healthcare
A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On – 404 Media
Research integrity conference hit with AI-generated abstracts – Retraction Watch
More than 200 Korean papers retracted over AI use. - The Dong-A Ilbo
AI unreliable in identifying retracted research papers, says study – Retraction Watch
This science sleuth revealed a retraction crisis at Indian universities – Nature
Google Scholar Labs Uses AI to Transform Academic Research - TechBuzz
“What is to stop someone from sitting in the back of a classroom and whispering into their glasses to say, ‘Hey, I need help with solving this problem,’” said Luke Hobson, an assistant director of instructional design at MIT. “Every time I see someone saying, ‘Blue books are the future,’ I’m like, ‘So are we going to ban students from wearing glasses?’” -Inside Higher Ed
Life is about change, whether good or bad, and being able to adjust accordingly. -Okechukwu Keke
You Can’t AI-Proof the Classroom, Experts Say. Get Creative Instead. – Inside Higher Ed
Teachers are using software to see if students used AI. What happens when it's wrong? – NPR
Professors are turning to this old-school method to stop AI use on exams – Washington Post
I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse – New York Times
OpenAI Is Giving Teachers Their Own ChatGPT, Free Through 2027 - Newsweek
How AI Is Changing Higher Education – Chronicle of Higher Ed
AI-generated lesson plans fall short on inspiring students and promoting critical thinking – The Conversation
Universities are embracing AI: will students get smarter or stop thinking? – Nature
Is AI dulling our minds? Experts weigh in on whether tech poses threat to critical thinking, pointing to cautionary tales in use of other cognitive labor tools – The Harvard Gazette
Are we teaching students AI competence or dependence? - London School of Economics
AI Has Joined the Faculty - Chronicle of Higher Ed
To adopt or to ban? Student perceptions and use of generative AI in higher education – Nature
What are the clues that ChatGPT wrote something? - Washington Post
Stop Pretending You Know How to Teach AI - Chronicle of Higher Ed
Their Professors Caught Them Cheating. They Used A.I. to Apologize. - New York Times
Teaching Students to Think Critically About AI – Harvard Graduate School of Education
AI-powered textbooks fail to make the grade in South Korea – Rest of World
More college students are using AI for class. Their professors aren't far behind – NPR
From Yale to MIT to UCLA: The AI policies of the nation's biggest colleges – Mashable
A researcher’s view on using AI to become a better writer – Hechinger Report
I Want My Students’ Effort, Not AI’s Shortcut to Perfect Writing – Edsurge
AI-resistant strategies - Chronicle of Higher Ed
What’s working, what’s not on front lines of AI in classroom - The Harvard Gazette
AI Tutors Are Now Common in Early Reading Instruction. Do They Actually Work? – Edweek
Teaching: How to respond when students don’t want to work with AI - Chronicle of Higher Ed
In just the past several weeks, Google disclosed that hackers had used AI-powered malware in an active cyberattack, and Anthropic reported that its models had been used by Chinese state-backed actors to orchestrate a large-scale espionage operation with minimal human intervention. The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency. - Chuck Hagel writing in The Atlantic
What: This session introduces grant professionals to advanced AI techniques for streamlining and enhancing the grant-seeking process, with a particular focus on using ChatGPT for funder research and analysis. Attendees will learn how to generate high-quality, targeted grant opportunities using strategic AI prompts, as well as how to assess and compare grant opportunities using AI - whether or not they have access to paid databases. Whether you're a solo grant writer or part of a larger team, you'll gain actionable tools to integrate AI more thoughtfully and effectively into your grant strategy.
Who: Krista Kurlinkus, in partnership with The Association of Consultants to Nonprofits.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Learning Lab
What: We’ll discuss how reporters and news organizations are navigating – or failing to navigate – these threats and challenges while exercising their First Amendment rights. We will also address what reporters need to know in case they are arrested or attacked while covering rallies or raids, or even taken to immigration facilities.
Who: Jeff Hermes, Deputy Director of the Media Resource Center.
When: 5 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National Association of Hispanic Journalists
What: Whether you're brand new to AI or looking to sharpen your skills, this introductory session will provide a clear and practical foundation for using ChatGPT effectively.
Who: Juliann Igo, GTM, OpenAI
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This webinar explores how academic libraries are integrating artificial intelligence into daily operations and services. Presenters will share practical, evidence-based examples of AI technology used in library environments, with a focus on coding and everyday applications.
Who: April Sheppard is Associate Dean of the Library at Arkansas State University. Matthew Chase is an Instructional Services Librarian at Cuyamaca College. Danielle Hassan is the Head of Library IT Services at the University of Alabama at Birmingham. Aaron Pahl is an Assistant Professor and Digital Curation Librarian at the University of Alabama at Birmingham and has presented frequently on the topic of AI use in libraries.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Association of Southeastern Research Libraries
What: We will review how authors can prepare and deliver great interviews in the New Year. We will discuss the “Dos and Don’ts” of setting up an interview and arranging your media message in advance.
Who: Lindsey Gobel is a freelance publicist and communications professional with over 15 years of experience.
When: 10:30 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Author Learning Center
What: Our Disability Narrative Webinar Series initiative is designed to empower journalists, storytellers, and advocates with the tools to create accurate, inclusive and impactful narratives about disability.
Who: Peter Torres Fremlin, Disability Debrief.
When: 11:30 am, Eastern
Where: Zoom
Cost: Free to members
Sponsor: Military Veterans in Journalism
What: You will learn how to demonstrate the impact and ROI of soft skills programs. You will also learn how to embed practical, scalable tools that reinforce day-to-day behavior change - an essential ingredient in ensuring that soft-skills programs deliver real ROI. The webinar will provide useful, practical tools and will be presented in an easy-to-understand format. You will walk away with resources immediately applicable as you set out to demonstrate the impact and ROI of your soft skills programs.
Who: Patti Phillips, Ph.D. CEO, ROI Institute; Rob Toomey President and Co-Founder, TypeCoach.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network
What: Hear from CMA members who advise comics journals, a fashion publication, a horror-fiction journal, a Black student newspaper and more. Learn how they work at those universities, and how there might be a great, undiscovered niche for a new publication at yours.
Who: Jessica Clary, Director of Student Media, VCU; Ben McNeely, Editorial Adviser, North Carolina State University; Kat Medina, Director of Student Media, SCAD.
When: 5 pm, Eastern
Where: Zoom
Cost: Free to members
Sponsor: College Media Association
What: Your newsroom has an ethics policy for granting confidentiality and anonymity to sources while balancing on-the-record credibility with doing no harm to the sources you cover. You would think nothing about that has changed when covering immigration, but new questions exist. Be aware of them and share some answers in this webinar.
Who: Lyle Muller is the professional adviser for Grinnell College’s student-run Scarlet & Black newspaper.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: iMediaCampus
What: We will cover the anatomy of an effective prompt, iterative refinement loops, and advanced techniques. Whether you’re exploring ChatGPT for professional, academic, or personal use, you’ll leave with practical tools and repeatable strategies to make ChatGPT more predictable, controllable, and valuable in real-world scenarios. This session is best suited for a beginner-intermediate audience.
Who: Lauren Oliphant, Solutions Engineer, OpenAI.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: Best practices for creating social video to engage millennial and Gen Z audiences with their local communities. While this research was developed within TV newsrooms, the recommendations here are valuable for any newsroom working to develop or refine their social strategy.
Who: Mike Beaudet, Northeastern University and others.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: Do you need an agent, and how do you get one if you do? How do contracts and royalties work? How is a book publicized and marketed successfully? Learn the answers to these and other questions from the experts.
Who: Jane Dystel, the president of Dystel, Goderich & Bourret Literary Management and a literary agent since 1986; Thomas Maier, award-winning former Newsday journalist and author of nine books, including two made into prime-time television shows; Lisa Pulitzer, a New York Times bestselling author and veteran ghostwriter who has authored, co-authored and/or ghostwritten more than 60 nonfiction books; Moderated by Press Club of Long Island board member Bill Bleyer, former Newsday reporter and author of seven regional history books
When: 7 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England Newspaper & Press Association
Circularity – As AI companies invest in each other, money flows in a circular fashion, from one company to another and then back again. In effect, they prop up one another’s finances, in a similar fashion to what was known as “round-tripping” during the dot-com years. The result is inflated performance without real profits. The hope is that this will change over time; the larger concern is that demand for AI’s new products might never catch up with the capacity the industry is building.
I spend days at a time in bed, staring at the ceiling and thinking of all the things I could be doing but can’t because I know I would do them imperfectly. I lose countless hours to inner monologues filled with self-hatred and all-or-nothing thinking. I don’t read anything, instead preferring to slowly crush myself with the existential weight of knowing that I will never be able to read all the things.
For a very long time, I thought that I did this because I was lazy. I figured that if I just worked a little harder, tried a little more, then I would be able to accomplish the things I set out to do. Failing to do them was a failure of my character. It was because I was a bad person, or at least bad at being a person.
I told myself that I had to get my act together; I had to do all of these things so that I could prove I wasn’t the worthless piece of garbage I thought I was. When I inevitably cracked under that pressure, I took it as proof that I was a worthless piece of garbage.
If all of this sounds repetitive, that’s because it is. It’s a vicious, repetitive, monotonous cycle. It moves at breakneck speed, but also not at all. Experiencing it is the most damning case against perfectionism I have ever come across. Expecting perfection only leaves you with two options: do everything right on the very first try, or don’t even bother. Which is actually only one option, since 9 times out of 10, human beings don't do things right on the first try.
Jenni Berrett writing in Ravishly
Purdue University will begin requiring that all of its undergraduate students demonstrate basic competency in artificial intelligence starting with freshmen who enter the university in 2026. - Forbes
Large Language Models (LLMs) - AI trained on billions of examples of language, images and other data. An LLM predicts the next word or pixel in a pattern based on the user’s request. ChatGPT and Google Bard are LLMs. The features of text an LLM can parse include grammar and language structure, word meaning and context (e.g., the word “green” likely refers to a color when it appears near words like “paint,” “art,” or “grass”), proper names (Microsoft, Bill Clinton, Shakira, Cincinnati), and emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).
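For readers who want to see the “predict the next word” idea in action, here is a minimal sketch in Python. It assumes the freely available Hugging Face transformers library and the small open GPT-2 model, which stand in here for the much larger models behind tools like ChatGPT; the prompt sentence is invented for illustration.

from transformers import pipeline

# Load a small, publicly available language model (GPT-2) as a stand-in
# for the far larger models behind commercial chatbots.
generator = pipeline("text-generation", model="gpt2")

# The model continues the pattern by repeatedly predicting a likely next word.
# Note how the meaning of "green" is shaped by nearby words like "brush."
prompt = "The artist dipped her brush in green"
result = generator(prompt, max_new_tokens=12, num_return_sequences=1)
print(result[0]["generated_text"])

Run on its own, a small model like GPT-2 produces rough continuations; the point is only that everything the entry above describes, from word meaning to tone, is learned as patterns that make the next word easier to predict.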
There's a little bit of evidence that adults who are novelists or musicians, for example, tend to remember the imaginary friends they had when they were children. It's as if they are staying in touch with those childhood abilities in a way that most of us don't. Successful creative adults seem to combine the wide-ranging exploration and openness we see in children with the focus and discipline we see in adults.
Alison Gopnik, The Philosophical Baby
Hallucinations – When an AI provides responses that are inaccurate or not based on facts. Generative AI models are designed to generate data that is realistic or distributionally equivalent to the training data and yet different from the actual data used for training. This is why they are better at brainstorming than reflecting the real world and why they should not be treated as sources of truth or factual knowledge. Generative AI models can answer some questions correctly, but this is not what they are designed and trained to do. However, hallucinating AIs can be very useful to researchers, providing innovative insights that speed up the scientific process.
The line separating good and evil passes, not through states, nor between classes nor between political parties either, but right through every human heart. –Alexander Solzhenitsyn (born Dec. 11, 1918)
5 AI bots took our tough reading test. One was smartest — and it wasn’t ChatGPT. – Washington Post
If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious – Futurism
The People Outsourcing Their Thinking to AI – The Atlantic
What Is Agentic A.I., and Would You Trust It to Book a Flight? – New York Times
Staying Ahead of AI in Your Career – KD Nuggets
How to talk to grandma about ChatGPT - Axios
Research says being 'rude' to ChatGPT makes it more efficient — I ran a politeness test to find out – Tom’s Guide
SEO Is Dead: Welcome To GEO And Generative AI Search – Forbes
She used ChatGPT to win the Virginia lottery and then donated every dollar – Washington Post
The risks of giving ChatGPT more personality – Axios
The state of AI in 2025: Agents, innovation, and transformation – McKinsey
Poll shows a generational divide in how Americans use AI for work, creativity, and personal connection – Milwaukee Independent
AI for therapy? Some therapists are fine with it — and use it themselves. – Washington Post
These students, tech workers and artists just say no to AI - The Washington Post
5 Tips When Consulting ‘Dr.’ ChatGPT – New York Times
A Googler explains how to “meta prompt” for incredible Veo videos – Google
A Beginner’s Guide To Building AI Agents - Bernard Marr
6 AI mistakes you should avoid when using chatbots - The Washington Post
I’m an A.I. Developer. Here’s How I’m Raising My Son. - New York Times
Is it ok for politicians to use AI? Survey shows where the public draws the line – The Conversation
When should students begin learning about AI? – K-12 Dive
It’s become common for writers to mock AI’s stilted, wooden, and em-dash-heavy writing style. But with some gentle coaxing, AI is much better at writing than professional writers want to admit. In one 2025 study, three top AI models were pitted against MFA-trained writers. In initial tests, expert readers clearly preferred the human writing. But once researchers fine-tuned ChatGPT on an individual author’s full body of work, the results flipped. Suddenly, experts preferred the AI’s writing and often couldn’t tell whether it came from a human or a machine. – Derek Thompson
Foundation Models – Sitting at the core of many generative AI tools, a foundation model is the starting point from which many machine learning models are built. These deep-learning neural networks are trained on massive datasets. In contrast with traditional machine learning models, which typically perform specific tasks, foundation models are adaptable and able to perform a wide range of tasks. These models are sometimes called Large X Models or LXMs. A video explanation.
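The “adaptable” point is easiest to see in code. The sketch below is offered only as an illustration: it assumes the Hugging Face transformers library and one publicly released checkpoint (facebook/bart-large-mnli), and it borrows a headline from the news list below as sample text. One pretrained model handles whatever classification task it is handed at run time, with no task-specific retraining.

from transformers import pipeline

# One pretrained foundation model, reused for tasks it was never explicitly trained on.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

headline = "IBM and NASA made an open-source AI model for predicting solar weather"

# The same model sorts the same text into whatever categories we supply on the fly.
print(classifier(headline, candidate_labels=["science", "sports", "finance"]))
print(classifier(headline, candidate_labels=["press release", "opinion piece", "research news"]))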
For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.
Liv Boeree writing in Vox
Google to launch its first AI-powered glasses next year – CNBC
How artificial intelligence can help achieve a clean energy future – MIT
The AI model that uses sounds like coughs & sniffles to predict early signs of disease – Mashable
What Investing in the Age of AI Will Look Like – Wall Street Journal
OpenAI looks to replace the drudgery of junior bankers’ workload - Bloomberg
AI and the Fountain of Youth - Wall Street Journal
Police are drowning in data. Could a chatbot help? – Washington Post
AI Is Going to Consume a Lot of Energy. It Can Also Help Us Consume Less. - Wall Street Journal
ChatGPT-powered dolls are becoming caregivers in South Korea – Semafor
AI-driven private schools are popping up around the U.S., from North Carolina to Florida – Axios
IBM and NASA made an open-source AI model for predicting solar weather – Engadget
Want to take better photos? Google thinks AI is the answer. - Washington Post
Determining whether accurate AI systems can apply that process to a different area – MIT
Air Force aiming to turbocharge wargaming with AI – Defense Scoop
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved