A Dream
A dream is not what you see in sleep. A dream is what does not let you sleep.
A new tool to counter California’s housing crisis: AI - Semafor
Can AI Replace Your Financial Adviser? Not Yet. But Wait. - Wall Street Journal
AI models can analyze thousands of words at a time. A Google researcher has found a way to increase that by millions. – Business Insider
New deep learning AI tool helps ecologists monitor rare birds through their songs – Phys.org
When AI Denies Your Loan Application, Should You Be Able to Appeal to a Human? – Wall Street Journal
Edith Piaf AI-Generated Biopic in the Works at Warner Music – Variety
ChatGPT and Midjourney bring back the dead with generative AI – Axios
How advances in AI can make content moderation harder — and easier - Semafor
Can AI Rescue Recycling? - Wall Street Journal
The US has a new plan for wielding AI to fight climate change - Semafor
AI Doom Calculator is predicting people's death - USA Today
Jeff Bezos Bets on a Google Challenger Using AI to Try to Upend Internet Search - Wall Street Journal
The imperative person has very idealistic expectations. Only the best is acceptable. Frailties, common to our humanness, are despised. The result is a strong tendency to look upon anything less than ideal with disdain. That's why imperative people often admit, “I get irritated when other people make mistakes,” or “I tend to do an important job myself because someone might not do it right,” or “I get impatient when other people can't understand what needs to be done.”
So, clutching onto our high ideals, we tend to hold ourselves above others. False superiority is felt. Condemnation is communicated. Annoyance is a constant companion. Relationships suffer. All the while, the imperative person must cling to correctness.
Les Carter, Imperative People: Those Who Must Be in Control
The greatest use of life is to spend it for something that will outlast it. -William James (born: Jan. 11, 1842)
Employees want ChatGPT at work. Bosses worry they’ll spill secrets. – Washington Post
Panic and possibility: What workers learned about AI in 2023 – BBC
AI In The Workplace: Helpful Or Harmful? – JD Supra
How to use ChatGPT to make charts and tables – ZDnet
5 ChatGPT Prompts To Feel Invincible At Work – Forbes
Despite Office Bans, Some Workers Still Want to Use ChatGPT – Wall Street Journal
New Gen Z graduates are fluent in AI and ready to join the workforce – Washington Post
A Guide to Collaborating With ChatGPT for Work - Wall Street Journal
AI bots lack one critical skill for customer service jobs – Tech Target
10 most in-demand generative AI skills – CIO
The Do’s and Don’ts of Using Generative AI in the Workplace - Wall Street Journal
"Real isn't how you are made," said the Skin Horse. "It's a thing that happens to you. When a child loves you for a long, long time, not just to play with, but REALLY loves you, then you become Real."
“Does it hurt?" asked the Rabbit.
"Sometimes," said the Skin Horse, for he was always truthful. "When you are Real you don't mind being hurt."
"Does it happen all at once, like being wound up," he asked, "or bit by bit?"
"It doesn't happen all at once," said the Skin Horse. "You become. It takes a long time. That's why it doesn't happen often to people who break easily, or have sharp edges, or who have to be carefully kept. Generally, by the time you are Real, most of your hair has been loved off, and your eyes drop out and you get loose in the joints and very shabby. But these things don't matter at all, because once you are Real you can't be ugly, except to people who don't understand."
Margery Williams, The Velveteen Rabbit
I’m still wounded. I’ve learned there is no finish line for healing. But my wounds have meaning now — and for that, and for the people who have made it possible, I will be forever grateful. -Banning Lyon
There are countless credible accusations of (academic) misconduct that go uncorrected; I myself have published articles challenging the integrity of hundreds of papers. The majority of them have not been retracted, corrected or even remarked upon. I would wager that most reasonably large universities (my own included) have faculty members who are known to have plagiarized, fabricated, falsified, claimed undue credit, hidden financial conflicts of interest or misbehaved in numerous other ways and who have seemingly gone unpunished.
New York University professor Charles Seife writing in the New York Times
Curiosity is a muscle. The more you use it, the more it can do.
When it comes to using ChatGPT at work, some business leaders believe that soft skills will be crucial in the age of AI. Earlier this month, Aneesh Raman, a vice president at LinkedIn, said that communication, creativity, and flexibility are skills that will set employees apart in the workforce as opposed to technical skills like coding. Perhaps doubling down on what makes you human may be what saves you from being replaced by AI. -Aaron Mok
People who can't communicate think everything is an argument. And people who lack accountability think everything is an attack.
Studies this year of ChatGPT in legal analysis and white-collar writing chores have found that the bot helps lower-performing people more than it does the most skilled. On a task that required reasoning based on evidence, however, ChatGPT was not helpful at all. Here, ChatGPT lulled employees into trusting it too much. Unaided humans had the correct answer 85 percent of the time. People who used ChatGPT without training scored just over 70 percent. Those who had been trained did even worse, getting the answer only 60 percent of the time. In interviews conducted after the experiment, “people told us they neglected to check because it’s so polished, it looks so right.”
Read more in The New York Times
A large American health-care provider, Ochsner Health System, introduced a rule that workers must make eye contact and smile whenever they walk within ten feet of another person in the hospital. Pret A Manger sends in mystery shoppers to visit every outlet regularly to see if they are greeted with the requisite degree of joy. Pass the test and the entire staff gets a bonus—a powerful incentive for workers to turn themselves into happiness police. Companies have a right to ask their employees to be polite when they deal with members of the public. They do not have a right to try to regulate their workers’ psychological states and turn happiness into an instrument of corporate control.
Companies would be much better off forgetting wishy-washy goals like encouraging contentment. They should concentrate on eliminating specific annoyances, such as time-wasting meetings and pointless memos. Instead, they are likely to develop ever more sophisticated ways of measuring the emotional state of their employees. Academics are already busy creating smartphone apps that help people keep track of their moods, such as Track Your Happiness and Moodscope. It may not be long before human-resource departments start measuring workplace euphoria via apps, cameras and voice recorders.
Schumpeter in The Economist
Find what you are good at. Find what you have a passion for doing. People will pay you good money to do the things that fit within both circles. No one will be willing to pay for your "C minus" work (or not very much). So forget about bringing your "fours" up to "sixes" (on a scale of one to ten). Focus on getting your "eights" up to "nines" and your "nines" up to "tens." (A bit of an oversimplification, but you get the idea.)
Stephen Goforth
A new study “recruited management consultants from Boston Consulting Group.” One of the tasks was to brainstorm about a new type of shoe, sketch a persuasive business plan for making it and write about it persuasively. Some researchers had believed only humans could perform such creative tasks. They were wrong. The consultants who used ChatGPT produced work that independent evaluators rated about 40 percent better on average. In fact, people who simply cut and pasted ChatGPT’s output were rated more highly than colleagues who blended its work with their own thoughts. And the A.I.-assisted consultants were more than 20 percent faster.
Read more in The New York Times
AI’s big test: Making sense of $4 trillion in medical expenses - Politico
How to Use ChatGPT for Health: Doctors, Professionals Give Tips - Bloomberg
Medical AI Tools Can Make Dangerous Mistakes. Can the Government Help Prevent Them? - WSJ
UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges – Ars Technica
AI that reads brain scans shows promise for finding Alzheimer’s genes – Nature
New A.I. Tool Diagnoses Brain Tumors on the Operating Table – New York Times
Health data in the UK is about to flow more freely, like it or not (podcast) – The Guardian
Doctors Wrestle With A.I. in Patient Care, Citing Lax Oversight – New York Times
Researchers at Northwestern Medicine have created a generative AI system that can create text reports interpreting chest radiographs as accurately as radiologists. – Health IT Analytics
Where healthcare needs to focus for AI – Fast Company
How to Use ChatGPT for Cognitive Behavioral Therapy - MakeUseOf
Balancing The Pros And Cons Of AI In Healthcare – Forbes
Google reveals new generative AI models for healthcare – Health Care Dive
Eliminating Racial Bias in Health Care AI – Yale School of Medicine
The passion that lies within you must be discovered. -Laurie Calzada
It is certainly worthwhile getting another perspective from qualified friends about the decisions you face. Accepting advice is critical to raising the bar — as long as you continue to own your work and not allow others to take over.
Stephen Goforth
Some tech leaders fear AI. ScaleAI is selling it to the military. - Washington Post
Israel is using an AI system to find targets in Gaza. Experts say it's just the start - NPR
Pentagon's AI initiatives accelerate hard decisions on lethal autonomous weapons – Niagara Gazette
A.I. Killer Drones Are Becoming Reality. Nations Disagree on Limits - New York Times
Scale AI wants to be America’s AI arms dealer to compete with China - Washington Post
NGA is looking closer at how large language models and data labeling can further the progress of artificial intelligence across the military – Breaking Defense
Military AI’s Next Frontier: Your Work Computer - Wired
A.I. Brings the Robot Wingman to Aerial Combat - New York Times
CIA Builds Its Own Artificial Intelligence Tool in Rivalry With China - Bloomberg
Let’s Talk About AI on the Battlefield - Washington Post
Air Force Secretary: Military needs AI to augment human capabilities - Space News
U.S. not ready for era of robotic, AI world wars - Axios
The militarized AI risk that’s bigger than “killer robots” - Vox
Autonomous drones are rapidly changing combat—a new one aims to gain an edge with jet power and AI - Wired
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved