Throwing Good Money after Bad

Imagine a company that has already spent $50 million on a project. The project is now behind schedule and the forecasts of its ultimate returns are less favorable than at the initial planning stage. An additional investment of $60 million is required to give the project a chance. An alternative proposal is to invest the same amount in a new project that currently looks likely to bring higher returns. What will the company do? All too often a company afflicted by sunk costs drives into the blizzard, throwing good money after bad rather than accepting the humiliation of closing the account of a costly failure.

[This] fallacy keeps people for too long in poor jobs, unhappy marriages, and unpromising research projects. I have often observed young scientists struggling to salvage a doomed project when they would be better advised to drop it and start a new one. Fortunately, research suggests that at least in some contexts the fallacy can be overcome. [It] is taught as a mistake in both economics and business courses, apparently to good effect: there is evidence that graduate students in these fields are more willing than others to walk away from a failing project.

Daniel Kahneman, Thinking, Fast and Slow

AI-generated Ratings for Opinion Pieces

Some Los Angeles Times opinion pieces will now be published with an AI-generated rating of their political content, and an AI-generated list of alternative political views on that issue. The AI tool “operates independently” from the paper’s human journalists, and “the AI content is not reviewed by journalists before it is published.” - The Guardian

Self-Awareness and Critical Thinking

At the root of our effectiveness is our ability to grasp the world around us and to take the measure of our own performance. We are constantly making judgments about what we know and don't know, and whether we're capable of handling a task or solving a problem. As we work at something, we keep an eye on ourselves, adjusting our thinking or actions as we progress.

Monitoring your own thinking is what psychologists call metacognition (meta is Greek for "about"). Learning to be accurate self-observers helps us stay out of blind alleys, make good decisions, and reflect on how we might do better next time. An important part of this skill is being sensitive to the ways we can delude ourselves. One problem with poor judgment is that we usually don't know when we've got it. Another problem is the sheer scope of the ways our judgment can be led astray.

To become more competent, or even expert, we must learn to recognize competence when we see it in others, become more accurate judges of what we ourselves know and don't know, adopt learning strategies that get results, and find objective ways to track our progress.

Peter C. Brown and Henry L. Roediger III, Make It Stick: The Science of Successful Learning

Spiritual Friendship

A spiritual friend isn’t looking to get ahead. This friend weeps with you in anxiety, rejoices with you in prosperity, seeks with you in doubts. Nothing is faked; everything is in the open. A relationship that grows into something holy, voluntary, and true is one of life’s greatest pleasures and a reward in itself. It’s a “wondrous consolation” to have someone in whom your spirit can rest, to whom you can simply pour out your soul.

Karen Wright Marsh, Vintage Saints and Sinners

Setting the Standard

Excellent performers judge themselves differently than most people do. They're more specific, just as they are when they set goals and strategies. Average performers are content to tell themselves that they did great or poorly or okay.  

By contrast, the best performers judge themselves against a standard that's relevant for what they're trying to achieve. Sometimes they compare their performance with their own personal best; sometimes they compare it with the performance of competitors they're facing or expect to face; sometimes they compare it with the best known performance by anyone in the field.  

Any of those can make sense; the key, as in all deliberate practice, is to choose a comparison that stretches you just beyond your current limits. Research confirms what common sense tells us, that too high a standard is discouraging and not very instructive, while too low a standard produces no advancement.  

Geoff Colvin, Talent Is Overrated

How to Spot a Liar

When an international team of researchers asked some 2,300 people in 58 countries to respond to a single question — “How can you tell when people are lying?” — one sign stood out: In two-thirds of responses, people listed gaze aversion. A liar doesn’t look you in the eye. Twenty-eight percent reported that liars seemed nervous, a quarter reported incoherence, and another quarter that liars exhibited certain little giveaway motions.

It just so happens that the common wisdom is false.

Why do we think we know how liars behave? Liars should avert their eyes. They should feel ashamed and guilty and show the signs of discomfort that such feelings engender. And because they should, we think they do.

The desire for the world to be what it ought to be and not what it is permeates experimental psychology as much as writing, though. There’s experimental bias and the problem known in the field as “demand characteristics” — when researchers end up finding what they want to find by cuing participants to act a certain way. It’s also visible when psychologists choose to study one thing rather than another, or dismiss evidence that doesn’t mesh with their worldview while embracing that which does.

Maria Konnikova writing in the New York Times

AI Definitions: Knowledge distillation

Knowledge distillation (KD) - A machine learning technique that transfers what a large pre-trained “teacher” model has learned to a smaller “student” model. The student is trained to mimic the teacher’s predictions. The smaller model is faster and more efficient, better suited to real-time decisions, and it is easier to build explainability into its structure. KD is widely used in deep learning, particularly to compress massive deep neural networks.
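A minimal sketch of the core idea in plain Python (the function names, temperature value, and logits below are illustrative, not taken from any particular library): the student is trained to match the teacher's temperature-softened output distribution, typically by minimizing a KL-divergence loss.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces a softer distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between the softened teacher and student distributions.
    # The T*T factor is the common convention for keeping gradient
    # magnitudes comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T * T) * kl

# Hypothetical logits for one example; in practice this loss is usually
# combined with ordinary cross-entropy on the true labels.
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(teacher, student)
```

During training, the student's weights are updated to drive this loss toward zero, which pulls its predictions toward the teacher's.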

More AI definitions here.

Future You

Future You is a new interactive artificial-intelligence platform that lets users create a virtual older self: a chatbot that looks like an aged version of the user and is personalized with information the user supplies. The idea is that if people can see and talk to their older selves, they will be able to think about them more concretely, and make changes now that will help them achieve the future they hope for. -Wall Street Journal

21 Articles about AI & Legal Issues

Musicians including Kate Bush and Billy Ocean released a “silent record” in outrage at a proposed change to British copyright law - New York Times

Google's AI previews erode the internet, US edtech company says in lawsuit - Reuters

Judge Throws Out Facial Recognition Evidence In Murder Case – Forbes  

AI 'hallucinations' in court papers spell trouble for lawyers - Reuters

Minnesota Grad Student Expelled for Allegedly Using AI Is Suing School – Gizmodo

Large Law Firm Sends Panicked Email as It Realizes Its Attorneys Have Been Using AI to Prepare Court Documents – Futurism 

Copyright Office Releases Second AI Report – JD Supra

News publishers sue Cohere for copyright and trademark infringement - Axios

Thomson Reuters scores early win in AI copyright battles in the US – Associated Press

Fake cases, judges’ headaches and new limits: Australian courts grapple with lawyers using AI – The Guardian

No. 42 law firm by head count could face sanctions over fake case citations generated by AI – ABA Journal

AI Bias Through the Lens of Antidiscrimination Law – Vanderbilt Law

Bias in Large Language Models—and Who Should Be Held Accountable – Stanford Law

AI’s Racial Bias Claims Tested in Court as US Regulations Lag – Bloomberg

Copyright Office Offers Assurances on AI Filmmaking Tools – Variety

AI making up cases can get lawyers fired, scandalized law firm warns - ArsTechnica

Alexi Says Its New AI Tool for Litigators Is Capable of Advanced Legal Reasoning – LawNext 

Nonprofit group joins Elon Musk’s effort to block OpenAI’s for-profit transition – Tech Crunch

The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence - National Law Review

AI’s Legal Storm: The Three Battles That Will Shape Its Future - Forbes

Guardian signs licensing deal with ChatGPT owner OpenAI – Press Gazette

An Arms Race of Research Misconduct

Retractions are rising in the medical research literature, even as more eyes examine peer-reviewed papers for accuracy. AI is powering an arms race in the world of research misconduct, making it easier both for scientific fraud to occur and for editors to identify and root it out. In 2002, 1 in 5,000 papers was retracted, Oransky said. By 2023, retractions had increased to 1 in 500 papers. -AAPS news magazine

Self-Reflection is Not Enough

Arthur Chickering writes that helping “students deepen their understanding about reaching for authenticity and spiritual growth ... starts and ends with self-reflection and employs that throughout.”

To say the journey is all about self-reflection is too restrictive. The learning adventure may dip into navel-gazing occasionally, but discovering the insights of other people who have gone before will not happen quickly. A fruitful journey requires a shift away from the self toward a focus on something greater. A student must sort through the dirt to discover valuable nuggets of truth. This often happens through reading the great thinkers and then wrestling with the questions and ideas we discover. We stand on the shoulders of others to peer down the road a bit further than we could on our own. We can waste time and energy by insisting on clearing a path by ourselves.

The Chickering quote comes from his book "Encouraging Authenticity and Spirituality in Higher Education." Chickering understands the value of encouraging students to read great works. But his goal, he writes, is to help them evolve their “own” answers. However, good teaching goes beyond applauding young people just for coming to their own conclusions. Will the answers stand up to criticism? Can they effectively defend their positions? More importantly, can they live those answers?

Stephen Goforth

15 Articles about AI & Facial Recognition

Jobs on Creativity

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it; they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people.

Steve Jobs (born Feb. 24, 1955)

AI-generated Murder Stories

A so-called “true crime” YouTube channel has millions of views with AI-generated murder stories — none of which actually happened. There was no language on the channel’s homepage or in video descriptions to tell a viewer otherwise. If you looked at the comments on the channel’s videos, there were a lot of people who couldn’t tell the stories weren’t real. -404 Media