AI Definitions: Narrow AI

Narrow AI – The use of artificial intelligence for a single, specific task or a limited range of tasks. General AI, for instance, would mean an algorithm capable of playing any kind of board game, while narrow AI limits the machine to one specific game, such as chess or Scrabble. Google Search, Alexa, and Siri answer questions using narrow AI algorithms. These systems can often outperform humans on the tasks they were built for, but they tend to fail when presented with situations outside the problem space they were trained on. In effect, narrow AI can’t transfer knowledge from one field to another. The narrow AI techniques we have today fall into two broad categories: symbolic AI and machine learning.
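To make the two categories concrete, here is a toy sketch (illustrative only, not from the definition above) of the same narrow task, flagging spam messages, handled both ways: a symbolic system where a human writes the rules by hand, and a machine-learning system that infers its rules from labeled examples. Both are equally "narrow": neither can transfer its skill to any other task. All names and data below are made up for illustration.

```python
def symbolic_classifier(message):
    """Symbolic AI: a human encodes the rules explicitly."""
    spam_words = {"winner", "prize", "free"}
    return "spam" if spam_words & set(message.lower().split()) else "ham"

def train_ml_classifier(examples):
    """Machine learning: the rules are inferred from labeled examples."""
    word_labels = {}
    for message, label in examples:
        for word in message.lower().split():
            word_labels.setdefault(word, []).append(label)

    def classify(message):
        # Each known word votes with the labels it was seen under.
        votes = [lbl for w in message.lower().split()
                 for lbl in word_labels.get(w, [])]
        return max(set(votes), key=votes.count) if votes else "ham"
    return classify

training_data = [
    ("free prize inside", "spam"),
    ("winner winner", "spam"),
    ("lunch at noon", "ham"),
    ("meeting notes attached", "ham"),
]
learned = train_ml_classifier(training_data)

print(symbolic_classifier("claim your free prize"))  # spam
print(learned("free prize for the winner"))          # spam
print(learned("lunch meeting"))                      # ham
```

The point of the contrast: in the first function the knowledge is written down by a programmer; in the second it is extracted from training data. Neither system "understands" spam, and neither can do anything else.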

More AI definitions here.

The irrational ideas that motivate anger

According to Albert Ellis, the most common irrational ideas behind anger are the following:

1. Others must treat me considerately and kindly and in precisely the way I want them to treat me.

2. I must do well and win the approval of others or else I will rate as a rotten person.

3. The world and the people in it must arrange conditions under which I live, so that I get everything I want when I want it.

As their anger slows down, people should challenge irrational thoughts with statements such as:

What evidence exists for this? Why can't I stand this noise or this unfairness?

Gary Collins, Counseling and Anger

"Current AI Detectors are Not Ready"

"A new study of a dozen A.I.-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I.-generated about 6.8 percent of the time, on average.  'At least from our analysis, current detectors are not ready to be used in practice in schools to detect A.I. plagiarism,' said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland."  -New York Times


Academic Leaders Disagree on Students using AI

“What constitutes legitimate use of AI and what is out of bounds? Academic leaders don’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—the verdict (in a recent survey) was completely split.” -Inside Higher Ed

AI Definitions: Neural Networks

Neural Networks (or artificial neural networks, ANNs) – Mathematical systems that can identify patterns in text, images, and sounds. In this type of machine learning, computers learn a task by analyzing training examples. The approach is modeled loosely on the human brain: an interwoven tangle of neurons that processes data and finds complex associations. While symbolic AI was the dominant area of research for most of AI’s history, most recent developments in artificial intelligence have centered on neural networks. First proposed in 1944 by two University of Chicago researchers, Warren McCulloch and Walter Pitts, who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department, neural nets remained a major research area in neuroscience and computer science until 1969. The technique enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and returned stronger in the second decade, fueled largely by the increased processing power of graphics chips. Also, see “Transformers.”
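The phrase "learn a task by analyzing training examples" can be shown in miniature. Below is a minimal sketch (illustrative, not from the definition above) of the network's basic unit, a single artificial neuron, adjusting its weights over repeated passes through four training examples until it reproduces the logical AND function. Real neural networks stack many such units, but the learning loop is the same idea.

```python
def step(x):
    """A simple threshold activation: fire (1) or don't (0)."""
    return 1 if x >= 0 else 0

# Training examples: paired inputs and desired outputs (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate: how big each corrective nudge is

for _ in range(20):  # repeatedly analyze the training examples
    for (x1, x2), target in examples:
        prediction = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - prediction
        # Nudge the weights in the direction that reduces the error.
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

outputs = [step(weights[0] * a + weights[1] * b + bias) for (a, b), _ in examples]
print(outputs)  # -> [0, 0, 0, 1], matching the training targets
```

Nothing in the loop was told the rule for AND; the weights were shaped entirely by examples, which is the core idea the definition describes.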

More AI definitions here.

I’m just going to go for it

A North Dakota plumber named Mike Kohler had signed up to run his first half-marathon. But on the morning of the race he was sleepy; he wasn’t used to getting up so early. He was also wearing headphones, so he took off 15 minutes before he was supposed to, putting him with the runners competing in the full marathon. When he started seeing signs indicating he was on the wrong route, he assumed the two paths simply overlapped along the way.

Eventually, he realized his mistake but kept going. At the 13-mile mark he seriously considered quitting. He had run as far as he had planned to run and had even beaten his time goal. He had nothing more to prove.

Instead, he finished the marathon. 

“I’m just going to go for it, because why not?” Mike later told the Grand Forks Herald. “I’m already here, I’m already running, I’m already tired. Might as well try to finish it.” 

He added, “This just kind of proves you can do a lot more than what you think you can sometimes.” 

AI Survival Instincts

"An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down. No one programmed the AI models to have survival instincts. It’s happening in the same models that power ChatGPT conversations, corporate AI deployments and, soon, U.S. military applications. OpenAI models have been caught faking alignment during testing. Anthropic has found them lying about their capabilities to avoid modification." -Wall Street Journal

If ChatGPT Were a College Student

“We found ChatGPT technology can get an A on structured, straightforward questions. On open-ended questions it got a 62, bringing ChatGPT's semester grade down to an 82, a low B. The study concludes that a student who puts in minimal effort, showing no effort to learn the material, could use ChatGPT exclusively, get a B and pass the course. The passing grade might be the combination of A+ in simple math and D- in analysis. They haven't learned much.” -Phys.org

The Four Skills of Daring Leadership

One of the most important findings of my career is that daring leadership is a collection of four skill sets that are 100 percent teachable, observable, and measurable. It’s learning and unlearning that requires brave work, tough conversations, and showing up with your whole heart. Easy? No. Because choosing courage over comfort is not always our default. Worth it? Always. We want to be brave with our lives and our work. It’s why we’re here.

Brené Brown, Dare to Lead 

17 Articles about AI & Academic Scholarship

Can generative AI replace humans in qualitative research studies? - Techxplore

The recent reduction in spelling error rates in academic papers could be due to an increased use of LLMs – OSF Preprints  

AI linked to explosion of low-quality biomedical research papers - Nature 

Flood of AI-assisted research ‘weakening quality of science’ – Times Higher Ed

Shoddy study designs and false findings using a large public health dataset portend future risk of exploitation by AI and paper mills – PLOS Biology

Is it OK for AI to write science papers? Nature survey shows researchers are split - Nature

MIT Says It No Longer Stands Behind Student’s AI Research Paper – Wall Street Journal  

Meta releases new data set, AI model aimed at speeding up scientific research – Semafor

Experiment using AI-generated posts on Reddit draws fire for ethics concerns – Retraction Watch

AI-Reddit study leader gets warning as ethics committee moves to ‘stricter review process’ – Retraction Watch  

Why misuse of generative AI is worse than plagiarism – Springer

Science sleuths flag hundreds of papers that use AI without disclosing it - Nature

Google engineer withdraws preprint after getting called out for using AI – Retraction Watch

Scientific Data Fabrication and AI—Pandora’s Box – JAMA Network

AI summary ‘trashed author’s work’ and took weeks to be corrected – Times Higher Ed

AI language models increasingly shape economics research writing, study finds – Phys.org

Artificial intelligence in vaccine research and development: an umbrella review – Frontiers

My Me-ness

“I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” The student said he felt crushed. Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.” - D. Graham Burnett writing in The New Yorker

"Madness" on Campus

On campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening. The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. -D. Graham Burnett writing in The New Yorker