AI Definitions: Abstractive Summarization

Abstractive Summarization (ABS) – A natural language processing technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections and producing a subset of sentences taken verbatim from the source. Abstractive summarization is better when the meaning of the text is more important than exactness, while extractive summarization is better when sticking to the original language is critical.
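To make the contrast concrete, here is a minimal extractive-summarization sketch in Python. It uses a simple word-frequency heuristic; the function name and scoring rule are illustrative choices, not from any particular library.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Pick the highest-scoring sentences verbatim from the source text."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Score words by how often they appear anywhere in the text.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        # A sentence's score is the total frequency of its words.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Preserve the original sentence order so the summary reads naturally.
    return " ".join(s for s in sentences if s in top)
```

An abstractive system, by contrast, would write new sentences of its own, which in practice requires a sequence-to-sequence language model rather than a scoring heuristic like this one.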

More AI definitions

The E-Nose

Scientists have been developing and refining a technology called the e-nose—which is exactly what it sounds like. These systems detect and distinguish aromas, sometimes with about 1,000 times as much precision as humans can. Researchers are exploring—or even commercializing—e-nose systems that can scan a person’s breath to detect deadly infections, sniff the air in a building to seek out signs of potential contaminants, or even develop perfumes more quickly and cheaply than before. - Wall Street Journal

Orange Buttons are the Best

An appeal to authority is a false claim that something must be true because an authority on the subject believes it to be true. An expert can be wrong; we need to understand their reasoning or research before we appeal to their findings. In a design meeting you might hear something like this:

“Amazon is a successful website. Amazon has orange buttons. So orange buttons are the best.”

Feel free to switch out ‘Amazon’ and ‘orange buttons’ for anything you want; you get an equally weak argument.

When we counter any logical fallacy, we want to do it as cleanly as possible. In the above example, we only need to point out that many successful websites don’t have orange buttons and many unsuccessful sites do have orange buttons. Then we can move away from the matter entirely unless there is some research or reason available to explain the authority’s decision.

Rob Sutcliffe writing in Prototypr

The intersection of Science & AI in 15 Articles

Unresolved Copyright & AI Questions


In March 2026, the Supreme Court declined to hear Thaler v. Perlmutter.

This leaves in place the D.C. Circuit’s March 2025 ruling.

There was no ruling on the merits. This doesn’t set precedent.

Stephen Thaler listed his AI system as the sole author, disclaimed any human creative contribution, and asked for copyright protection anyway. The DC court said no.

The D.C. Circuit held that copyright law requires a human author, though that requirement does not disqualify works made with AI assistance.

Questions not resolved:

1.         How much human involvement is enough?

The US Copyright Office said in its “Zarya of the Dawn” comic book registration decision that the AI-generated images weren’t protectable, but the human-authored text and the selection and arrangement of text and images were. So far, the Office has said that prompts are not copyrightable—prompting is more like giving instructions to a commissioned artist than actually determining the expressive content of the final image. But what if dozens, even hundreds of prompts are entered? Wouldn’t that involve substantial human effort, iterative refinement, and a creative vision? The Copyright Office says getting different results from the same prompt is proof the user isn’t controlling the expression. The underlying question is this: Is prompting closer to authorship or closer to curation?

2.         Can you prove what you contributed?

If your work incorporates more than a de minimis amount of AI-generated material, the Copyright Office requires a disclosure statement about the AI involvement and a description of your human contribution. This means the creator must keep files, prompts, drafts, notes on what was intended, and layered edits—in case there is a need to prove exactly what the human contribution was. A copyright applicant can avoid this simply by not disclosing the AI use. The system, in effect, rewards silence.

3.         What Happens When Uncopyrightable AI Output Gets Licensed Anyway?

AI-generated materials are already being licensed, bundled, and sold. An example: Someone took a Python library and used an AI coding agent to rewrite it, then changed the project’s license to a more permissive one. The original creator objected, saying the original license still applied.

4.         AI Output Can Absolutely Infringe. So Now What?

The SCOTUS denial also prompted a wave of commentary suggesting that AI-generated works now exist in some kind of copyright-free zone. They don’t. Issues still on the table: Whether AI-generated summaries of news articles are substitutive enough to infringe, and whether AI-generated narrative retellings of novels cross the line from ideas to expression. One judge dismissed claims that AI bullet-point summaries of investigative journalism were substantially similar to the originals. The same judge allowed a lawsuit to proceed because ChatGPT’s summary of a novel might have captured the “overall tone and feel” of the original work.

Bottom line: Millions of people are using AI tools every day without knowing whether what they’re making is protectable, infringing, both, or neither. 

Thaler Is Dead. Now for the AI Copyright Questions That Actually Matter 

Coding in the time of AI

You won't see the code yourself anymore, the robots will write it for you. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work. Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way. If you were one of those crafters—the people who wrote idiomatic code that made that programming language sing—there's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either. -Anil Dash

20 Recent Articles about AI & Academic Scholarship

Research integrity is locked into an arms race with agentic AI slop – LSE  

AI can help with research, but humans must remain accountable say university executives – Times Higher Ed 

Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers – Taylor & Francis

AI tool flags plagiarism in 95% of Ph.D. theses submitted this year at Indian university – Times of India 

How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists

Hallucinated References: Five Excuses for Academic Misconduct – Dorothea Baur

Ministers urged not to allow data mining of academic literature – Research Professional News

Librarian finds ‘preposterous number’ of fake references in paper from Springer Nature journal – Retraction Watch 

AI is inventing academic articles – and scholars are citing them – the Observer  

DataSeer develops AI system to track dataset reuse – Research Information  

Journal Submissions Riddled With AI-Created Fake Citations – Inside Higher Ed

Account for AI in the environmental footprint of scientific publishing – Nature  

Will AI Help or Hinder Scientific Publishing? – Undark

Hey ChatGPT, write me a fictional paper: these LLMs are willing to commit academic fraud. – Nature

Scientists are failing to disclose their use of AI despite journal mandates, finds study – Physics World

AI in the editorial workflow: Journals set the rules, institutions set the habits – Scholarly Futures  

AI is turning research into a scientific monoculture - Nature

What happens when reviewers receive AI feedback in their reviews? – ArXiv

Human versus artificial intelligence: investigating ability of young academics from research and non-research institutions to identify ChatGPT-generated dental research abstracts - Nature 

Fear of stigma blamed as 0.1 per cent of papers declare AI use - Times Higher Ed

AI Definitions: Transhumanism

Transhumanism - A philosophical movement that advocates attempting to unlock human potential through artificial intelligence and science, with the goal of overcoming biological limitations and combating aging and illness to achieve immortality. This might be achieved through humans merging with machines or uploading human consciousness into digital realms. In effect, transhumanism seeks to redefine what it means to be human. In 1957, Julian Huxley summarized the term as “man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.” Critics warn that this effort could erode the very qualities that define humanity, such as empathy, vulnerability, and shared experience, while exacerbating social inequalities.

More AI definitions

What are you willing to give up sleep for?

Our sleep habits both reveal and shape our loves. A decent indicator of what we love is that for which we willingly give up sleep.

My willingness to sacrifice sleep reveals less noble loves. I stay up later than I should, drowsy, collapsed, on the couch, vaguely surfing the internet, watching cute puppy videos. Or I stay up trying to squeeze more activity into the day to pack it with as much productivity as possible. My disordered sleep reveals a disordered love, idols of entertainment or productivity.

My willingness to sacrifice much-needed rest and my prioritizing amusement or work over the basic needs of my body and the people around me reveal that these good things—entertainment and work—have taken a place of ascendancy in my life.

Tish Warren, Liturgy of the Ordinary

25 Webinars this week about AI, Journalism & Media

Mon, Mar 23 - Wikipedia Edit-a-thon: Amplifying Women’s Voices on Financial Independence

What: Participants will edit existing Wikipedia entries and create new articles using a curated worklist of women who helped change laws, contributed new research, created new networks, and ultimately, bolstered economic independence for women. New editors are welcome and will receive an introduction to Wikipedia editing.

Who: Smithsonian curator Rachel Seidman; Ariel Cetrone of Wikimedia DC.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Smithsonian

More Info

 

Mon, Mar 23 - Social Media Marketing Strategy for Small Business

What: You’ll learn how to build a clear, sales-focused social media marketing strategy that actually converts. This is not a theory session; by the end you will have created a practical, written plan you can immediately use in your business.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Center, Kutztown University

More Info

 

Tue, Mar 24 - Branding 101

What: Join us for a collaborative virtual workshop where we'll explore the key elements of effective branding: what you want to be known for, how you want customers to feel when they interact with your business, and how to create consistency across all touchpoints. We'll connect these pieces back to your business goals, so your brand becomes a tool for growth, not just decoration.  

Who: Jordan Hanna Gray, SBDC Advisor.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Virginia Small Business Development Center

More Info

 

Tue, Mar 24 - How journalism collaboratives can stay safe

What: Learn from experts about how to safely practice journalism and prepare for and respond to evolving safety challenges.

Who: Jeff Belzil is the International Women Media Foundation’s security director.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Collaborative Journalism Resource Hub, which is housed at the Center for Cooperative Media

More Info

 

Tue, Mar 24 - Why Real Journalists Are Better Than AI

What: We’ll discuss the growing presence of AI in news copy and why so many publications are turning to machines to do the work that was once done by people. We’ll look at what this has done for the quality of story production.  And we’ll discuss how journalists can stand out in a sea of AI slop, why human journalists are more important than ever, and how to educate your audience and leadership about journalists’ value over AI.

Who: Jonathan Maze, editor-in-chief of Restaurant Business at Informa Connect, and Greg Friese, MS, NRP, digital content strategy leader.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Society of Business Publication Editors

More Info

 

Tue, Mar 24 - How To Automate With AI

What: Walk through the basics of AI-powered automation using Make, with practical examples from my real ministry work. You’ll see how to use AI to handle tasks that take up far too much time. By the end of the session, you will have a clear, practical understanding of how automation works and the confidence to start building simple automations for your own ministry context.

Who: Rob Laughter, who helps lead the creative team at The Summit Church in Raleigh, NC.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: AI for Church Leaders

More Info

 

Wed, Mar 25 - Intellectual Property 101

What: We break down the IP framework (trademarks, patents, trade secrets, and copyrights) that every founder needs to know.

Who: Sima S. Kulkarni, Duane Morris.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Center at Temple University

More Info

 

Wed, Mar 25 - Der Spiegel Crossmedia: Wins, Misses, and Lessons Learned 

What: How Der Spiegel in Germany is reaching younger audiences. We'll have an honest conversation about what worked, what didn't, and what those experiments reveal about serving young audiences.

Who: Aleksandra Janevska, Deputy Lead of Crossmedia Unit, Der Spiegel.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: International News Media Association

More Info

 

Wed, Mar 25 - Creating value for a sustainable future

What: We will explore: Why community connection is a structural advantage driving trust, engagement and long-term viability; How uniquely local utility outperforms commoditized news, particularly in underserved communities; Why reader revenue is a signal as much as a funding source; What sustainable U.S. outlets consistently get right, regardless of model or market

Who: George Adelman, Director and Head of Partnerships, FT Strategies; Angilee Shah, CEO and Editor in Chief, Charlottesville Tomorrow; Cheryl Phillips, Founder, Big Local News at Stanford.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: FT Strategies

More Info

 

Wed, Mar 25 - Fun and Games with Copyright 

What: This workshop will introduce Copyright: the Card Game, a fun and interactive method of covering the basics of copyright and how they apply to faculty, students and the classroom. Participants will learn how the game was developed, and have the opportunity to play.

Who: Paul Bond of SUNY Broome Community College, one of the developers of the game and a librarian in the Southern Tier of New York.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Wed, Mar 25 - AI for Good: Secure, Smart, High-Impact AI for Nonprofits

What: This session will cut through the noise and provide a practical, responsible roadmap for using AI to expand impact while protecting data, reputation, and community relationships.

Who: Robert Friend, Fundraising Specialist at Eventgroove.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Tech for Good

More Info

 

Wed, Mar 25 - Scaling AI Agents: Breaking the Inference Memory Wall Across Compute, Storage and Networking

What: We examine how Supermicro's accelerated computing and all‑flash storage servers, combined with WEKA’s Augmented Memory Grid software, transform inference memory into a scalable, distributed resource.

Who: Allen Liu, Project Manager, Supermicro; Val Bercovici, Chief AI Officer, WEKA; Awanish Verma, Director, Product Management, AMD; Wendell Wenjen, Sr., Director of Marketing, Storage Solutions, Supermicro.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechTarget

More Info

 

Wed, Mar 25 - Crisis Communications: Who Is Telling Your Story?

What: This session explores the fundamentals of effective crisis communications for public safety and government agencies. Participants will learn how to prepare for high-stakes situations, manage messaging during rapidly evolving incidents, and communicate with transparency and professionalism when public attention is at its highest.

When: 1 pm, Eastern

Where: Zoom

Cost: $49

Sponsor: TOC Public Relations

More Info

 

Wed, Mar 25 - AI Impact Hour for Nonprofits

What: In this session, you’ll learn how to: Streamline communication and content creation; Organize information and reduce repetitive tasks; Support fundraising and outreach with beginner-friendly tools.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Wed, Mar 25 - Teaching the Ethics of Advertising

What: We’ll explore an approach to advertising literacy education that takes an ethics- and systems-approach to analyzing digital ads.

Who: Michelle Ciccone, a PhD Candidate in the Department of Communication at the University of Massachusetts Amherst, and a former K-12 technology integration specialist; Cecilia Yuxi Zhou is an assistant professor in the Academy for Educational Development and Innovation at the Education University of Hong Kong.

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Thu, Mar 26 - Detecting AI-Generated Content – Updated Tools and Techniques

What: An updated version of a guide published by Global Investigative Journalism Network in 2025. We will introduce new resources, tools, and investigative methods that journalists can use to identify AI-generated images.

Who: Henk van Ess, a leading expert in open source intelligence and digital verification.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Global Investigative Journalism Network

More Info

 

Thu, Mar 26 - Restoring Trust in Science: Storytelling, AI, and Integrity in Scholarly Publishing

What: This webinar brings together leading voices to examine how trust can be rebuilt across scientific communication and the publication ecosystem. Our expert panelists will explore three critical challenges: storytelling and public engagement; AI in peer review; malfeasance and integrity.

Who: Michele Springer, Deputy Director of Medical Editing at Omnicom Health Medical Communications; Holden Thorp, Editor-in-Chief of Science; Ivan Oransky, MD, Co-founder of Retraction Watch and Executive Director, The Center For Scientific Integrity; Megan Ranney, Dean, Yale School of Public Health; Steve Smith, DPhil, Independent Consultant, STEM Knowledge Partners.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: International Society for Medical Publication Professionals

More Info

 

Thu/Fri, Mar 26/27 - SkillsFest26

What: Topics include: FOIAs, The First Amendment, Algorithms, Pitches, Reporting, Investigation, Ethics, Solutions Journalism, Rural communities, Headlines, Newsroom rights, AP Style, Immigration coverage, Conflicts of Interest, Backgrounding, Copyright, Misinformation, Resilient News teams, Covering Suicide, Design, Criminal justice, Grant Writing, Using AI.

Who: Professional journalists and experts.

When: Thursday, 1 pm, Eastern through Friday, 8:30 pm, Eastern.

Where: Zoom

Cost: Free

Sponsor: Society of Professional Journalists

More Info

 

Thu, Mar 26 - Trump and Higher Ed: The Latest

What: Audience Q&A

Who: Sarah Brown, The Chronicle’s news editor; Rick Seltzer, author of the Daily Briefing newsletter.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Chronicle of Higher Education

More Info

 

Thu, Mar 26 - An Intro to the Retraction Watch Research Accountability Reporting Fellowship

What: The application process, and a brief primer on how to cover issues of scientific integrity at your nearby institutions.

Who: Retraction Watch co-founder Ivan Oransky; Stephanie M. Lee, senior writer at The Chronicle of Higher Education.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsors: Retraction Watch & The Open Notebook

More Info

 

Thu, Mar 26 - The Future of Security-Focused AI

What: A practical session for IT leaders, chief data officers, and anyone responsible for safeguarding public‑sector data.  We’ll break down what modern cloud backup and recovery look like and how security‑focused AI is helping agencies stay ahead of threats and recover faster.

Who: Vishal Chaudhry, Chief Data Officer, Washington State Health Care Authority; Jennifer Franks,  Director, Center for Enhanced Cybersecurity, Government Accountability Office; Jeff Reichard, Vice President, Solution Strategy, Veeam.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: GovLoop

More Info

 

Thu, Mar 26 - Inside Nonprofit Local News: Careers, Pathways, and Possibilities

What: An inside look at how the field works, where it’s growing and the opportunities ahead.

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Journalism Project

More Info

 

Thu, Mar 26 -  Start an AI-Native Business: Informational Session

What: The start of an AI series where we take entrepreneurs step by step through how to create an AI-native business. In this session, we will run through the program information, talk about what makes an AI-native business, and discuss how to construct and integrate AI into each area of your business.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Small Business Development Center, Widener University

More Info

 

Fri, Mar 27 - The Economics of news in 2026

What: This webinar aims to teach news leaders worldwide how to reinvent themselves to best serve the public. The panelists offer their unique perspectives on how the news industry must evolve to thrive in the age of AI.

Who: Experts from the University of Maryland’s Philip Merrill College of Journalism and Robert H. Smith School of Business team up with industry leaders.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Robert H. Smith School of Business at the University of Maryland

More Info

 

Fri, Mar 27 - Copyright Law and Preservation, Conservation and Digitization of Film and Video

What: Our experts will unpack copyright issues affecting conservation, preservation and digitization. Specifically, the panel will review the status of the law and the status of best practices in libraries, archives and museums.

Who: Jillian Borders, Head of Preservation at UCLA Film and Television Archive; Eric Harbeson, Scholarly Communications and Copyright Strategist for Authors Alliance.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Open Copyright Education Advisory Network (OCEAN)

More Info

Intrinsically lovable

No sooner do we believe that God loves us than there is an impulse to believe that he does so, not because he is love, but because we are intrinsically lovable. But then, how magnificently we have repented! So we next offer our own humility to God’s admiration. Surely, he’ll like that. If not that, then our clear-sighted and humble recognition that we still lack humility. Thus, depth beneath depth and subtlety within subtlety, there remains some lingering idea of our own, our very own, attractiveness.

It is easy to acknowledge, but almost impossible to realize for long, that we are mirrors whose brightness, if we are bright, is wholly derived from the sun that shines upon us. Surely we must have a little – however little – native luminosity?

We want to be loved for our cleverness, beauty, generosity, fairness, usefulness. The first hint that anyone is offering us the highest love of all is a terrible shock.

CS Lewis, The Four Loves

AI Literacy

AI literacy does not require waiting for a formal training program. A useful starting point is developing what researchers describe as output skepticism — the habit of asking, for any AI-generated result, whether the system could plausibly have reached that conclusion incorrectly and, if so, what the downstream consequences would be. Effective AI literacy is not about mastering the tool — it is about knowing where the tool ends and your own judgment begins. -JD Supra

Just Saying No isn't Easy

“The capacity of AI is so endless that it can be really hard to just say no and stop whatever the next improvement is that you want. As a perfectionist, that often can result in not knowing when to stop. The next best thing is possible, so, often, you end up spending more time writing the perfect workflow and telling AI what to do." - Jack Downey, Head of Strategy, Operations and Product at Webster Pass Consulting, quoted by CBS News

AI Definitions: Model Context Protocol (MCP)

Model Context Protocol (MCP) - This server-based open standard operates across platforms to facilitate communication between LLMs and tools like AI agents and apps. Developed by Anthropic and embraced by OpenAI, Google, and Microsoft, MCP can make a developer's life easier by simplifying integration and maintenance of compliant data sources and tools, allowing them to focus on higher-level applications. In effect, MCP is an evolution of RAG. This allows an AI model to talk to applications such as Excel or PowerPoint and execute tasks autonomously.
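For a sense of what this looks like in practice: MCP messages are built on JSON-RPC 2.0, so a client asking a server to run a tool sends a small JSON request. The sketch below constructs such a message in Python; the tool name and arguments are hypothetical examples, not from any real server.

```python
import json

# An MCP tool invocation travels as a JSON-RPC 2.0 request.
# "read_spreadsheet" and its arguments are hypothetical, for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_spreadsheet",           # hypothetical tool name
        "arguments": {"file": "budget.xlsx"}  # hypothetical arguments
    },
}
wire_message = json.dumps(request)  # what actually goes over the transport
```

The server replies with a matching JSON-RPC result containing the tool's output, which is then handed back to the model as context for its next step.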

More AI definitions

Humans — not AI — are to blame for deadly Iran school strike

Humans — not AI — are to blame for deadly Iran school strike, sources say. According to former military officials and people familiar with aspects of the bombing campaign in Iran, the thousands of people who gather intelligence and analyze satellite photos to build massive target lists ahead of potential conflicts with foreign adversaries are to blame for the deadly Iran school strike. The error was one that AI would not be likely to make: US officials failed to recognize subtle changes in satellite imagery, while human intelligence analysts missed publicly available information about a school located inside the Revolutionary Guard compound. -Semafor