Close a few doors

We can spend a lot of energy keeping doors open. Time is wasted when we refuse to let go because of the pain of watching a door close and seeing an option disappear. We pay a price to avoid the sting of losing an opportunity. We can avoid overbooking our lives by letting a few things fall off our plates. Cancel projects. Give away ideas to colleagues. Resign from committees. Rethink hobbies. Let a few doors close.

Stephen Goforth

A way to spot genuine love

We don’t keep pets around very long when they protest or fight back against us. The only school to which we send our pets for the development of their minds or spirits is obedience school. Yet it is possible for us to desire that other humans develop a “will of their own”; indeed, it is this desire for the differentiation of the other that is one of the characteristics of genuine love.

In our relationship with pets we seek to foster their dependency. We do not want them to grow up and leave home. We want them to stay put, to lie dependably near the hearth. It is their attachment to us rather than their independence from us that we value in our pets.

This matter of the “love” of pets is of immense import because many, many people are capable of “loving” only pets and incapable of genuinely loving other human beings.

M Scott Peck, The Road Less Traveled

24 articles worth reading about the dangers of AI (beyond security issues)

Assessing the existential risk of AI - MIT Tech Review  

Intelligence analysts confront the reality of deepfakes: AI-generated image of fake Pentagon explosion just an inkling of what’s to come - Space News

The potential dangers of using artificial intelligence as a weapon of war - NPR  

India’s religious AI chatbots are speaking in the voice of god - Rest of World

AI-generated child sex images spawn new nightmare for the web - Washington Post 

Princeton computer science professor says don't panic over 'bullshit generator' ChatGPT - Business Insider  

Developers Created AI to Generate Police Sketches. Experts Are Horrified - Vice

Calm Down. There is No Conscious A.I. - Gizmodo

AI can be racist, sexist and creepy. What should we do about it? - CNN

The case for slowing down AI - Vox 

More than 1,000 tech leaders & researchers call for a six-month moratorium on AI development over “risks to society and humanity.” - New York Times

Claudia offers nude photos for pay but is a fake AI creation - Washington Post

If Wikipedia content is AI-generated, (it could create) a feedback loop of potentially biased information, if left unchecked - Vice   

2024 promises to be the first AI election cycle with artificial intelligence potentially playing a pivotal role at the ballot box - USA TODAY 

Why Hollywood Really Fears Generative AI - Wired

How AI is already changing the 2024 election - Axios

Chatbots have faced criticism for messing up key historical facts, fabricating sources, and citing misinformation about each other - Columbia Journalism Review

OpenAI CEO Sam Altman Says Government Intervention Is 'Crucial' - Entrepreneur

The internet is filled with videos promising AI can make you rich. But there is little evidence to prove it can - Washington Post

What happens when AI becomes so integrated into our daily decision-making that we become dependent on it? - Inside Higher Ed

ChatGPT Is Cutting Non-English Languages Out of the AI Revolution—threatening to amplify existing bias in global commerce and innovation - Wired

Will AI replace coders? - The Guardian

U.S. Grapples With Potential Threats From Chinese AI - Wall Street Journal

Researchers failed to identify one-third of medical journal abstracts as written by AI - bioRxiv

Why AI Will Make Our Children More Lonely - Wall Street Journal 

Love hurts—really!

Heartache can have the same effect as someone spilling hot coffee on us.

Imaging scans show the same parts of the brain light up for physical pain as when you are separated from a loved one or have a broken heart, say researchers at the University of Michigan. They asked 40 people who had recently been through an unwanted romantic breakup that left them feeling rejected to look at a photo of their former partner and think about the relationship. The brain scans taken during this and other similar situations were compared to scans taken when the subjects were given slight physical pain. The similarities in the brain scans suggest a close connection between our minds and our bodies. The painful emotions that come with feeling socially rejected can scar us in more than one way. The sting of heartbreak and rejection can make us physically ill. Our social well-being is a critical part of maintaining a healthy life.

Is there someone you’ve cast aside with a harsh word or a loved one who has had to endure a negative attitude from you? Those actions are not that far removed from physically hurting that person.

Details of the study are in the journal Proceedings of the National Academy of Sciences.

Stephen Goforth

16 Journalism & AI quotes & tools

Beginner’s prompt handbook: ChatGPT for local news publishers - Joe Amditis

How to cover AI – a guide for journalists - The Fix 

Good journalism, in my view, is original and reveals previously unknown or hidden truths. Language models work by predicting the most likely next word in a sequence, based on existing text they’ve been trained on. So they cannot ultimately produce or uncover anything truly new or unexpected in their current form. Harvard’s Nieman Lab 

Machine learning can be deployed to help newsrooms identify and address biases that crop up in their own reporting, across text, photo, video, audio, and social media. The Fix 

A close examination of the work produced by CNET's AI makes it seem less like a sophisticated text generator and more like an automated plagiarism machine, casually pumping out pilfered work that would get a human journalist fired. Futurism 

It matters that the technology can fool regular people into believing there is intelligence or sentience behind it, and we should be writing about the risks and guardrails being built in that context. Harvard’s Nieman Lab 

Non-writing AI tools every journalist should know about. International Center for Journalists 

The "world's first" entirely AI-generated news site is here. It's called NewsGPT, and it seems like an absolutely horrible idea. Futurism

Artificial intelligence tools are now being used to populate so-called content farms, referring to low-quality websites around the world that churn out vast amounts of clickbait articles to optimize advertising revenue, NewsGuard found. NewsGuard

The Artifact news app lets AI rewrite a headline for you if you come across (a clickbait) article. TechCrunch

One area where MidJourney is helpful is food journalism. Need an image of a breakfast bowl with whole grain and blueberries? Just write a prompt. MidJourney is also excellent at building basic templates for object cutaway diagrams. Mike Reilley’s Journalism Toolbox

With tools like ChatGPT in the hands of practically anybody with an internet connection, we're likely to see a lot more journalists having their names attached to completely made-up sources, a troubling side-effect of tech that has an unnerving tendency to falsify sourcing. Futurism

What if an AI could attend, take notes and write short, hallucination-free stories about public meetings? Harvard’s Nieman Lab

Can you design an AI system that attends a city meeting and generates a story? Yeah, I did it. This tech could soon — very soon — be a viable tool to save reporters time by covering hours-long public meetings. The technology could also lead to layoffs in some newsrooms. Harvard’s Nieman Lab

The publisher of Sports Illustrated and other outlets is using artificial intelligence to help produce articles and pitch journalists potential topics to follow. Wall Street Journal 

The owners of Sports Illustrated and Men’s Journal promised to be virtuous with AI. Then they bungled their very first AI story — and issued huge corrections when we caught them. Futurism

Hearty Laughter

A good belly laugh has a rallying effect that no chuckle can match. A British study in 2011 showed that, like sex and exercise, the physical effort of uncontrollable laughter makes our brains release chemicals called endorphins, which relax us and relieve pain. It is “the emptying of the lungs that causes” the feel-good effect, not just the thought of something funny, evolutionary psychologist Robin Dunbar tells BBCNews.com.

He and his colleagues at Oxford University asked volunteers to watch either a comedy or a documentary, and then applied painful levels of cold or pressure to their arms. The volunteers who had laughed hard during their videos could withstand 10 percent more pain than those who’d only giggled or who hadn’t been amused at all.

The Week Magazine

10 Free Media Webinars in the next 10 Days: social media, AI, journalism, media law, photography & more

Tue, June 20 – Social Media 102

What: Learn a few advanced social media tips and tricks, elevate your social media presence through micro strategies and activate your advocates. Join us to learn how to: Use social media to connect with constituents. Monitor conversations to stay ahead of the curve. Get people to advocate on your behalf. Navigate social media advertising and understand when to use it.

Who: Firespring Director of Nonprofit Solutions Kiersten Hill

When: 2 pm, Central

Where: Zoom

Cost: Free

Sponsor: Firespring

More Info

 

Tue, June 20 - AI research: An anthropological lens

What: This session will offer several discussion points to comprehend the gains of an anthropological perspective in unpacking AI in educational environments.

Who: Dr Nimmi Rangaswamy, professor at the Kohli Centre on Intelligent Systems, Indian Institute of Information Technology, IIIT, Hyderabad

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Tue, June 20 - Online workshop for local journalists and Muslim community group

What: The workshop is designed to help both local journalists and Muslim organisations to share and learn about best practice when it comes to reporting on stories involving Muslims and Islam. It will facilitate discussions between local journalists from across the UK with local Muslim community groups to explore better ways of working together to ensure balanced and fair reporting in the local media.

Who: Nadia Haq, Post-Doctoral Fellowship Researcher, School of Journalism, Media and Culture at Cardiff University.

When: 4 pm, Central

Where: Zoom

Cost: Free

Sponsor: School of Journalism, Media and Culture at Cardiff University and the Centre for Media Monitoring

More Info

 

Wed, June 21 - Escaping toxic newsroom spaces and online hate

Who: Dhanya Rajendran, Editor-in-Chief, The News Minute.

When: 8 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Reuters Institute

More Info

 

Wed, June 21 - Data in Action: How Your Agency Can Put Information To Work

What: Explore how employees can harness the power of data securely and efficiently to make more effective pitches. 

Who: Marcus Thornton, Deputy Chief Data Officer, Commonwealth of Virginia; Ian Lee, High Performance Computing Security Architect, Lawrence Livermore National Laboratory; Evan Albert, Director of Measurement and Data Analytics, Department of Veterans Affairs and others.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: GovLoop

More Info

 

Thu, June 22 - Strategic Innovation: How Do I Plan When I Don't Know What's Coming?

What: Participants will walk away with actionable frameworks to help assess new opportunities, allowing you to prioritize and accelerate innovation in your own organization.

Who: Linton Myers, Director of Innovation and Incubation at Blackbaud with Kelley Hecht, Team Lead of Nonprofit Industry Advisors at AWS.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Blackbaud (a software provider focused on powering social impact)

More Info

 

Mon, June 26 - Fuel Your Funding with Data-Driven Program Evaluation Reporting

What: This workshop will help you unlock and leverage the power of your program data, covering the steps to consolidate, analyze, and visualize your program information to create data-driven messaging that will fuel more program funding from grants, partners, and major gifts donors.

Who: Sarah Merion, Impact Aligned

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: The Nonprofit Learning Lab

More Info

 

Tue, June 27 - Intellectual Property & Contract Considerations for PR Firms Using Generative AI

What: In this session, attorneys will cover how these new technologies—built on machine learning algorithms—could fundamentally change the communications and marketing industry and share best practices for considering their usage as business tools.   

Who: Michael C. Lasky, Chair, Public Relations Law and Partner/Co-Chair, Litigation + Dispute Resolution, Davis+Gilbert; Samantha Rothaus, Partner, Advertising + Marketing, Davis+Gilbert; Andrew Richman, Associate, Advertising + Marketing, Davis+Gilbert LLP  

When: 4 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Institutes for Public Relations

More Info

 

Tue, June 27 - AI and Phishing: What’s the Risk to Your Organization?

What: The panel will discuss the advances in chatbot technology and how organizations must adapt to avoid falling victim to this new wave of phishing attacks. Key Takeaways: Sorting the fact from the fiction: how can AI be used in phishing? Real-world phishing statistics: can attacks really be attributed to AI? The defenses in place today: are they enough? What can organizations do to protect themselves?

Who: James Dyer, Cyber Intelligence Analyst, Egress; Ernie Castellanos, Cybersecurity Manager, San Ysidro Health; Duncan MacRae, Editor in Chief, techForge Media; Samuel Ojeme, Director of Product Management, Mastercard

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Tech Forge

More Info

Wed, June 28 - Beyond Snapshots: Photo Skills For Beginners 

What: Basic multimedia techniques for journalists looking to expand their skillset. Topics will include basic elements of photography, best practices for photojournalism and beginner-level editing. The event will end with a question-and-answer segment.

Who: Freelance Community board members Solomon O. Smith and Chris Belcher

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Society of Professional Journalists

More Info

8 good quotes about students cheating with AI   

Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate? Chronicle of Higher Ed 

What about students cheating by using ChatGPT instead of doing their own writing? The thing about technology is that it is interfering with the very weak proxies we have of measuring student learning, namely homework and tests. (Generative AI) is just another reminder that it’s actually really hard to know how much someone has learned something, and especially if we’re not talking to them directly but relying on some scaled up automated or nearly automated system to measure it for us. MathBabe Cathy O’Neil

Sometimes, though, professors who felt they had pretty strong evidence of AI usage were met with excuses, avoidance, or denial. Bridget Robinson-Riegler, a psychology professor at Augsburg University, in Minnesota, caught some obvious cheating (one student forgot to take out a reference ChatGPT had made to itself) and gave those students zeros. But she also found herself having to give passing grades to others even though she was pretty sure their work had been generated by AI (the writings were almost identical to each other). Chronicle of Higher Ed 

As professors of educational psychology and educational technology, we’ve found that the main reason students cheat is their academic motivation. The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not on the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat. The Conversation

Lorie Paldino, an assistant professor of English and digital communications at the University of Saint Mary, in Leavenworth, Kan., described how she asked one student, who had submitted an argument-based research essay, to bring to her the printed and annotated articles they used for research, along with the bibliography, outline, and other supporting work. Paldino then explained to the student why the essay fell short: It was formulaic, inaccurate, and lacked necessary detail. The professor concluded with showing the student the Turnitin results and the student admitted to using AI. Chronicle of Higher Ed 

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade. The Conversation

A common finding (from our survey): Professors realized they needed to get on top of the issue more quickly. It wasn’t enough to wait until problems arose, some wrote, or to simply add an AI policy to their syllabus. They had to talk through scenarios with their students. Chronicle of Higher Ed 

Matthew Swagler, an assistant professor of history at Connecticut College, had instituted a policy that students could use a large language model for assistance, but only if they cited its usage. But that wasn’t sufficient to prevent misuse, he realized, nor prevent confusion among students about what was acceptable. He initiated a class discussion, which was beneficial: “It became clear that the line between which AI is acceptable and which is not is very blurry, because AI is being integrated into so many apps and programs we use.”  Chronicle of Higher Ed

Thoughtful discourse on college campuses

The capacity to entertain different views is vital not only on a college campus but also in a pluralistic and democratic society. With shouting matches replacing thoughtful debate everywhere, from the halls of Congress to school-board meetings, a college campus might be the last, best place where students can learn to converse, cooperate, and coexist with people who see the world differently. 

The University of Chicago famously enshrined this principle in a 2014 report by a faculty committee charged with articulating the university’s commitment to uninhibited debate. “It is not the proper role of the university,” the Chicago Principles read, “to attempt to shield individuals from ideas and opinions they find unwelcome, disagreeable, or even deeply offensive.” 

Daniel Diermeier writing in the Chronicle of Higher Ed

Struggling for Knowledge

According to a 1995 study, a sample of Japanese eighth graders spent 44 percent of their class time inventing, thinking, and actively struggling with underlying concepts. The study’s sample of American students, on the other hand, spent less than one percent of their time in that state.

 “The Japanese want their kids to struggle,” said Jim Stigler, the UCLA professor who oversaw the study and who co-wrote The Teaching Gap with James Hiebert. “Sometimes the (Japanese) teacher will purposely give the wrong answer so the kids can grapple with the theory. American teachers, though, worked like waiters. Whenever there was a struggle, they wanted to move past it, make sure the class kept gliding along. But you don't learn by gliding.”

Daniel Coyle, The Talent Code

30 Great Quotes about AI & Education

ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content. The tool seems more beneficial for those who already have a lot of experience writing–not those learning how to develop ideas, organize thinking, support propositions with evidence, conduct independent research, and so on. Critical AI

The question isn’t “How will we get around this?” but rather “Is this still worth doing?” The Atlantic

The reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

If you’re a college student preparing for life in an A.I. world, you need to ask yourself: Which classes will give me the skills that machines will not replicate, making me more distinctly human? New York Times 

The student who is using it because they lack the expertise is exactly the student who is not ready to assess what it’s doing critically. Chronicle of Higher Ed 

It used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content. Inside Higher Ed

Don’t fixate on how much evidence you have but on how much evidence will persuade your intended audience. ChatGPT distills everything on the internet through its filter and dumps it on the reader; your flawed and beautiful mind, by contrast, makes its mark on your subject by choosing the right evidence, not all the evidence. Chronicle of Higher Ed 

The more effective, and increasingly popular, strategy is to tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Then rewrite them yourself to make them flow better. Chronicle of Higher Ed

A.I. will force us humans to double down on those talents and skills that only humans possess. The most important thing about A.I. may be that it shows us what it can’t do, and so reveals who we are and what we have to offer. New York Times

Even if detection software gets better at detecting AI-generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a one or 2% rate of false positives will negatively impact dozens or hundreds of innocent students.” Washington Post

Ideas are more important than how they are written. So, I use ChatGPT to help me organize my ideas better and make them sound more professional. The Tech Insider

A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. New York Times 

We surpass the AI by standing on its shoulders. You need to ask, “How is it possibly incomplete?” Inside Higher Ed

Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology. Inside Higher Ed

These tools can function like personal assistants: Ask ChatGPT to create a study schedule, simplify a complex idea, or suggest topics for a research paper, and it can do that. That could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts. Chronicle of Higher Ed

If the data set of writing on which the writing tool is trained reflects societal prejudices, then the essays it produces will likely reproduce those views. Similarly, if the training sets underrepresent the views of marginalized populations, then the essays they produce may omit those views as well. Inside Higher Ed

Students may be more likely to complete an assignment without automated assistance if they’ve gotten started through in-class writing. Critical AI

Rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

“I think we should just get used to the fact that we won’t be able to reliably tell if a document is either written by AI — or partially written by AI, or edited by AI — or by humans,” computer science professor Soheil Feizi said. Washington Post

(A professor) plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses. New York Times

ChatGPT can play the role of a debate opponent and generate counterarguments to a student’s positions. By exposing students to an endless supply of opposing viewpoints, chatbots could help them look for weak points in their own thinking. MIT Tech Review

Assign reflection to help students understand their own thought processes and motivations for using these tools, as well as the impact AI has on their learning and writing. Inside Higher Ed 

Discuss students’ potentially diverse motivations for using ChatGPT or other generative AI software. Do they arise from stress about the writing and research process? Time management on big projects? Competition with other students? Experimentation and curiosity about using AI? Grade and/or other pressures and/or burnout? Invite your students to have an honest discussion about these and related questions. Cultivate an environment in your course in which students will feel comfortable approaching you if they need more direct support from you, their peers, or a campus resource to successfully complete an assignment. Barnard College 

We will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question. To teach them how to do that, we don’t need to hastily construct a new field called “critical AI studies.” The intellectual resources students need are already present in the history and philosophy of science courses, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Chronicle of Higher Ed

We should be telling our undergraduates that good writing isn’t just about subject-verb agreement or avoiding grammatical errors—not even good academic writing. Good writing reminds us of our humanity, the humanity of others and all the ugly, beautiful ways in which we exist in the world. Inside Higher Ed 

Rather than trying to stop the tools and, for instance, telling students not to use them, in my class I’m telling students to embrace them – but I expect their quality of work to be that much better now they have the help of these tools. Ultimately, by the end of the semester, I'm expecting the students to turn in assignments that are substantially more creative and interesting than the ones last year’s students or previous generations of students could have created. World Economic Forum

Training ourselves and our students to work with AI doesn’t require inviting AI to every conversation we have. In fact, I believe it’s essential that we don’t.  Inside Higher Ed

If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself.  The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.” Chronicle of Higher Ed

How might chatting with AI systems affect vulnerable students, including those with depression, anxiety, and other mental-health challenges? Chronicle of Higher Ed 

Are we going to fill the time saved by AI with other low-value tasks, or will it free us to be more disruptive in our thinking and doing? I have some unrealistically high hopes of what AI can deliver. I want low-engagement tasks to take up less of my working day, allowing me to do more of what I need to do to thrive (thinking, writing, discussing science with colleagues). Nature

Let Kids Struggle

When children aren’t given the space to struggle through things on their own, they don’t learn to problem solve very well. They don’t learn to be confident in their own abilities, and it can affect their self-esteem. The other problem with never having to struggle is that you never experience failure and can develop an overwhelming fear of failure and of disappointing others. Both the low self-confidence and the fear of failure can lead to depression or anxiety.

I (am not) suggesting that grown kids should never call their parents. The devil is in the details of the conversation. If they call with a problem or a decision to be made, do we tell them what to do? Or do we listen thoughtfully, ask some questions based on our own sense of the situation, then say, “OK. So how do you think you’re going to handle that?”

Knowing what could unfold for our kids when they’re out of our sight can make us parents feel like we’re in straitjackets. What else are we supposed to do? If we’re not there for our kids when they are away from home and bewildered, confused, frightened, or hurting, then who will be?

Here’s the point—and this is so much more important than I realized until rather recently when the data started coming in: The research shows that figuring things out for themselves is a critical element of people’s mental health. Your kids have to be there for themselves. That’s a harder truth to swallow when your kid is in the midst of a problem or worse, a crisis, but taking the long view, it’s the best medicine for them.

Julie Lythcott-Haims, How to Raise an Adult

17 articles about AI & Academic Scholarship

Scientific authorship in the time of ChatGPT - Chemistry

AI could rescue scientific papers from the curse of jargon – Free Think

Science journals ban listing of ChatGPT as co-author on papers – The Guardian

ChatGPT listed as author on research papers: many scientists disapprove – Nature (subscription req)

Abstracts written by ChatGPT fool scientists – Nature (subscription req)

The World Association of Medical Editors has created guidelines for the use of ChatGPT and other chatbots - Medscape (sub req)  

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation

It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing – Chronicle of Higher Ed 

As scientists explore AI-written text, journals hammer out policies – Science

AI writing tools could hand scientists the ‘gift of time’ – Nature

ChatGPT Is Everywhere Love it or hate it, academics can’t ignore the already pervasive technology– Chronicle of Higher Ed

Academic Publishers Are Missing the Point on ChatGPT – Scholarly Kitchen

AI Is Impacting Education, but the Best Is Yet to Come – Inside Higher Ed 

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot – The Guardian

How to Cite ChatGPT – APA Style

Researchers claim to have developed a tool capable of detecting scientific text generated by ChatGPT with 99% accuracy – University of Kansas

ChatGPT: five priorities for research – The Journal Nature

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

14 quotes worth reading about AI use in academic papers                       

13 Quotes worth reading about AI’s impact on College Administrators & Faculty