Hearty Laughter

A good belly laugh has a rallying effect that no chuckle can match. A British study in 2011 showed that the physical effort of uncontrollable laughter, like that of sex and exercise, makes our brains release chemicals called endorphins, which relax us and relieve pain. It is “the emptying of the lungs that causes” the feel-good effect, not just the thought of something funny, evolutionary psychologist Robin Dunbar tells BBC News.

He and his colleagues at Oxford University asked volunteers to watch either a comedy or a documentary, and then applied painful levels of cold or pressure to their arms. The volunteers who had laughed hard during their videos could withstand 10 percent more pain than those who’d only giggled or who hadn’t been amused at all.

The Week Magazine

10 Free Media Webinars in the next 10 Days: social media, AI, journalism, media law, photography & more

Tue, June 20 – Social Media 102

What: Learn a few advanced social media tips and tricks, elevate your social media presence through micro strategies and activate your advocates. Join us to learn how to: Use social media to connect with constituents. Monitor conversations to stay ahead of the curve. Get people to advocate on your behalf. Navigate social media advertising and understand when to use it.

Who: Firespring Director of Nonprofit Solutions Kiersten Hill

When: 2 pm, Central

Where: Zoom

Cost: Free

Sponsor: Firespring

More Info


Tue, June 20 - AI research: An anthropological lens

What: This session will offer several discussion points on what an anthropological perspective contributes to unpacking AI in educational environments.

Who: Dr. Nimmi Rangaswamy, professor at the Kohli Centre on Intelligent Systems, International Institute of Information Technology (IIIT), Hyderabad

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info


Tue, June 20 - Online workshop for local journalists and Muslim community groups

What: The workshop is designed to help both local journalists and Muslim organisations to share and learn about best practice when it comes to reporting on stories involving Muslims and Islam. It will facilitate discussions between local journalists from across the UK with local Muslim community groups to explore better ways of working together to ensure balanced and fair reporting in the local media.

Who: Nadia Haq, Post-Doctoral Fellowship Researcher, School of Journalism, Media and Culture at Cardiff University.

When: 4 pm, Central

Where: Zoom

Cost: Free

Sponsor: School of Journalism, Media and Culture at Cardiff University and the Centre for Media Monitoring

More Info


Wed, June 21 - Escaping toxic newsroom spaces and online hate

Who: Dhanya Rajendran, Editor-in-Chief, The News Minute.

When: 8 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Reuters Institute

More Info


Wed, June 21 - Data in Action: How Your Agency Can Put Information To Work

What: Explore how employees can harness the power of data securely and efficiently to make more effective pitches. 

Who: Marcus Thornton, Deputy Chief Data Officer, Commonwealth of Virginia; Ian Lee, High Performance Computing Security Architect, Lawrence Livermore National Laboratory; Evan Albert, Director of Measurement and Data Analytics, Department of Veterans Affairs; and others.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: GovLoop

More Info


Thu, June 22 - Strategic Innovation: How Do I Plan When I Don't Know What's Coming?

What: Participants will walk away with actionable frameworks to help assess new opportunities, allowing them to prioritize and accelerate innovation in their own organizations.

Who: Linton Myers, Director of Innovation and Incubation at Blackbaud with Kelley Hecht, Team Lead of Nonprofit Industry Advisors at AWS.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Blackbaud (a software provider focused on powering social impact)

More Info


Mon, June 26 - Fuel Your Funding with Data-Driven Program Evaluation Reporting

What: This workshop will help you unlock and leverage the power of your program data. You will learn the steps to consolidate, analyze, and visualize your program information to create data-driven messaging that will fuel more program funding from grants, partners, and major gift donors.

Who: Sarah Merion, Impact Aligned

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: The Nonprofit Learning Lab

More Info


Tue, June 27 - Intellectual Property & Contract Considerations for PR Firms Using Generative AI

What: In this session, attorneys will cover how these new technologies—built on machine learning algorithms—could fundamentally change the communications and marketing industry and share best practices for considering their usage as business tools.   

Who: Michael C. Lasky, Chair, Public Relations Law and Partner/Co-Chair, Litigation + Dispute Resolution, Davis+Gilbert; Samantha Rothaus, Partner, Advertising + Marketing, Davis+Gilbert; Andrew Richman, Associate, Advertising + Marketing, Davis+Gilbert LLP  

When: 4 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Institute for Public Relations

More Info


Tue, June 27 - AI and Phishing: What’s the Risk to Your Organization?

What: The panel will discuss the advances in chatbot technology and how organizations must adapt to avoid falling victim to this new wave of phishing attacks. Key Takeaways: Sorting the fact from the fiction: how can AI be used in phishing? Real-world phishing statistics: can attacks really be attributed to AI? The defenses in place today: are they enough? What can organizations do to protect themselves?

Who: James Dyer, Cyber Intelligence Analyst, Egress; Ernie Castellanos, Cybersecurity Manager, San Ysidro Health; Duncan MacRae, Editor in Chief, techForge Media; Samuel Ojeme, Director of Product Management, Mastercard

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Tech Forge

More Info

Wed, June 28 - Beyond Snapshots: Photo Skills For Beginners 

What: Basic multimedia techniques for journalists looking to expand their skillset. Topics will include basic elements of photography, best practices for photojournalism and beginner-level editing. The event will end with a question-and-answer segment.

Who: Freelance Community board members Solomon O. Smith and Chris Belcher

When: 7 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Society of Professional Journalists

More Info

8 good quotes about students cheating with AI   

Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate? Chronicle of Higher Ed 

What about students cheating by using ChatGPT instead of doing their own writing? The thing about technology is that it is interfering with the very weak proxies we have of measuring student learning, namely homework and tests. (Generative AI) is just another reminder that it’s actually really hard to know how much someone has learned something, and especially if we’re not talking to them directly but relying on some scaled up automated or nearly automated system to measure it for us. MathBabe Cathy O’Neil

Sometimes, though, professors who felt they had pretty strong evidence of AI usage were met with excuses, avoidance, or denial. Bridget Robinson-Riegler, a psychology professor at Augsburg University, in Minnesota, caught some obvious cheating (one student forgot to take out a reference ChatGPT had made to itself) and gave those students zeros. But she also found herself having to give passing grades to others even though she was pretty sure their work had been generated by AI (the writings were almost identical to each other). Chronicle of Higher Ed 

As professors of educational psychology and educational technology, we’ve found that the main reason students cheat is their academic motivation. The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not to the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat. The Conversation

Lorie Paldino, an assistant professor of English and digital communications at the University of Saint Mary, in Leavenworth, Kan., described how she asked one student, who had submitted an argument-based research essay, to bring to her the printed and annotated articles they used for research, along with the bibliography, outline, and other supporting work. Paldino then explained to the student why the essay fell short: It was formulaic, inaccurate, and lacked necessary detail. The professor concluded by showing the student the Turnitin results, and the student admitted to using AI. Chronicle of Higher Ed

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade. The Conversation

A common finding (from our survey): Professors realized they needed to get on top of the issue more quickly. It wasn’t enough to wait until problems arose, some wrote, or to simply add an AI policy to their syllabus. They had to talk through scenarios with their students. Chronicle of Higher Ed 

Matthew Swagler, an assistant professor of history at Connecticut College, had instituted a policy that students could use a large language model for assistance, but only if they cited its usage. But that wasn’t sufficient to prevent misuse, he realized, nor prevent confusion among students about what was acceptable. He initiated a class discussion, which was beneficial: “It became clear that the line between which AI is acceptable and which is not is very blurry, because AI is being integrated into so many apps and programs we use.”  Chronicle of Higher Ed

Thoughtful discourse on college campuses

The capacity to entertain different views is vital not only on a college campus but also in a pluralistic and democratic society. With shouting matches replacing thoughtful debate everywhere, from the halls of Congress to school-board meetings, a college campus might be the last, best place where students can learn to converse, cooperate, and coexist with people who see the world differently. 

The University of Chicago famously enshrined this principle in a 2014 report by a faculty committee charged with articulating the university’s commitment to uninhibited debate. “It is not the proper role of the university,” the Chicago Principles read, “to attempt to shield individuals from ideas and opinions they find unwelcome, disagreeable, or even deeply offensive.” 

Daniel Diermeier writing in the Chronicle of Higher Ed

Struggling for Knowledge

According to a 1995 study, a sample of Japanese eighth graders spent 44 percent of their class time inventing, thinking, and actively struggling with underlying concepts. The study’s sample of American students, on the other hand, spent less than one percent of their time in that state.

 “The Japanese want their kids to struggle,” said Jim Stigler, the UCLA professor who oversaw the study and who co-wrote The Teaching Gap with James Hiebert. “Sometimes the (Japanese) teacher will purposely give the wrong answer so the kids can grapple with the theory. American teachers, though, worked like waiters. Whenever there was a struggle, they wanted to move past it, make sure the class kept gliding along. But you don't learn by gliding.”

Daniel Coyle, The Talent Code

30 Great Quotes about AI & Education

ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content. The tool seems more beneficial for those who already have a lot of experience writing–not those learning how to develop ideas, organize thinking, support propositions with evidence, conduct independent research, and so on. Critical AI

The question isn’t “How will we get around this?” but rather “Is this still worth doing?” The Atlantic

The reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

If you’re a college student preparing for life in an A.I. world, you need to ask yourself: Which classes will give me the skills that machines will not replicate, making me more distinctly human? New York Times 

The student who is using it because they lack the expertise is exactly the student who is not ready to assess what it’s doing critically. Chronicle of Higher Ed 

It used to be about mastery of content. Now, students need to understand content, but it’s much more about mastery of the interpretation and utilization of the content. Inside Higher Ed

Don’t fixate on how much evidence you have but on how much evidence will persuade your intended audience. ChatGPT distills everything on the internet through its filter and dumps it on the reader; your flawed and beautiful mind, by contrast, makes its mark on your subject by choosing the right evidence, not all the evidence. Chronicle of Higher Ed 

The more effective, and increasingly popular, strategy is to tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Then rewrite them yourself to make them flow better. Chronicle of Higher Ed

A.I. will force us humans to double down on those talents and skills that only humans possess. The most important thing about A.I. may be that it shows us what it can’t do, and so reveals who we are and what we have to offer. New York Times

Even if detection software gets better at detecting AI-generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a 1 or 2 percent rate of false positives will negatively impact dozens or hundreds of innocent students.” Washington Post

Ideas are more important than how they are written. So, I use ChatGPT to help me organize my ideas better and make them sound more professional. The Tech Insider

A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. New York Times 

We surpass the AI by standing on its shoulders. You need to ask, ‘How is it possibly incomplete?’ Inside Higher Ed

Our students are not John Henry, and AI is not a steam-powered drilling machine that will replace them. We don’t need to exhaust ourselves trying to surpass technology. Inside Higher Ed

These tools can function like personal assistants: Ask ChatGPT to create a study schedule, simplify a complex idea, or suggest topics for a research paper, and it can do that. That could be a boon for students who have trouble managing their time, processing information, or ordering their thoughts. Chronicle of Higher Ed

If the data set of writing on which the writing tool is trained reflects societal prejudices, then the essays it produces will likely reproduce those views. Similarly, if the training sets underrepresent the views of marginalized populations, then the essays they produce may omit those views as well. Inside Higher Ed

Students may be more likely to complete an assignment without automated assistance if they’ve gotten started through in-class writing. Critical AI

Rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Chronicle of Higher Ed

“I think we should just get used to the fact that we won’t be able to reliably tell if a document is either written by AI — or partially written by AI, or edited by AI — or by humans,” computer science professor Soheil Feizi said. Washington Post

(A professor) plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses. New York Times

ChatGPT can play the role of a debate opponent and generate counterarguments to a student’s positions. By exposing students to an endless supply of opposing viewpoints, chatbots could help them look for weak points in their own thinking. MIT Tech Review

Assign reflection to help students understand their own thought processes and motivations for using these tools, as well as the impact AI has on their learning and writing. Inside Higher Ed 

Discuss students’ potentially diverse motivations for using ChatGPT or other generative AI software. Do they arise from stress about the writing and research process? Time management on big projects? Competition with other students? Experimentation and curiosity about using AI? Grade and/or other pressures and/or burnout? Invite your students to have an honest discussion about these and related questions. Cultivate an environment in your course in which students will feel comfortable approaching you if they need more direct support from you, their peers, or a campus resource to successfully complete an assignment. Barnard College 

We will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question. To teach them how to do that, we don’t need to hastily construct a new field called “critical AI studies.” The intellectual resources students need are already present in the history and philosophy of science courses, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Chronicle of Higher Ed

We should be telling our undergraduates that good writing isn’t just about subject-verb agreement or avoiding grammatical errors—not even good academic writing. Good writing reminds us of our humanity, the humanity of others and all the ugly, beautiful ways in which we exist in the world. Inside Higher Ed 

Rather than trying to stop the tools and, for instance, telling students not to use them, in my class I’m telling students to embrace them – but I expect their quality of work to be that much better now they have the help of these tools. Ultimately, by the end of the semester, I'm expecting the students to turn in assignments that are substantially more creative and interesting than the ones last year’s students or previous generations of students could have created. World Economic Forum

Training ourselves and our students to work with AI doesn’t require inviting AI to every conversation we have. In fact, I believe it’s essential that we don’t.  Inside Higher Ed

If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself.  The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.” Chronicle of Higher Ed

How might chatting with AI systems affect vulnerable students, including those with depression, anxiety, and other mental-health challenges? Chronicle of Higher Ed 

Are we going to fill the time saved by AI with other low-value tasks, or will it free us to be more disruptive in our thinking and doing? I have some unrealistically high hopes of what AI can deliver. I want low-engagement tasks to take up less of my working day, allowing me to do more of what I need to do to thrive (thinking, writing, discussing science with colleagues). Nature

Let Kids Struggle

When children aren’t given the space to struggle through things on their own, they don’t learn to problem solve very well. They don’t learn to be confident in their own abilities, and it can affect their self-esteem. The other problem with never having to struggle is that you never experience failure and can develop an overwhelming fear of failure and of disappointing others. Both the low self-confidence and the fear of failure can lead to depression or anxiety.

I (am not) suggesting that grown kids should never call their parents. The devil is in the details of the conversation. If they call with a problem or a decision to be made, do we tell them what to do? Or do we listen thoughtfully, ask some questions based on our own sense of the situation, then say, “OK. So how do you think you’re going to handle that?”

Knowing what could unfold for our kids when they’re out of our sight can make us parents feel like we’re in straitjackets. What else are we supposed to do? If we’re not there for our kids when they are away from home and bewildered, confused, frightened, or hurting, then who will be?

Here’s the point—and this is so much more important than I realized until rather recently when the data started coming in: The research shows that figuring things out for themselves is a critical element of people’s mental health. Your kids have to be there for themselves. That’s a harder truth to swallow when your kid is in the midst of a problem or, worse, a crisis, but taking the long view, it’s the best medicine for them.

Julie Lythcott-Haims, How to Raise an Adult

17 articles about AI & Academic Scholarship

Scientific authorship in the time of ChatGPT - Chemistry

AI could rescue scientific papers from the curse of jargon – Freethink

Science journals ban listing of ChatGPT as co-author on papers – The Guardian

ChatGPT listed as author on research papers: many scientists disapprove – Nature (subscription req)

Abstracts written by ChatGPT fool scientists – Nature (subscription req)

The World Association of Medical Editors has created guidelines for the use of ChatGPT and other chatbots - Medscape (sub req)  

ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it – The Conversation

It’s Not Just Our Students — ChatGPT Is Coming for Faculty Writing – Chronicle of Higher Ed 

As scientists explore AI-written text, journals hammer out policies – Science

AI writing tools could hand scientists the ‘gift of time’ – Nature

ChatGPT Is Everywhere: Love it or hate it, academics can’t ignore the already pervasive technology – Chronicle of Higher Ed

Academic Publishers Are Missing the Point on ChatGPT – Scholarly Kitchen

AI Is Impacting Education, but the Best Is Yet to Come – Inside Higher Ed 

AI makes plagiarism harder to detect, argue academics – in paper written by chatbot – The Guardian

How to Cite ChatGPT – APA Style

Researchers claim to have developed tool capable of detecting scientific text generated by ChatGPT with 99% accuracy – University of Kansas

ChatGPT: five priorities for research – Nature

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

14 quotes worth reading about AI use in academic papers                       

13 Quotes worth reading about AI’s impact on College Administrators & Faculty

17 articles about AI & Academic Scholarship            

13 Quotes worth reading about AI’s impact on College Administrators & Faculty

What about us humble professors? Those of us with tenure have nothing to worry about. Taxpayers and donors will keep funding us no matter how useless we become. If you don’t have tenure, students will keep coming and your job will go on — unless you’re at a mediocre private college with a small endowment. Chronicle of Higher Ed

Colleges need to learn how to rely on technology. While much of the discussion has focused on what generative AI means for teaching, learning, and research, its immediate impact will likely be felt on functions outside of the academic core. Chronicle of Higher Ed

Most colleges accept most students who apply using a selection process that is routine and predictable. AI could be trained to make decisions about who gets accepted — or at least make the first cut of applicants. Yes, colleges will still need humans for recruiting, but even there, AI is increasingly capable of finding and marketing to prospective students. Chronicle of Higher Ed

Colleges have already started to deploy AI-powered chatbots to answer students’ everyday questions and help them show up for classes. Saint Louis University, for instance, added smart devices to dorm rooms that have been programmed to answer more than 600 questions from “What time does the library close tonight?” to “Where is the registrar’s office?” The next iteration of these chatbots is to personalize them to answer questions that are specific to a student (“When is my history exam?”) and bring them into the classroom. Chronicle of Higher Ed

AI can be used to tackle administrative functions from financial aid to the registrar’s office. At Arizona State University, AI is rewriting course descriptions to make them more informative for prospective students and improve search performance on the web. Chronicle of Higher Ed

Officials at companies that provide AI services to higher education tell me that colleges are sometimes reluctant to buy the products because they don’t want them to be seen as replacing people. But until campuses use AI in that way — to take over for people in jobs that involve processing information or doing repeatable tasks — then we won’t reverse or slow down the upward-cost trajectory of higher education, where most tuition dollars are now spent on functions outside of the classroom. Chronicle of Higher Ed

Rolling out AI software that can map prior admissions decisions, assess the performance of current students with similar profiles, and make preliminary recommendations will allow admissions officers to spend far less time reading essays and combing through student activities. Chronicle of Higher Ed

Vanderbilt University's Peabody School has apologized to students for using artificial intelligence to write an email about a mass shooting at another university, saying the distribution of the note did not follow the school's usual processes. CNN 

That the same entrepreneurs marketing text generators for writing papers market the same systems for grading papers suggests a bizarre software-to-software relay, with hardly a human in the loop. Who would benefit from such “education”? Public Books

The AI-in-education market is expected to grow from approximately $2 billion in 2022 to more than $25 billion in 2030, with North America accounting for the largest share. Inside Higher Ed 

What if we rearranged our universities around departments of critical thinking rather than departments of chemistry? Create a school of applied ethics rather than a school of business? We can create certificates for innovation and creative thinking that challenge our students to think like humans, not computers. We also need to ensure part of higher education is the development of human relationships. Businesses have been clamoring for this for years, but higher education still treats soft skills as a condiment, not the main course. Inside Higher Ed 

If those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do? Chronicle of Higher Ed

Institutions will need to have their needs and priorities clear … before buying marking machines or teaching robots or any other such thing. EdSurge

For science and the process of grant writing to be improved, two things have to happen: first, the pointless sections (those that might as well have been written by a computer, and could just as easily be answered by one) need to be removed; and second, the sections that remain need to be changed in scope, to be shorter and action-centred. Nature

Are we going to fill the time saved by AI with other low-value tasks, or will it free us to be more disruptive in our thinking and doing? I have some unrealistically high hopes of what AI can deliver. I want low-engagement tasks to take up less of my working day, allowing me to do more of what I need to do to thrive (thinking, writing, discussing science with colleagues). And then, because I won’t have a Sisyphean to-do list, I’ll be able to go home earlier — because I’ll have got more of the thinking, writing and discussing done during working hours, rather than having to fit them around the edges. Nature

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

14 quotes worth reading about AI use in academic papers                       

17 articles about AI & Academic Scholarship            

Businesses Blaming the AI

Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place. 

Ted Chiang writing in The New Yorker

A Work Performance Predictor

Last summer, Gallup released the results of a survey that asked employees whether they had a "best friend" at work. In short, only 2 out of 10 employees said they "strongly agree" with the idea that they do in fact have one.

The idea is "controversial," according to Gallup. "But one stubborn fact about this element of engagement cannot be denied: It predicts performance." 

Specifically, Gallup says answering yes to the "best friend at work" question can help with other specific areas of employee engagement, including whether employees reported liking their coworkers in general, being recognized for success, and even just whether they "had a lot of enjoyment" at work on a given day.

Bill Murphy writing in his newsletter Understandably

14 quotes worth reading about AI use in academic papers

Science, Elsevier, and Nature were quick to react, updating their respective editorial and publishing policies and stating unconditionally that ChatGPT can’t be listed as an author on an academic paper. It is very hard to define exactly how GPT is used in a particular study, as some publishers demand, the same way it is near impossible for authors to detail how they used Google as part of their research. Scholarly Kitchen

An app I have found useful every day is Perplexity. I am most taken with the auto-embedded citations of sources in the response, much like we do in research papers. This is most useful for deeper digging into topics. Inside Higher Ed 

Tools such as Grammarly, Writeful, and even Microsoft grammar checker are relied upon heavily by authors. If an author is using GPT for language purposes, why would that need to be declared and other tools not? What if authors get their ideas for new research from ChatGPT or have GPT analyze their results but write it up in their own words; might that be ok because the author is technically doing the writing? I believe that self-respecting researchers won’t use GPT as a primary source the same way they don’t use Wikipedia in that manner. However, they can use it in a myriad of other ways including brainstorming, sentence construction, data crunching, and more. The onus of responsibility for the veracity of information still falls on the researcher, but that doesn’t mean we should rush to ban it because some might use it as a way to cut corners. Scholarly Kitchen

An academic paper entitled Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT was published this month in an education journal, describing how artificial intelligence (AI) tools “raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism”. What readers – and indeed the peer reviewers who cleared it for publication – did not know was that the paper itself had been written by the controversial AI chatbot ChatGPT. The Guardian

An application that holds great potential to those of us in higher ed is ChatPDF! It is what you might imagine, a tool that allows you to load a PDF of up to 120 pages in length. You can then apply the now-familiar ChatGPT analysis approach to the document itself. Ask for a summary. Dig into specifics. This will be a useful tool for reviewing research and efficiently understanding complex rulings and other legal documents. Inside Higher Ed

If you’ve used ChatGPT or other AI tools in your research, (for APA) describe (in your academic paper) how you used the tool in your Method section or in a comparable section of your paper. For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction. In your text, provide the prompt you used and then any portion of the relevant text that was generated in response. You may also put the full text of long responses from ChatGPT in an appendix of your paper or in online supplemental materials, so readers have access to the exact text that was generated. If you create appendices or supplemental materials, remember that each should be called out at least once in the body of your APA Style paper. APA Style 

Outside of the most empirical subjects, the determinants of academic status will be uniquely human — networking and sheer charisma — making it a great time to reread Dale Carnegie’s How to Win Friends and Influence People. Chronicle of Higher Ed 

The US journal Science announced an updated editorial policy, banning the use of text from ChatGPT and clarifying that the program could not be listed as an author. Leading scientific journals require authors to sign a form declaring that they are accountable for their contribution to the work. Since ChatGPT cannot do this, it cannot be an author. The Guardian

A chatbot was deemed capable of generating quality academic research ideas. This raises fundamental questions around the meaning of creativity and ownership of creative ideas — questions to which nobody yet has solid answers. Our suspicion here is that ChatGPT is particularly strong at taking a set of external texts and connecting them (the essence of a research idea), or taking easily identifiable sections from one document and adjusting them (an example is the data summary — an easily identifiable “text chunk” in most research studies). A relative weakness of the platform became apparent when the task was more complex - when there were too many stages to the conceptual process. The Conversation

Already some researchers are using the technology. Among only the small sample of my work colleagues, I’ve learned that it is being used for such daily tasks as: translating code from one programming language to another, potentially saving hours spent searching web forums for a solution; generating plain-language summaries of published research, or identifying key arguments on a particular topic; and creating bullet points to pull into a presentation or lecture. Chronicle of Higher Ed 

For most professors, writing — even bad first drafts or outlines — requires our labor (and sometimes strain) to develop an original thought. If the goal is to write a paper that introduces boundary-breaking new ideas, AI tools might reduce some of the intellectual effort needed to make that happen. Some will see that as a smart use of time, not evidence of intellectual laziness. Chronicle of Higher Ed

The quality of scientific research will erode if academic publishers can't find ways to detect fake AI-generated images in papers. In the best-case scenario, this form of academic fraud will be limited to just paper mill schemes that don't receive much attention anyway. In the worst-case scenario, it will impact even the most reputable journals and scientists with good intentions will waste time and money chasing false ideas they believe to be true. The Register 

Many journals’ new policies require that authors disclose use of text-generating tools and ban listing a large language model such as ChatGPT as a co-author, to underscore the human author’s responsibility for ensuring the text’s accuracy. That is the case for Nature and all Springer Nature journals, the JAMA Network, and groups that advise on best practices in publishing, such as the Committee on Publication Ethics and the World Association of Medical Editors. Science

Just as publishers begin to get a grip on manual image manipulation, another threat is emerging. Some researchers may be tempted to use generative AI models to create brand-new fake data rather than altering existing photos and scans. In fact, there is evidence to suggest that sham scientists may be doing this already. A spokesperson for Uncle Sam's defense research agency confirmed it has spotted fake medical images in published science papers that appear to be generated using AI. The Register

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

13 Quotes worth reading about AI’s impact on College Administrators & Faculty

17 articles about AI & Academic Scholarship                                        

7 Free Webinars this week about AI, journalism, enterprise & faith-based reporting & more

Mon, June 12 - Producing Accessible Digital Content for your Nonprofit

What: The presenters, both of whom live with vision loss, will demonstrate what end users experience when interacting with accessible and inaccessible content. The differences between accessible and usable content will also be discussed. Finally, the presenters will highlight tips and tricks for producing content that anyone can access and use.

Who: Jim Denham and Denise Jess of the Wisconsin Council of the Blind and Visually Impaired

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Learning Lab

More Info


Wed, June 14 – Artificial Intelligence in Media & Publishing

What: Explore how to start applying AI to the customer lifecycle and scale capability by taking into account the elements which set AI apart from other fields of data analysis.

When: 11 am, BST (6 am, Eastern)

Where: Zoom

Cost: Free

Sponsor: FT Strategies

More Info


Wed, June 14 - The Future of Journalism

What: This webinar will shed light on how newspapers can continue to be a bastion of truth for the masses.

Who: Kevin Merida, Executive Editor of the Los Angeles Times; Tracy Williams (moderator), Olmstead Williams Communications

When: 11:30 am, Pacific

Where: Zoom

Cost: Free

Sponsor: Society of Professional Journalists

More Info


Wed, June 14 - Generative AI’s Impact on Law 

What: This virtual briefing for journalists and lawyers alike will assess the risks and opportunities of generative AI.

Who: Marie C. Baca, adviser, Coastside News Group & New Mexico Local News Fund; Jamie Buckley, chief product officer, LexisNexis Legal & Professional; Calum Chace, author, “Surviving AI”; Jennifer Conrad, reporter, Inc; John D. Villasenor, professor and co-director, UCLA Institute for Technology, Law and Policy

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: The National Press Foundation and RELX, a global provider of analytics tools, including LexisNexis.

More Info


Wed, June 14 - Building your online profile in journalism and media

What: A workshop on how to enhance your online profile including your portfolio, LinkedIn, and other social media platforms.

Who: Emma Carew Grovum, the founder of Kimbap Media and The Marshall Project's Director of Careers and Culture.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Center for Cooperative Media

More Info


Wed, June 14 - Bringing solutions journalism into faith-based reporting

What: This workshop will explore techniques for bringing the solutions approach into faith-based news: How to identify stories; how to report them; and how to construct powerful narratives. Come with a great story idea you're hoping to pursue, or the issue you want to explore — and we'll help you make it happen.

Who: Bekah McNeel, a freelance journalist whose work has appeared in Christianity Today, the Public Justice Review, the Christian Science Monitor, Sojourners, and numerous local outlets; and Keith Hammonds, who leads SJN's work with faith-based media.

When: 12 noon, Eastern

Where: Zoom

Cost: Free

Sponsor: Solutions Journalism Network

More Info


Fri, June 16 - How to do more enterprise reporting while still feeding the daily beast

What: Think you don’t have enough time for watchdog journalism? Overwhelmed with keeping the daily machine running? Get practical advice for creating a newsroom culture that values public service and accountability reporting, no matter the staff size. The first step is deciding what not to do or to do differently. Some newsrooms are growing audiences while producing less content. The key is using data to determine which types of content are not contributing to audience engagement.

Who: Chris Coates, executive editor of the Times-Dispatch

When: 1 pm, Central

Where: Zoom

Cost: Free

Sponsor: The West Virginia Press Association

More Info

13 thoughts on the problems of teaching with AI

There is a reason why educational video games are not as engaging as regular video games. There is a reason why AI-generated educational videos will never be as engaging as regular videos. Brenda Laurel pointed to the ‘chocolate-covered broccoli’ problem over 20 years ago … her point still stands. EdSurge

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Jenna Lyle, a spokesperson for the New York City Department of Education. Mashable

This tech is being primarily pitched as a money-saving device—so it will be taken up by school authorities that are looking to save money. As soon as a cash-strapped administrator has decided that they’re happy to let technology drive a whole lesson, then they no longer need a highly-paid professional teacher in the room—they just need someone to trouble-shoot any glitches and keep an eye on the students. EdSurge 

Some commentators are urging teachers to introduce ChatGPT into the curriculum as early as possible (a valuable revenue stream and data source). Students, they argue, must begin to develop new skills such as prompt engineering. What these (often well-intentioned) techno-enthusiasts forget is that they have decades of writing solo under their belts. Just as drivers who turn the wheel over to flawed autopilot systems surrender their judgment to an over-hyped technology, so a future generation raised on language models could end up, in effect, never learning to drive. Public Books

Some professors have leapt out front, producing newsletters, creating explainer videos, and crowdsourcing resources and classroom policies. The one thing that academics can’t afford to do, teaching and tech experts say, is ignore what’s happening. Sooner or later, the technology will catch up with them, whether they encounter a student at the end of the semester who may have used it inappropriately, or realize that it’s shaping their discipline and their students’ futures in unstoppable ways. Chronicle of Higher Ed

(There is a) notion that college students (can) learn to write by using chatbots to generate a synthetic first draft, which they afterwards revise, overlooks the fundamentals of a complex process. Since text generators do a good job with syntax, but suffer from simplistic, derivative, or inaccurate content, requiring students to work from this shallow foundation is hardly the best way to empower their thinking, hone their technique, or even help them develop a solid grasp of an LLM’s limitations. The purpose of a college research essay is not to teach students how to fact-check and gussy up pre-digested pablum. It is to enable them to develop and substantiate their own robust propositions and truth claims. Public Books  

If a professor runs students’ work through a detector without informing them in advance, that could be an academic-integrity violation in itself.  The student could then appeal the decision on grounds of deceptive assessment, “and they would probably win.” Chronicle of Higher Ed

We are dangerously close to creating two strata of students: those whom we deem smart and insightful and deeply thoughtful, if sometimes guilty of a typo, and those who seem less engaged with the material, or less able to have serious thoughts about it. Inside Higher Ed

The challenge here is in communicating to students that AI isn’t a replacement for real thinking or critical analysis, and that heavy reliance on such platforms can lead away from genuine learning. Also, because AI platforms like ChatGPT retrieve information from multiple unknown sources, and the accuracy of the information cannot be guaranteed, students need to be wary about using the chatbot’s content. The Straits Times 

It seems futile for faculty members to spend their energies figuring out what a current version can’t do. Chronicle of Higher Ed

It is important to be aware that ChatGPT’s potential sharing of personal information with third parties may raise serious privacy concerns for your students and perhaps in particular for students from marginalized backgrounds. Barnard College

How might chatting with AI systems affect vulnerable students, including those with depression, anxiety, and other mental-health challenges? Chronicle of Higher Ed 

Students need considerable support to make sure ChatGPT promotes learning rather than getting in the way of it. Some students find it harder to move beyond the tool’s output and make it their own. “It needs to be a jumping-off point rather than a crutch.” MIT Tech Review

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                               

27 thoughts on teaching with AI            

22 quotes about cheating with AI & plagiarism detection        

The Wrong People

Stop spending time with the wrong people.  Life is too short to spend time with people who suck the happiness out of you. If someone wants you in their life, they’ll make room for you. You shouldn’t have to fight for a spot. Never, ever insist yourself to someone who continuously overlooks your worth. And remember, it’s not the people that stand by your side when you’re at your best, but the ones who stand beside you when you’re at your worst that are your true friends.

Marc Chernoff

27 thoughts on teaching with AI

Even as some educators raise concerns, others see potential for new AI technology to reduce teacher workloads or help bring teaching materials to life in new ways. EdSurge

Professors can use the new technology to encourage students to engage in a range of productive ChatGPT activities, including thinking, questioning, debating, identifying shortcomings and experimenting. Inside Higher Ed 

Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School of Business, said ChatGPT has already changed his expectations of his students. “I expect them to write more and expect them to write better,” he said. “This is a force multiplier for writing. I expect them to use it.” Forbes

ChatGPT can create David, said David Chrisinger, who directs the writing program at the Harris School of Public Policy at the University of Chicago, referring to the famous Michelangelo statue. “But his head is too big and his legs are too short. Now it’s our job to interrogate the evidence and improve on what it gives us,” he said. Wall Street Journal 

For some educators, the chatbot helps to make their job easier by creating lesson plans and material for their students. Mashable

We can teach students that there is a time, place and a way to use GPT-3 and other AI writing tools. It depends on the learning objectives. Inside Higher Ed

Judging from the reaction on TikTok, teachers on the app see ChatGPT as a tool to be treated the same way calculators and cell phones are used in class — as resources to help students succeed but not do the work for them. Mashable

Faculty members need time to play with new tools and explore their implications. Administrators can carve out time for faculty training support. How does bias play out in your area within the model? Inside Higher Ed

Here’s what I plan to do about chatbots in my classes: pretty much nothing. Washington Post

If a program can do a job as well as a person, then humans shouldn’t duplicate those abilities; they must surpass them. The next task for higher education, then, is to prepare graduates to make the most effective use of the new tools and to rise above and go beyond their limitations. That means pedagogies that emphasize active and experiential learning, that show students how to take advantage of these new technologies and that produce graduates who can do those things that the tools can’t. Inside Higher Ed 

Are new rubrics and assignment descriptions needed? Will you add an AI writing code of conduct to your syllabus? Divisions or departments might agree on expectations across courses. That way, students need not scramble to interpret academic misconduct across multiple courses. Inside Higher Ed

We should be telling our undergraduates that good writing isn’t just about subject-verb agreement or avoiding grammatical errors—not even good academic writing. Good writing reminds us of our humanity, the humanity of others and all the ugly, beautiful ways in which we exist in the world. Inside Higher Ed

(Some) professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract. Chronicle of Higher Ed 

After discovering my first ChatGPT essay, I decided that going forward, students can use generative A.I. on assignments, so long as they disclose how and why. I’m hoping this will lead to less banging my head against the kitchen table–and, at its best, be its own kind of lesson. Slate

There’s plenty to agree on, such as motivating students to do their own work, adapting teaching to this new reality, and fostering AI literacy. Chronicle of Higher Ed

As academe adjusts to a world with ChatGPT, faculty will need to find fresh ways to assess students’ writing. The same was true when calculators first began to appear in math classrooms, and professors adapted the exams. “Academic integrity is about being honest about the way you did your work.” Spell checkers, David Rettinger, president emeritus at the International Center for Academic Integrity, pointed out, are a prime example of artificial intelligence that may have been controversial at first, but are now used routinely without a second thought to produce papers. Chronicle of Higher Ed

For those tasked to perform tedious and formulaic writing, we don’t doubt that some version of this tool could be a boon. Perhaps ChatGPT’s most grateful academic users will not be students, but deans and department heads racking their brains for buzzwords on “excellence” while talking up the latest strategic plan. Public Books

These technologies introduce opportunities for educators to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills. World Economic Forum

Khan Academy founder Sal Khan says the latest version of the generative AI engine makes a pretty good tutor. Axios 

Information that was once dispensed in the classroom is now everywhere: first online, then in chatbots. What educators must now do is show students not only how to find it, but what information to trust and what not to, and how to tell the difference. MIT Tech Review 

Don’t wait until you feel like an expert to discuss AI in your courses. Learn about it in class alongside your students. Chronicle of Higher Ed

The old education model in which teachers deliver information to later be condensed and repeated will not prepare our students for success in the classroom—or the jobs of tomorrow. Brookings

What if we could train it on our own rules and regulations, so if it hits an ethical issue or a problem, it could say to students: ‘you need to stop here and take that problem to the ethical lead.’ Columbia Journalism Review

I look at it as the future of: What if we could program it to be our substitute teacher at school? EdSurge

Once you start to think of a chatbot as a tool, rather than a replacement, its possibilities become very exciting. Vice

Training ourselves and our students to work with AI doesn’t require inviting AI to every conversation we have. In fact, I believe it’s essential that we don’t. Inside Higher Ed 

A US survey of 1,002 K–12 teachers and 1,000 students between 12 and 17, commissioned by the Walton Family Foundation in February, found that more than half the teachers had used ChatGPT—10% of them reported using it every day—but only a third of the students. Nearly all those who had used it (88% of teachers and 79% of students) said it had a positive impact. MIT Tech Review

For my students and for the public, the quickest way to feel hopeless in the face of seemingly unstoppable technological change is to decide that it is all-powerful and too complicated for an ordinary person to understand. Slate

Consider the tools relative to your course. What are the cognitive tasks students need to perform without AI assistance? When should students rely on AI assistance? Where can an AI aid facilitate a better outcome? Are there efficiencies in grading that can be gained? Are new rubrics and assignment descriptions needed? Will you add an AI writing code of conduct to your syllabus? Do these changes require structural shifts in timetabling, class size or number of teaching assistants? Inside Higher Ed

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments                                                                 

22 quotes about cheating with AI & plagiarism detection     

13 thoughts on the problems of teaching with AI      

22 examples of teaching with AI

(A professor) plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses. “What’s happening in class is no longer going to be, ‘Here are some questions — let’s talk about it between us human beings,’” he said, but instead “it’s like, ‘What also does this alien robot think?’” New York Times

Prof Jim is a software company that can turn existing written materials—like textbooks, Wikipedia pages or a teacher’s notes—into these animated videos at the push of a button. A teacher could use the software to turn a Wikipedia page about, say, the Grand Canyon into a video. EdSurge

Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones. New York Times

There is no understanding or intent behind AI outputs. But warning students about the mistakes that result from this lack of understanding is not enough. It’s easy to pay lip service to the notion that AI has limitations and still end up treating AI text as more reliable than it is. There’s a well-documented tendency to project onto AI; we need to work against that by helping students practice recognizing its failings. One way to do this is to model generating and critiquing outputs and then have students try on their own. Can they detect fabrications, misrepresentations, fallacies and perpetuation of harmful stereotypes? If students aren’t ready to critique ChatGPT’s output, then we shouldn’t choose it as a learning aid. Inside Higher Ed

ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human “rather than asking students to write and perform like robots.” MIT Tech Review 

Reverting to analog forms of assessment, like oral exams, can put students with disabilities at a disadvantage. And outright bans on AI tools could cement a culture of distrust. “It’s going to be harder for students to learn in an environment where a teacher is trying to catch them cheating,” says Trust. “It shifts the focus from learning to just trying to get a good grade.” Wired 

I’ve given students assignments to “cheat” on their final papers with text-generating software. In doing so, most students learn—often to their surprise—as much about the limits of these technologies as their seemingly revolutionary potential. Some come away quite critical of AI, believing more firmly in their own voices. Others grow curious about how to adapt these tools for different goals or about professional or educational domains they could impact. Inside Higher Ed 

ChatGPT can play the role of a debate opponent and generate counterarguments to a student’s positions. By exposing students to an endless supply of opposing viewpoints, chatbots could help them look for weak points in their own thinking. MIT Tech Review
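
As a rough sketch of how one might set that up with the OpenAI Python client: the system prompt wording, model name, and function name below are illustrative assumptions, not a tool any of these sources describe.

```python
# Sketch: prompting a chat model to act as a debate opponent, per the
# idea above. The system prompt, model name, and function name are
# illustrative assumptions, not a documented classroom tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def counterarguments(position: str) -> str:
    """Return the strongest counterarguments to a student's position."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You are a debate opponent. Rebut the user's position "
                "with the strongest counterarguments you can muster."
            )},
            {"role": "user", "content": position},
        ],
    )
    return response.choices[0].message.content

print(counterarguments("Homework should be abolished."))
```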

Assign reflection to help students understand their own thought processes and motivations for using these tools, as well as the impact AI has on their learning and writing. Inside Higher Ed

In March, Quizlet updated its app with a feature called Q-Chat, built using ChatGPT, that tailors material to each user’s needs. The app adjusts the difficulty of the questions according to how well students know the material they’re studying and how they prefer to learn. Some educators think future textbooks could be bundled with chatbots trained on their contents. Students would have a conversation with the bot about the book’s contents as well as (or instead of) reading it. The chatbot could generate personalized quizzes to coach students on topics they understand less well. MIT Tech Review
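
A toy version of that adaptive loop might look like the sketch below; the level names and the one-step promotion policy are illustrative assumptions, not Quizlet’s actual algorithm.

```python
# Toy sketch of the adaptive-difficulty idea described above: move the
# learner up a level after a correct answer and down after a mistake.
# Levels and policy are illustrative assumptions, not Q-Chat's logic.
LEVELS = ["intro", "core", "stretch"]

def next_level(current: str, was_correct: bool) -> str:
    """Step one level up or down the ladder, clamped at the ends."""
    i = LEVELS.index(current)
    if was_correct:
        return LEVELS[min(i + 1, len(LEVELS) - 1)]
    return LEVELS[max(i - 1, 0)]

level = "intro"
for answer_correct in [True, True, False, True]:
    level = next_level(level, answer_correct)
    print(level)  # core, stretch, core, stretch
```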

Encourage students to use peer-reviewed journals as sources. These types of journals are not available to ChatGPT, so by teaching our students about them and requiring their use in essays, we can ensure that the content being presented is truly original. The Tech Insider

Students must then take apart and improve upon the ChatGPT-generated essay—an exercise designed to teach critical analysis, the craft of precise thesis statements, and a feel for what “good writing” looks like. Wired

Show students examples of inaccuracy, bias, and logical and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Critical AI

Far from being just a dream machine for cheaters, many teachers now believe, ChatGPT could actually help make education better. Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more. MIT Tech Review

When possible, scaffold your assignments to promote revision and growth over time, with opportunities for feedback from peers, TAs, and/or the instructor. Build assignment pre-writing or brainstorming into class time and invite students to share and discuss these ideas in small groups or with the class as a whole. Barnard College

Nontraditional learners could get more out of tools like ChatGPT than out of mainstream methods. It could be an audio-visual assistant where students can freely ask as many clarifying questions as necessary without judgment. Teachers juggling countless individualized education plans could also take advantage of ChatGPT by asking how to curate lesson plans for students with disabilities or other learning requirements. New York Magazine

Discuss students’ potentially diverse motivations for using ChatGPT or other generative AI software. Do they arise from stress about the writing and research process? Time management on big projects? Competition with other students? Experimentation and curiosity about using AI? Grade and/or other pressures and/or burnout? Invite your students to have an honest discussion about these and related questions. Cultivate an environment in your course in which students will feel comfortable approaching you if they need more direct support from you, their peers, or a campus resource to successfully complete an assignment. Barnard College

We will need to teach students to contest it. Students in every major will need to know how to challenge or defend the appropriateness of a given model for a given question. To teach them how to do that, we don’t need to hastily construct a new field called “critical AI studies.” The intellectual resources students need are already present in the history and philosophy of science courses, along with the disciplines of statistics and machine learning themselves, which are deeply self-conscious about their own epistemic procedures. Chronicle of Higher Ed

Spend some time discussing the definition (or definitions) of academic honesty and discuss your own expectations for academic honesty with your students. Be open, specific, and direct about what those expectations are. Barnard College

Experiential learning will become the norm. Everyone will need an internship. Employers will want assurances that a new graduate can follow directions, complete tasks, demonstrate judgment. Chronicle of Higher Ed

Khan Academy released the Khanmigo project, which is able to help students as a virtual tutor or debating partner and helps teachers with administrative tasks such as generating lesson plans. Columbia Journalism Review

One situation in which I have found ChatGPT extremely useful is writing multiple-choice questions. It’s quite easy to write a question and the right answer, but coming up with three plausible wrong answers is tricky. I found that if I prompted ChatGPT with the following: “Write a multi-choice question about <topic of interest> with four answers, and not using ‘all of the above’ as an answer,” it came up with good wrong answers. This was incredibly helpful. Nature
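
As a minimal sketch of that workflow in code, using the quoted prompt verbatim: the OpenAI Python client, model choice, and helper function here are assumptions for illustration, not part of the original account.

```python
# Sketch: generating a multiple-choice question with plausible wrong
# answers, following the prompt pattern quoted above. The model name
# and client setup are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_mcq(topic: str) -> str:
    """Ask the model for one four-option question on `topic`."""
    prompt = (
        f"Write a multi-choice question about {topic} with four answers, "
        "and not using 'all of the above' as an answer."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_mcq("photosynthesis"))
```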

ChatGPT outperformed most of his (journalism) students who were in the early part of the course. But students would have to seek out sources, do on-the-ground reporting, and find the important trends in the data. “And all of that, you’re not gonna get from ChatGPT.” Columbia Journalism Review

Also:

21 quotes about cheating with AI & plagiarism detection

13 quotes worth reading about Generative AI policies & bans

20 quotes worth reading about students using AI

27 quotes about AI & writing assignments

27 thoughts on teaching with AI

22 quotes about cheating with AI & plagiarism detection

13 thoughts on the problems of teaching with AI