Judging Ourselves
We judge ourselves by our intentions and others by their behavior. -Stephen Covey
"Imagine AI as a force multiplier. It will genuinely improve the writing quality and speed of a mediocre student. But that writing will still be below average because the mediocre student cannot recognise the gap between what AI gave them and what excellent output actually looks like. AI amplifies what you bring to it." - Times Higher Ed
1. Learn about generative AI dangers related to biases, privacy concerns, and the impact of AI on vulnerable students. Consider how chatting with AI systems affects vulnerable students, including those with depression, anxiety, and other mental health challenges. (handout: Dangers of AI)
2. Experiment with AI to see if it can enhance your teaching methods and plans. Consider how AI could be ethically used in education and where you draw the line in how you use it to do your work.
3. Talk with students about your expectations regarding the use of generative AI in class. College faculty should include a syllabus statement offering clear guidance regarding expectations for the use of generative AI in the classroom and have open and frank discussions with students about the expectations regarding its use. (handout: AI use cases)
4. Explain to students what counts as AI-enabled plagiarism and when its use is appropriate, especially considering that it is being integrated into many commonly used tools (meaning a blanket ban on its use is nearly impossible). Consider that the answer to this question will change depending on the assignment, the subject, and the learning outcomes.
5. Avoid depending on AI detectors due to their limitations (false positives and legal issues). Rather than focusing on catching cheaters, faculty should focus on developing new pedagogy to address the evolving technology (similar to the rise of the internet).
6. Get students to wrestle with it along with you.
7. Help students learn to fact check AI-generated writing outputs. They need a healthy skepticism.
8. Talk about AI transparency, providing examples.
9. Develop pedagogical options for controlling the use of AI: Pen & paper, Blue Books, oral exams, in-class presentations, the use of Google Docs or other writing tools that track writing history, personalization, concept-mapping, scaffolding assignments, etc. Decide what are the cognitive tasks that students need to perform without AI assistance.
10. Develop new rubrics and assignment descriptions taking generative AI into account. Some assignments should be AI-free by design. Others should actively engage AI, teaching students to evaluate, direct and improve its outputs.
11. Learn AI & double down on what makes you human. It’s never all one-sided; avoid the extreme positions of all-in or all-out and go down both roads. Learn how to use AI skeptically, understanding both what it can do and its limits, and recognizing that keeping up is an ongoing chore. At the same time, focus resources on the other side of the equation: helping students move beyond simply being good at using AI to developing the skills that will become rare and valuable because of AI limitations (including communication, creativity, and flexibility). Help students develop a healthy and ethical use of generative AI as you do this yourself.
12. Prepare students for their careers. They will enter a world where AI usage is expected. Keep in mind that this expectation is that AI will allow employees to do more work faster.
Never forget that only dead fish swim with the stream -Malcolm Muggeridge
As a society, we need to broadly recognize LLMs as intellectual engines without drivers, which unlocks their true potential as digital tools. When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda. -Benj Edwards writing in ArsTechnica
Causal AI – The application of causal inference principles to AI to uncover connections between data points. The goal is to find cause-and-effect relationships. Causal AI uses methods like A/B testing, manipulating specific factors to gauge their impact on user behavior. The result is more precise insights for decision-making, especially when real-time forecasting is needed. In contrast, predictive AI is focused on finding patterns, considering, for instance, users' preferences based on past behavior and user characteristics. Predictive AI finds correlations and trends, but it doesn’t get at the “why” of results.
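The contrast can be made concrete with a toy A/B test. The sketch below (entirely hypothetical data and function names) estimates a causal effect by comparing mean outcomes between a randomized control group and a treatment group:

```python
from statistics import mean

def ab_lift(control, treatment):
    """Estimated causal effect: treatment-minus-control difference in means."""
    return mean(treatment) - mean(control)

# Hypothetical conversion outcomes (1 = converted) after randomly assigning
# users to the old experience (control) or the changed experience (treatment).
control = [0, 0, 1, 0, 1, 0, 0, 1]
treatment = [1, 0, 1, 1, 0, 1, 1, 1]

print(ab_lift(control, treatment))  # 0.375: the change appears to lift conversions
```

Because group assignment is randomized, the difference in means can be read causally; a predictive model fit to the same logs would only surface correlations.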
The medical AI revolution requires rethinking health care’s architecture – Stat
Should you really trust health advice from an AI chatbot? – BBC
AI Startup Has Helped Reverse Thousands of Denied Health Insurance Claims - Bloomberg
The Algorithm Will See You Now: Viz.ai saves critical time in stroke care and helps catch other diseases earlier. – Wall Street Journal
Dozens of AI disease-prediction models were trained on dubious data – Nature
An ‘AI doctor’? An experiment in Utah raises urgent questions. – Washington Post
The ChatGPT Symptom Spiral Be careful asking chatbots about your health. – The Atlantic
Doctors Couldn’t Help Them. They Rolled the Dice With A.I. – New York Times
Why so many Americans are using AI for health guidance – PBS
How to create “humble” AI – MIT
Health AI and the law: Could your chatbot doc testify against you? - Mashable
An Amish Avatar and an A.I. Monk Are Pitching Supplements on Social Media - New York Times
In 5 Doctors Now Use AI In Their Practices, AMA Survey Says – Forbes
Microsoft’s New AI Health Tool Can Read Your Medical Records and Give Advice – Wall Street Journal
Making a 'digital twin' of yourself could revolutionize future surgeries, making medical procedures much more personal – Live Science
I’m a doctor. Here’s what opened my mind about the future of medical care. - Washington Post
AI's big biosecurity blind spot - Axios
How doctors use AI scribes to cut paperwork and focus on patients – Scientific American
Deepfake X-rays are so real even doctors can’t tell the difference – Science Daily
How AI is transforming health care and what it means for the future – CBS News
A.I. Chatbots Want Your Health Records. Tread Carefully. – New York Times
The AI push in health care is deepening medicine’s trust crisis – Stat
AI ethics in Catholic health – Boston College
A “fixed mindset” assumes that our character, intelligence, and creative ability are static givens which we can’t change in any meaningful way, and success is the affirmation of that inherent intelligence, an assessment of how those givens measure up against an equally fixed standard; striving for success and avoiding failure at all costs become a way of maintaining the sense of being smart or skilled.
A “growth mindset,” on the other hand, thrives on challenge and sees failure not as evidence of unintelligence but as a heartening springboard for growth and for stretching our existing abilities. Out of these two mindsets, which we manifest from a very early age, springs a great deal of our behavior, our relationship with success and failure in both professional and personal contexts, and ultimately our capacity for happiness.
The “growth mindset” creates a passion for learning rather than a hunger for approval. Its hallmark is the conviction that human qualities like intelligence and creativity, and even relational capacities like love and friendship, can be cultivated through effort and deliberate practice. Not only are people with this mindset not discouraged by failure, but they don’t actually see themselves as failing in those situations — they see themselves as learning.
Maria Popova writing in BrainPickings
A data scientist at a software company said he and his co-workers used to have to write code for every new feature. Now they just come up with the idea and the A.I. writes the code and runs the analysis. His company’s interview process, which was once dominated by questions about coding and rewarded socially awkward nerds, now focuses on whether job candidates can identify good ideas and seem capable of persuading colleagues to back them, he said. -New York Times
Algorithms - Direct, specific instructions for computers created by a human through coding that tell the computer how to perform a task. Like a cooking recipe, this set of rules has a finite number of steps. More specifically, it is code that follows the algorithmic logic of “if”, “then”, and “else.” An example of an algorithm would be: IF the customer orders size 13 shoes, THEN display the message ‘Sold out, Sasquatch!’; ELSE ask for a color preference.
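That recipe translates directly into code. Here is a minimal sketch of the IF/THEN/ELSE example from the definition (the function name and messages are illustrative):

```python
def shoe_order_response(size, color=None):
    """Toy algorithm: a finite set of explicit rules, like a recipe."""
    # IF the customer orders size 13 shoes...
    if size == 13:
        # THEN display the sold-out message.
        return "Sold out, Sasquatch!"
    # ELSE ask for a color preference (or confirm the order if one was given).
    if color is None:
        return "What color would you like?"
    return f"Great, size {size} in {color} coming right up."

print(shoe_order_response(13))       # Sold out, Sasquatch!
print(shoe_order_response(9))        # What color would you like?
```

Every step is spelled out by a human in advance, which is what distinguishes a classic algorithm from a machine-learning model that infers its own rules from data.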
My life would be complete if, before I die, I…
The overlooked way AI could speed hiring and support workers - Washington Post
How ‘Jagged Intelligence’ Can Reframe the A.I. Debate – New York Times
What "Jagged Intelligence" Could Mean for STEM Careers - Techoly
That Meeting You Hate May Keep A.I. From Stealing Your Job – New York Times
New AI jobs risk paper posits less doom and gloom - Axios
ProPublica journalists walk off the job in first U.S. newsroom strike over AI – Harvard’s Nieman Lab
The Workers Opting to Retire Instead of Taking On AI – Wall Street Journal
MIT study challenges AI job apocalypse narrative – Axios
Take my job, AI! - Jeff Zych
What to do if your employer is requiring you to use AI – Fast Company
Women are getting less recognition than men for using AI - Axios
How AI Damages Work Relationships—and Where It Can Actually Help – Harvard Business Review
Why Gen Z wants more office work - Axios
New AI tool predicts cancer spread with surprising accuracy – Science Daily
Why You Should Stop Worrying About AI Taking Data Science Jobs – Towards Data Science
The AI employment dilemma that impacts every worker – Axios (video)
Imagine Losing Your Job to the Mere Possibility of AI - The Atlantic
Jobs least and most vulnerable to AI – Washington Post
This is the fastest-growing job for young workers, LinkedIn says – CBS News
AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet – 404 Media
Generative AI changes how employees spend their time – MIT
Job Cuts Driven by A.I. Are Rising on Wall Street - New York Times
AI Washing - This refers to a company’s misleading claims about its use of AI. It’s a marketing tactic that exaggerates the amount of AI technology used in a product to make it appear more advanced than it actually is. AI washing takes its name from greenwashing, in which companies make false or misleading claims about their positive impact on the environment. The SEC has leveled fraud charges against companies for misleading investors about their use of AI.
The risk of skills atrophy is very real. People of my generation who had to learn to do things the hard way are benefiting the most from these tools. If you’re a grad student now and you’re trying to decide whether to read your data-methods textbook or just ask ChatGPT to run this regression for you, that’s a very tempting thing. - Alexander Kustov, a political scientist at the University of Notre Dame in the Chronicle of Higher Ed
One of the most memorable scenes in the movie Jerry Maguire climaxes with the main character telling his estranged wife, “You complete me.” Many people understand the line to mean "I'm not a whole person without you." As if a person is like a machine missing a critical part until the "right one" comes along. But you could also hear it as a statement of realization that "I finally see how we fit together." Like pieces of a jigsaw puzzle. Or better yet, like two great works of art. The paintings, sculptures or rugs are beautiful on their own, yet woven together they create a new, compelling and intricate tapestry of vibrant colors.
Stephen Goforth
Where Does Publishing’s A.I. Problem Leave Authors and Readers? – New York Times
Dozens of AI disease-prediction models were trained on dubious data – Nature
Frontiers issues AI guidance spanning full publishing lifecycle – Research Information
Tackle ‘AI slop’ in education research ‘or lose teacher trust’ – Times Higher Ed
Plagiarised research passed automated tests, and I detected it – but only because it copied my work – Conversation
If a Large Language Model can replicate your scientific contribution, the problem is not the LLM – Nature
Bloodhound code sniffs out copied-and-pasted numerical data – Retraction Watch
Scientists Invented a Fake Disease Caused by Blue Light—Now It's in Medical Papers - Inc
AI Is a Better Researcher Than You: That claim got a political scientist denounced. Is it true? – Chronicle of Higher Ed
Cite unseen: when AI hallucinates scientific articles – Science.org
Hallucinated citations are polluting the scientific literature. What can be done? - Nature
Anonymisation in research must be overhauled for AI era – Research Professional News
What is p hacking, is it bad, and can you get AI to do it for you? – Towards Data Science
Policies Permitting LLM Use for Polishing Peer Reviews Are Currently Not Enforceable – ArXiv
A citation alert led researchers to a network of fake articles. But who is benefiting? – Retraction Watch
More AI will not beat the Red Queen - Wonkhe
STM Plants a Flag About Responsible Use of Research Content in GenAI – Scholarly Kitchen
Prompt injection in manuscripts: exploiting loopholes or crossing ethical lines? – Springer
Seeing Is Believing? Scientific Misconduct and the Detection of Problematic Images – International Anesthesia Research Society
How to build an AI scientist: first peer-reviewed paper spills the secrets - Nature
Major conference catches illicit AI use — and rejects hundreds of papers - Nature
An AI-authored paper just passed peer review. The scientific community isn’t ready – Scientific American
Wikipedia Bans AI-Generated Content – 404 Media
The European Research Council sets out firm line on use of AI in peer review – Research Professional News
AI models fail to accurately pick out which social science studies could be replicated - OSF
Restoring Trust in Science: Storytelling, AI, and Integrity in Scholarly Publishing – ISMPP (webinar recording)
Temperature - A setting within some generative AI models that determines the randomness of the output. Temperature helps balance the model’s outputs between predictability and creativity. The higher the temperature, the more creative the output, along with more randomness and hallucinations. The lower the setting, the more predictable the output, but with less creativity.
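Under the hood, temperature rescales the model’s raw next-token scores before they are turned into probabilities. A minimal sketch (the logits here are made up, not from any real model):

```python
import math

def temperature_softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; low temperature sharpens the
    distribution (predictable), high temperature flattens it (random)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # hypothetical next-token scores
low = temperature_softmax(logits, 0.2)    # near-greedy: top token dominates
high = temperature_softmax(logits, 2.0)   # flatter: other tokens get picked more
```

Many model APIs expose this directly as a temperature setting: values near zero make the top-scoring token dominate almost every time, while higher values spread probability across more tokens.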
As A.I. makes the production of knowledge work more and more efficient, the job of presenting, debating, lobbying, arm-twisting, reassuring or just plain selling the work appears to be rising in importance. And the need for those sometimes messy human tasks may limit the number of people A.I. displaces. -New York Times
My life is my message – Gandhi
What: In this session, nonprofit leaders will explore essential accessibility concepts, common website challenges, and clear strategies to improve usability for people with disabilities. Walk away with practical guidance you can apply immediately to strengthen your online presence, increase engagement, and deepen your impact.
Who: Erin Mastrantonio, Elevation Web.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Learning
What: We will explore the common patterns appearing across AI literacy guidebooks from districts across the country. Instead of focusing on individual districts, this session curates the best ideas that are rising to the top and highlights practical approaches that schools are using to build responsible, confident AI use.
Who: Matthew Winters, Artificial Intelligence Educational Specialist, Utah State Board of Education; Jennifer Ehehalt, Former Educator, Current Senior Regional Manager, Common Sense Education; Sue Thotz, Former Educator, Current Director, Education Outreach, Common Sense Media.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Common Sense Education
What: We will discuss how using AI to analyze large collections of data can shed light on the efficacy of professional learning.
Who: Lisa Schmucki, Founder and CEO of edWeb.net; Thor Prichard, President and CEO of Clarity Innovations.
When: 4 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: EdWeb.net
What: A session for students, researchers, faculty, and staff who want to use Codex to take action in their daily routines and workflows. It is designed for the entire campus community—not just developers or technical users—making it accessible across roles and levels of technical experience. This session introduces Codex from the perspective of practical use, showing how it can support productivity and creativity and reduce administrative burden across campus.
Who: Gaurav Kaila, AI Deployment Manager, OpenAI; Shaig Abduragimov, Solutions Engineering Education, OpenAI.
When: 5 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: A look at what makes a freelance or senior-level CV stand out, whether you are pitching for commissions, applying for contracts, or positioning yourself for consultancy and leadership opportunities.
Who: ITN recruitment consultant Dan Sado.
When: 7:30 am, Eastern
Where: Zoom
Cost: members, £10; nonmembers, £15
Sponsor: Women in Journalism
What: In this session, we’ll talk through what pipeline modernization actually looks like in practice. We'll cover when change data capture (CDC) is the right move versus when it's overkill, how to approach hybrid environments where legacy and cloud systems need to coexist, and what separates teams that modernize incrementally from those that get stuck in planning mode.
Who: Kim Fessel, Jess Ramos of Big Data Energy; Manish Patel, GM of Data Integration at CData.
When: 9 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Towards Data Science
What: Join us for an engaging, forward-looking session exploring the key digital marketing trends in 2026. Learn what’s next in content marketing, search, AI-driven personalization, and automation so you can refine your strategy and stay ahead of the competition.
Who: Digital Marketing Strategist Ray Sidney-Smith.
When: 10 am, Eastern
Where: Zoom
Cost: $45
Sponsor: Small Business Development Center, Duquesne University
What: We will share what we’ve learned about going all-in on AI and give a practical demonstration of what that looks like, from briefing to brand consistency.
Who: Phillip Maggs, Director of AI Product, Superside; Juliana Paba, Senior Project Manager, Superside.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Superside
What: Join OpenAI Academy for an introductory session on Codex, designed for anyone curious about building with AI—no technical experience required. We’ll start with a quick overview of what Codex is, key definitions, and how it works, before moving into live demonstrations of what you can create as a nontechnical user.
Who: Aaron Wilkowitz, Solutions Engineer, OpenAI.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This workshop will assist you in understanding the fundamentals of employment discrimination law applicable to news organizations and journalistic endeavors, including differences between employees and contractors; discrimination in the context of hiring, discipline, and termination; different forms that discrimination takes; and responses to incidents of discrimination. The session also explores harassment, a close legal cousin of discrimination.
Who: Anaeli Petisco-Rojas, Vice President, Employment Law, TelevisaUnivision; Jamila Brinson, Partner, Labor and Employment, Jackson Walker LLP.
When: 6 pm, Eastern
Where: Zoom
Cost: Free to members
Sponsor: National Association of Hispanic Journalists.
What: You’ll explore where AI tools can genuinely help with analysis, extraction and transformation and how to find a treasure trove of stories buried in datasets. The focus is on practical newsroom tips and maintaining editorial oversight while working more efficiently.
Who: Paul Bradshaw, Data Journalist, BBC.
When: 7:30 am, Eastern
Where: Zoom
Cost: members, £15; nonmembers, £25
Sponsor: Women in Journalism
What: How practical, everyday journalism—housing guides, school updates, local government coverage that people can use—has become a direct driver of reader revenue, stronger habits, and higher advertiser relevance.
Who: Jeff Elgie, CEO, Village Media, Canada.
When: 9 am, Eastern
Where: Zoom
Cost: Free
Sponsor: World Association of News Publishers
What: We'll cover: An overview of AI and ChatGPT; Best practices for writing good prompts; Demos of content creation, data analysis, and image generation; How to discover use cases of ChatGPT at work.
Who: Juliann Igo, GTM, OpenAI.
When: 9:45 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: Industry experts will share how learning organizations are using AI to optimize, automate and enhance content development workflows. Sessions will explore best practices for navigating multiple AI tools while maintaining consistency, quality and alignment with learning goals.
Who: Thomas Magnifico, VP of Strategic Partnerships, D-ID; Danny Pichardo, Director of Customer Success – US, D-ID.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Industry
What: Join us to get a firsthand look at how Adobe Learning Manager brings AI to every stage of the learning journey - including personalized recommendations, deep semantic search, conversational AI Assistants, and AI‑driven coaching for role‑based practice. We’ll also share a practical, forward‑looking view of how generative AI will influence the next generation of learning design.
Who: Justin Seeley, Learning Evangelist, Adobe.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Adobe Learning Management
What: This webinar, designed for reporters covering science either occasionally or full-time, teaches basic principles about recognizing science worth reporting on and doing it justice in your coverage.
Who: Freelance science reporter Elena Renken; Ph.D. neuroscientist Dr. Tori Espensen.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: SciLine
What: Next step for growing your skills. Join the OpenAI team to learn how to conduct deep research for report writing, organize your work with Projects, and build custom GPTs to automate tasks. What you will learn: How to leverage deep research to generate reports; How to create Projects in ChatGPT; An overview of GPTs and best practices for building them.
Who: Juliann Igo, GTM, OpenAI.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This workshop will introduce participants to a framework of “Community News Roles” developed by the Journalism + Design Lab, which reframes journalism as a set of actions — such as documenting, sensemaking, facilitating, and navigating — that people fulfill every day to contribute to the flow of local news.
Who: Cole Goins is the Managing Director of the Journalism + Design Lab; Megan Lucero is the Network Lead for the Journalism + Design Lab.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: This webinar brings together leaders from The Post & Courier, The Miami Herald, and Illinois Answers Project/Better Government Association to share practical, real-world strategies for building event-based philanthropic funding from the ground up.
Who: Claire Linney, VP of Development, The Post & Courier; Jane Wooldridge, formerly Senior Director for Journalism Sustainability and Partnerships, The Miami Herald; Amber Bel’cher, VP of Development, Better Government Association
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Local Media Association Lab for Journalism Funding
What: We will explore how to turn archives into structured, machine-readable datasets, unlocking entirely new value in AI markets. Key takeaways: Structured content is the real asset; Archives hold untapped value; Quality and provenance matter more than volume; The shift from scraping to licensing is redefining publisher leverage.
Who: Brooke Hartley Moy, CEO and Founder, Infactory; Mary Liz McCurdy, SVP of Strategic Partnerships and Business Development, The Atlantic; Ezra Eeman, Lead, AI in Media, WAN-IFRA; Kevin Anderson, Director of the Digital Revenue Network, WAN-IFRA.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: World Association of News Publishers
What: We will challenge two of the industry’s most persistent assumptions: that publishers need entirely new products to reach young audiences, and that younger consumers simply will not pay for news and journalism. Join this session for an inside look at what Podme learned while building a subscription audience — and what those lessons reveal about how to create journalism that younger audiences see as worth paying for.
Who: Kristin Ward Heimdal, Managing Director/Editor-in-Chief, Podme.
When: 10 am, Eastern
Where: Zoom
Cost: Free to members
Sponsor: International News Media Association
What: We will explore the trends that are already changing how people find jobs and how jobs find people, stretch them forward, and ask what they might mean for someone building a career in research, industry, or both. Expect honest speculation, practical takeaways, and a few uncomfortable questions about how you present yourself in a world that's increasingly automated.
Who: Erik Fors-Andrée, VP and founder of Go Monday, one of Sweden's largest suppliers of counseling on working life and career.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Karolinska Institute
What: What journalists and their audiences need to know, including: changes in voter list maintenance and what they mean for election coverage; trends among the elections workforce; how reporters can understand and analyze the data that comes in rapidly on Election Day; questions reporters can ask before election results come in; and how to cover major policy proposals in a way that cuts through the noise.
Who: Wren Orey, director of the Bipartisan Policy Center’s Elections Project.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National Press Club Journalism Institute; Bipartisan Policy Center
What: This webinar is about the new Future of Work Reporting Fellowship to support journalists telling essential stories: how education, workforce development, and the innovation economy intersect in real communities, and what it means for people's lives and livelihoods. Join this launch webinar to learn more about how to apply for the opportunity and support the fellowship.
Who: Elyse Ashburn, Co-Founder & Editor, Work Shift; Paul Fain, Co-Founder & Editor, Work Shift; Shalin Jyotishi, Founder & Director, Future of Work & Innovation Economy initiative, New America; Carol Rava, Vice President for Education Philanthropy, Ascendium Education Group.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New America
What: You’ll Learn: Why investment in brand channels and creator partnerships has grown exponentially; Best practices from top brand marketers and creator experts across all three engagement areas; How to unify every YouTube touchpoint so your audience receives one clear, compelling message.
Who: Lauren Bane, Senior Digital Marketing Manager, Brand Impact, Patagonia; Jamie Gutfreund, Founder, Creator Vision; Matt Duffy, CMO, Pixability.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Pixability
What: We’ll move past the AI hype to show you exactly how smart automation can act as a “digital coordinator” for your team. We will show you how organizations much like yours are growing their volunteer base, delivering more services and doing it all without asking too much from their dedicated and loyal team.
Who: Jim Schwab, Volunteer Systems Consultant at Rosterfy.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: CharityVillage
What: We will explore the essential skills editors need to lead in today’s faster, more complex and more visible newsroom environment. As editors take on expanded roles as strategists, coaches, technologists and guardians of public trust, success more than ever depends on clear priorities, strong audience awareness and sound judgment under pressure. This toolkit provides practical frameworks for understanding audiences, coaching reporters, making smart editorial decisions, using metrics and AI responsibly, and building sustainable newsroom systems.
Who: Allison Petty, director, local news, Lee Enterprises; Chris Coates, senior director, local news, Lee Enterprises.
When: 2 pm, Eastern
Where: Zoom
Cost: $35
Sponsor: Online Media Campus
What: We will share how AI-powered role-plays, real-time coaching, and feedback are reshaping how organizations develop managers. We’ll explore why this is such a pivotal moment for leadership development, how AI coaching drives consistency and supports managers in the flow of work, and walk through a live demonstration of Tenor's voice AI coaching for performance development, followed by Q&A.
Who: Tenor Co-Founder James Cross.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Tenor
What: Please bring your puzzling and perplexing copyright questions.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Association of Southeastern Research Libraries
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved