Learning to Make Good Decisions
The fact is that kids learn to make good decisions by making decisions, not by following directions. - Alfie Kohn
What: A look at the recent educational gag orders and anti-DEI legislation that have become law in several states.
Who: Jacqueline Allain, PEN America; Heidi Tseu, American Council on Education; Johnny Sparks, president of the Association of Schools of Journalism and Mass Communication; Del Galloway, president of the Accrediting Council on Education in Journalism and Mass Communications; Brian Butler, dean of the College of Communication and Information Sciences at The University of Alabama
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Association for Education in Journalism and Mass Communication
What: Ways to inform without hurting, to advocate without re-traumatizing, and to talk to people in pain in ways that may help them heal — versus leaving more agony in our wake.
Who: Krista Flannigan, OVC TTAC; Anastasiya Bolton, Victory Media; Coni Sanders, PFA Counseling; Adam Rhodes, IRE & NICAR
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Investigative Reporters and Editors (IRE) & the Society of Professional Journalists
What: An expert panel discussion exploring the intersection between digital policy issues and the First Amendment, the free speech implications of proposals to address online problems, and how lawmakers could address these problems without infringing on users' or companies' speech rights.
Who: Ashley Johnson, Senior Policy Manager, Information Technology and Innovation Foundation; Aaron Mackey, Free Speech and Transparency Litigation Director, Electronic Frontier Foundation; Kate Ruane, Director, Free Expression Project, Center for Democracy and Technology; Nicole Saad, Litigation Center Associate Director
When: 12 noon, Eastern
Where: Zoom
Cost: Free
Sponsor: Information Technology & Innovation Foundation
What: Discover how to leverage AI to transform the future of your marketing efforts. You’ll find out: How leveraging the right data can enrich your understanding of your customers; Why it’s essential to build a strong, AI-powered marketing foundation now; Strategies to stay ahead in a fast-paced landscape.
Who: Ericka Podesta McCoy, CMO of Resonate.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: AdWeek
What: This course is designed for reporters interested in getting started but with minimal or no knowledge of artificial intelligence. We will begin with the basics, covering the history of AI, how the technology works, and key technical concepts such as “neural networks” and “deep learning.” We will also dissect what makes a good AI accountability story, from quick turnaround stories to more ambitious investigations, and dig deeper into a few examples. At the end of the course, those who are interested in learning more are encouraged to register for the AI reporting intensive.
Who: Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society and a contributing writer at The Atlantic.
When: 3 am, Central
Where: Zoom
Cost: Free
Sponsor: The Pulitzer Center
What: By attending this class, you’ll learn: How to identify key sources on your new beat and develop relationships with them over time; How to find the authoritative voice on a complicated beat to get exclusives and drive coverage; How to use social media to identify new stories and find sources within your beat without having a huge following.
Who: Alexa Gagosz, The Boston Globe
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The New England Newspaper & Press Association
What: With millions of articles in the LexisNexis database, it can be easy to get lost. Knowing how to customize the database for your reporting purposes is key.
Who: Award-winning investigative reporter and editor Brad Hamilton
When: 11:30, Eastern
Where: Zoom
Cost: Free
Sponsor: The National Press Club’s Journalism Institute
What: This discussion will dive into the Instagram for Business interface and look at different parts of the analytics data offered and what you can do with the information.
Who: Sarah DeGeorge, a digital marketing specialist who works in paid and organic marketing, public relations, and social media marketing and management
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Temple University
What: Advanced social media tips and tricks: elevate your social media presence through micro strategies and activate your advocates.
Who: Kiersten Hill, Director of Nonprofit Solutions
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: FireSpring
What: Understanding both the threats and the potential benefits of AI in ensuring reliable research outcomes. Examining the interplay between technology and human resources in maintaining research integrity. Recognizing the crucial role libraries play in fostering and upholding research integrity. Discovering essential resources that aid research integrity efforts.
Who: Chris Graf, Research Integrity Director at Springer Nature
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Springer Nature
What: In this hands-on workshop on prompt writing best practices, we’ll start with a short presentation with tips, then examples and plug-and-play exercises on writing prompts for ChatGPT, Gemini and Claude.ai. We’ll discuss ethics, legal issues, and more along the way. We’ll cover how to write prompts that prevent hallucinations with AI tools, and how to train ChatGPT to present information in the format you want.
Who: Mike Reilley, Senior Lecturer, University of Illinois-Chicago
When: 2 pm, Eastern
Where: Zoom
Cost: Free for members, $25 for nonmembers
Sponsor: Online News Association
What: This course is designed for reporters interested in getting started but with minimal or no knowledge of artificial intelligence. We will begin with the basics, covering the history of AI, how the technology works, and key technical concepts such as “neural networks” and “deep learning.” We will also dissect what makes a good AI accountability story, from quick turnaround stories to more ambitious investigations, and dig deeper into a few examples. At the end of the course, those who are interested in learning more are encouraged to register for the AI reporting intensive.
Who: Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society and a contributing writer at The Atlantic; Gabriel Sean Geiger is an Amsterdam-based investigative journalist specializing in surveillance and algorithmic accountability reporting.
When: 9 am, Central
Where: Zoom
Cost: Free
Sponsor: The Pulitzer Center
The dos and don’ts of campaigning with AI – Washington Post
Nervous about falling behind the GOP, Democrats are wrestling with how to use AI — Associated Press
Deepfakes of Bollywood stars spark worries of AI meddling in India election – Reuters
AI sharpens political targeting in US presidential race – Voice of America
An A.I. Researcher Takes On Election Deepfakes – New York Times
What is propaganda? What's a deep fake? And can they influence elections? – Tennessean
In Arizona, election workers trained with deepfakes to prepare for 2024 - Washington Post
Political operative and firms behind Biden AI robocall sued for thousands - The Guardian
‘Inflection point’: AI meme wars hit India election, test social platforms – Al Jazeera
Election disinformation takes a big leap with AI being used to deceive worldwide – Associated Press
With elections looming worldwide, here’s how to identify and investigate AI audio deepfakes – Harvard’s Nieman Lab
Underdog Who Beat Biden in American Samoa Used AI in Election Campaign – Wall Street Journal
AI call quiz: see if you can spot the sham audio of Trump and Biden – The Guardian
Fake images made to show Trump with Black supporters highlight concerns around AI and elections – Associated Press
How AI-generated disinformation might impact this year’s elections and how journalists should report on it – Reuters Institute
AI will shake up democracy — for better or worse – San Francisco Chronicle
FBI warns that foreign adversaries could use AI to spread disinformation about US elections - Washington Post
AI Threatens Elections by Capitalizing on Human Foibles, Officials Warn – Wall Street Journal
We must not give unconditional obedience to the voice of Eros when he speaks most like a god. The real danger seems to me not that the lovers will idolize each other but that they will idolize Eros himself. The couple whose marriage will certainly be endangered by (lapses), and possibly ruined, are those who have idolized Eros. They expected that mere feeling would do for them, and permanently, all that was necessary. When this expectation is disappointed, they throw the blame on Eros or, more usually, on their partners.
C.S. Lewis, The Four Loves
66% of leaders wouldn't hire someone without AI skills, report finds - ZDNET
Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry – Futurism
New AI and Large Language Model Tools for Journalists: What to Know - Global Investigative Journalism Network
AI is disrupting the local news industry. Will it unlock growth or be an existential threat? – Poynter
How Generative AI Is Helping Fact-Checkers Flag Election Disinformation, But Is Less Useful in the Global South – Global Investigative Journalism Network
AI-generated news is here from SF-based Hoodline. What will that mean? - San Francisco Chronicle
News industry divides over AI content rights - Axios
8 major newspapers join legal backlash against OpenAI, Microsoft – Washington Post
The business of news in the AI economy – Wiley Online Journal
How AI-generated disinformation might impact this year’s elections and how journalists should report on it – Reuters Institute
AI is already reshaping newsrooms, AP study finds - Poynter
AI news that’s fit to print: The New York Times’ editorial AI director on the current state of AI-powered journalism – Harvard’s Nieman Lab
Watermarks are Just One of Many Tools Needed for Effective Use of AI in News – Innovating
We’re not ready for a major shift in visual journalism - Poynter
Axios Sees A.I. Coming, and Shifts Its Strategy – New York Times
Newsweek is making generative AI a fixture in its newsroom - Harvard’s Nieman Lab
Your newsroom needs an AI ethics policy. Start here. – Poynter
Is AI about to kill what’s left of journalism? – Financial Times
Pulitzer’s AI Spotlight Series will train 1,000 journalists on AI accountability reporting – Harvard’s Nieman Lab
AI newsroom guidelines look very similar, says a researcher who studied them. He thinks this is bad news – Reuters Institute
AI’s Most Pressing Ethics Problem – Columbia Journalism Review
Impact of AI on Local News Models – Local News Initiative
Love is the extremely difficult realization that someone other than oneself is real. –Iris Murdoch
Generative AI's refusal to produce ‘controversial’ content can create echo chambers – Fast Company
How I Built an AI-Powered, Self-Running Propaganda Machine for $105 – Wall Street Journal
AI can pretend to be stupider than it really is, scientists find – Futurism
Lab reveals how AI safety features can be easily bypassed - The Guardian
New York's AI chatbot tells people to break laws and do crimes - Quartz
Why can’t anyone agree on how dangerous AI will be? – Vox
US says leading AI companies join safety consortium to address risks – Reuters
Despite the AI safety hype, a new study finds little research on the topic – Semafor
Jon Stewart On The False Promises of AI (video) – The Daily Show
Ukraine's attacks on Russian oil refineries show the growing threat AI drones pose to energy markets – NBC Connecticut
AI deepfakes threaten to upend global elections. No one can stop them. – Washington Post
How Will Artificial Intelligence (AI) Affect Children? – Healthy Children
A National Security Insider Does the Math on the Dangers of AI – Wired
Could AI-generated content be dangerous for our health? – The Guardian
To understand the risks posed by AI, follow the money – The Conversation
Banks told to anticipate risks from using AI, machine learning – Reuters
The second most common misconception about love is the idea that dependency is love. Its effect is seen most dramatically in an individual who makes an attempt or gesture or threat to commit suicide or who becomes incapacitatingly depressed in response to a rejection or separation from a spouse or lover.
Such a person says, “I do not want to live, I cannot live without my husband (wife, girlfriend, boyfriend), I love him (or her) so much.” And when I respond, as I frequently do, “You are mistaken; you do not love your husband (wife, girlfriend, boyfriend).” “What do you mean?” is the angry question. “I just told you I can’t live without him (or her).” I try to explain. “What you describe is parasitism, not love. When you require another individual for your survival, you are a parasite on that individual. There is no choice, no freedom involved in your relationship. It is a matter of necessity rather than love. Love is the free exercise of choice. Two people love each other only when they are quite capable of living without each other but choose to live with each other.”
M. Scott Peck, The Road Less Traveled
How Meta, YouTube, TikTok and others label AI – Axios
AI Is Flooding Social Media. Here's How to Make Sure You Don't Get Lost in the Robotic Noise. – Entrepreneur
Does Generative AI Content Have a Place in Social Media? – SocialMediaToday
Meta's AI-everywhere push raises hackles - Axios
Facebook Says Sorry Its AI Flagged Auschwitz Museum Posts as Offensive – Futurism
More Generative AI Tools Are Coming to Social Apps — Is That a Good Thing? - SocialMediaToday
Meta debuts new AI assistant and chatbots - Axios
LinkedIn taps AI to make it easier for firms to find job candidates – Reuters
Slack’s New CEO Brings Generative AI to the Workplace Conversation - Wall Street Journal
LinkedIn expands its generative AI assistant to recruitment ads and writing profiles – Tech Crunch
Instagram Experiments With Range of Generative AI Elements - SocialMediaToday
LinkedIn Says ChatGPT-Related Job Postings Have Ballooned 21-Fold Since November – Forbes
How to Use AI Tools to Easily Make Short-Form TikTok and Reels Videos – Tech.no
What happens when we train our AI on social media? – Fast Company
AI-generated images have become the latest form of social media spam – The Conversation
Meta’s AI chatbot is coming to social media. Misinformation may come with it. – Washington Post
Franck Schuurmans, a guest lecturer at the Wharton Business School at the University of Pennsylvania, has captivated audiences with explanations of why people make irrational business decisions. A simple exercise he uses in his lectures is to provide a list of 10 questions such as, “In what year was Mozart born?” The task is to select a range of possible answers so that you have 90 percent confidence that the correct answer falls in your chosen range. Mozart was born in 1756, so for example, you could narrowly select 1730 to 1770, or you could more broadly select 1600 to 1900. The range is your choice. Surprisingly, the vast majority choose correctly for no more than five of the 10 questions. Why score so poorly? Most choose too narrow bounds. The lesson is that people have an innate desire to be correct despite having no penalty for being wrong.
Gary Cokins
AI chatbots have thoroughly infiltrated scientific publishing. One percent of scientific articles published in 2023 showed signs of generative AI’s potential involvement, according to a recent analysis - Scientific American
The journey from research data generation to manuscript publication presents many opportunities where AI could, hypothetically, be used – for better or for worse. - Technology Network
Is ChatGPT corrupting peer review? There are telltale words that hint at AI use. A study of review reports identifies dozens of adjectives that could indicate text written with the help of chatbots. - Nature
Should researchers use AI to write papers? This group aims to release a set of guidelines by August, which will be updated every year - Science.org
Generative AI firms should stop ripping off publishers and instead work with them to enrich scholarship, says Oxford University Press’ David Clark. - Times Higher Ed
Here are three ways ChatGPT helps me in my academic writing. Generative AI can be a valuable aid in writing, editing and peer review – if you use it responsibly - Nature
New detection tools powered by AI have lifted the lid on what some are calling an epidemic of fraud in medical research and publishing. Last year, the number of papers retracted by research journals topped 10,000 for the first time. - DW News (video)
Estimating the prevalence of ChatGPT “contamination” in the scholarly literature: It is estimated that at least 60,000 papers (slightly over 1% of all articles) were LLM-assisted - arXiv
Have AI-generated texts from LLMs infiltrated the realm of scientific writing? We confirmed and quantified the widespread influence of AI-generated texts in scientific publications across many scientific domains - bioRxiv
Georgetown found that American scholarly institutions and companies are the biggest contributors to AI safety research, but that research pales in comparison to the overall volume of AI studies, raising questions about public and private sector priorities. - Semafor
Google Books is indexing low quality, AI-generated books that will turn up in search results, and could possibly impact Google Ngram viewer, an important tool used by researchers to track language use throughout history. - 404Media
The Association of Research Libraries announced a set of seven guiding principles for university librarians to follow in light of rising generative AI use. - Inside Higher Ed
The archetypal extrovert prefers action to contemplation, risk-taking to heed-taking, certainty to doubt. He favors quick decisions, even at the risk of being wrong. She works well in teams and socializes in groups. We like to think that we value individuality, but all too often we admire one type of individual—the kind who’s comfortable “putting himself out there.” Sure, we allow technologically gifted loners who launch companies in garages to have any personality they please, but they are the exceptions, not the rule, and our tolerance extends mainly to those who get fabulously wealthy or hold the promise of doing so. Extroversion is an enormously appealing personality style, but we’ve turned it into an oppressive standard to which most of us feel we must conform.
Susan Cain, Quiet: The Power of Introverts in a World that Can't Stop Talking
Student journalists are covering their own campuses in convulsion. Here’s what they have to say - Associated Press
Campus Protests Over Gaza Spotlight the Work of Student Journalists - New York Times
As protests surge across college campuses, student journalists report from the front lines - EdSurge
“Everything Felt Really Dystopian”: Columbia Student Journalists on the Front Lines of Gaza Protests - Vanity Fair
High praise for the student journalists at Columbia University - Poynter
Student journalists discuss covering the campus protests - PBS
Pulitzer Prize Board recognizes ‘tireless efforts’ of student journalists covering college protests - The Hill
Student journalists praised for coverage on campus Gaza war protests - Axios
You’ve got briers below you and limbs above you. There's a log to step across. Then a hole to avoid. They all slow you down. Will getting past those obstacles really be worth the effort? The path of adventure and self-definition can be punctuated with periods of intense loneliness and nagging doubt. There’s no guarantee about how it all ends.
Stephen Goforth
There are limited guardrails to deter politicians and their allies from using AI to dupe voters, and enforcers are rarely a match for fakes that can spread quickly across social media or in group chats. The democratization of AI means it’s up to individuals — not regulators — to make ethical choices to stave off AI-induced election chaos. – Washington Post
Adobe surveyed more than 2,000 people in the U.S., and 63% said they would be less likely to vote for someone who uses GenAI in their promotional content during an election. – Fast Company
Even a false-positive rate in the single digits will, at the scale of a modern social network, make tens of thousands of false accusations each day, eroding faith in the detector itself. - IEEE Spectrum
It took me two days, $105 and no expertise whatsoever to launch a fully automated, AI-generated local news site capable of publishing thousands of articles a day—with the partisan news coverage framing of my choice, nearly all rewritten without credit from legitimate news sources. I created a website specifically designed to support one political candidate against another in a real race for the U.S. Senate. And I made it all happen in a matter of hours.- Wall Street Journal
"Tools to detect AI-written content are notoriously unreliable and have resulted in what students say are false accusations of cheating and failing grades. OpenAI unveiled an AI-detection tool in Jan, but quietly scrapped it due to its “low rate of accuracy.” One of the most prominent tools to detect AI-written text, created by plagiarism detection company Turnitin.com, frequently flagged human writing as AI-generated, according to a Washington Post examination." – Washington Post
It’s important to remember that generative models shouldn’t be treated as a source of truth or factual knowledge. They surely can answer some questions correctly, but this is not what they are designed and trained for. It would be like using a racehorse to haul cargo: it’s possible, but not its intended purpose … Generative AI models are designed and trained to hallucinate, so hallucinations are a common product of any generative model … The job of a generative model is to generate data that is realistic or distributionally equivalent to the training data, yet different from actual data used for training. - InsideBigData
“No single tool is considered fully reliable yet for the general public to detect deepfake audio. A combined approach using multiple detection methods is what I will advise at this stage.” – PolitiFact
Too many educators think AI detectors are ‘a silver bullet and can help them do the difficult work of identifying possible academic misconduct.’ My favorite example of just how imperfect they can be: A detector called GPTZero claimed the US Constitution was written by AI. – Washington Post
Most deepfake audio detection providers “claim their tools are over 90% accurate at differentiating between real audio and AI-generated audio.” An NPR test of 84 clips revealed that the detection software often failed to identify AI-generated clips, or misidentified real voices as AI-generated, or both. - NPR
In a year when billions of people worldwide are set to vote in elections, AI researcher Oren Etzioni continues to paint a bleak picture of what lies ahead. “I’m terrified. There is a very good chance we are going to see a tsunami of misinformation.” – New York Times
Google appears to have quietly struck a deal with one of the most controversial companies using AI to produce content online: AdVon Commerce, the contractor linked to Sports Illustrated's explosive AI scandal. Google is trying to have it both ways: modifying its algorithms to suppress AI sludge while actively supporting attempts to create vastly more of it. – Futurism
Most online detection tools do not provide sufficient information about their development, making it difficult to evaluate and trust the detector results and their significance. - Global Investigative Journalism Network
Run some of your other writing dated before the arrival of ChatGPT in the fall of 2022 through an AI detector, to see whether any of it gets flagged. If it does, the problem is clearly the detector, not the writing. (It’s a little aggressive, but one student told me he did the same with his instructor’s own writing to make the point.) – Washington Post
Men are more confident (72%) than women (59%) in their ability to tell real news from fake news, according to new polling from the Ipsos Consumer Tracker. We see a similar gender gap when it comes to our perceived ability to tell whether content was created by AI. - Ipsos
A former high school athletic director was arrested after allegedly using AI to impersonate the school principal in a recording that included racist and antisemitic comments. The principal was temporarily removed from the school, and waves of hate-filled messages circulated on social media, while the school received numerous phone calls. – CBS News
AWS researchers have been experimenting with different computational methods, dubbed “model disgorgement,” to try to remove data that might lead to bias, toxicity, privacy violations, or copyright infringement. – Semafor
There are two premises that lead Moran Cerf, a neuroscientist at Northwestern University, to believe that the company we keep is the most important factor in long-term satisfaction.
The first is that decision-making is tiring. A great deal of research has found that humans have a limited amount of mental energy to devote to making choices. Picking our clothes, where to eat, what to eat when we get there, what music to listen to, whether it should actually be a podcast, and what to do in our free time all demand that our brains exert that energy on a daily basis.
The second premise is that humans falsely believe they are in full control of their happiness by making those choices. So long as we make the right choices, the thinking goes, we'll put ourselves on a path toward life satisfaction.
Cerf rejects that idea. The truth is, decision-making is fraught with biases that cloud our judgment. People misremember bad experiences as good, and vice versa; they let their emotions turn a rational choice into an irrational one; and they use social cues, even subconsciously, to make choices they'd otherwise avoid.
But as Cerf tells his students, that last factor can be harnessed for good.
His neuroscience research has found that when two people are in each other's company, their brain waves will begin to look nearly identical.
"This means the people you hang out with actually have an impact on your engagement with reality beyond what you can explain. And one of the effects is you become alike."
From those two premises, Cerf's conclusion is that if people want to maximize happiness and minimize stress, they should build a life that requires fewer decisions by surrounding themselves with people who embody the traits they prefer. Over time, they'll naturally pick up those desirable attitudes and behaviors. At the same time, they can avoid the mentally taxing low-level decisions that sap the energy needed for higher-stakes decisions.
Chris Weller writing in Business Insider
Imagine that you are preparing to go on a vacation to one of two islands: Moderacia (which has average weather, average beaches, average hotels, and average nightlife) or Extremia (which has beautiful weather and fantastic beaches but crummy hotels and no nightlife). The time has come to make your reservations, so which one would you choose? Most people pick Extremia.
But now imagine that you are already holding tentative reservations for both destinations and the time has come to cancel one of them before they charge your credit card. Which would you cancel? Most people choose to cancel their reservation on Extremia.
Why would people both select and reject Extremia? Because when we are selecting, we consider the positive attributes of our alternatives, and when we are rejecting, we consider the negative attributes.
Extremia has the most positive attributes and the most negative attributes, hence people tend to select it when they are looking for something to select and they reject it when they are looking for something to reject.
Of course, the logical way to select a vacation is to consider both the presence and the absence of positive and negative attributes, but that's not what most of us do.
Daniel Gilbert, Stumbling on Happiness
Meet the AI Expert Advising the White House, JPMorgan, Google and the Rest of Corporate America - Wall Street Journal
Meta Says It Plans to Spend Billions More on A.I. - New York Times
DeepMind CEO Says Google Will Spend More Than $100 Billion on AI – Bloomberg
Generative AI Is Changing the Hiring Calculus at These Companies – Wall Street Journal
Microsoft Makes a New Push Into Smaller A.I. Systems - New York Times
OpenAI prepares to fight for its life as legal troubles mount – Washington Post
Four Takeaways on the Race to Amass Data for A.I. – New York Times
AI is powering Google to a $2 trillion market cap – Quartz
Mistral, a French start-up considered a promising challenger to OpenAI and Google – New York Times
Humane releases its widely anticipated Ai Pin, a wearable badge that doubles as an AI-powered smart device – Tech Crunch
Tech Leaders Once Cried for AI Regulation. Now the Message Is ‘Slow Down’ - Wired
How Tech Giants Cut Corners to Harvest Data for A.I. – New York Times
Love seeks not only to fight for the good, but constantly to be reconciled with the ones we have had to oppose as we struggle for the good. -C. Stephen Evans
A starter guide to data structures for AI and machine learning
Understanding neuro-symbolic AI
Denoising Radar Satellite Images with Python Has Never Been So Easy
Transformer Neural Networks are “revolutionizing natural language processing”
Generative AI in Content Creation for data science, data engineering, & machine learning
Vector databases in AI and LLM use cases
14 Articles about AI & the US Military
The Pentagon wants to build thousands of easily replaceable, AI-enabled drones
Embedding AI to escalate geospatial
Geospatial Data Analysis using a Python library called Geemap for creating interactive maps
Researchers have seen neural networks discover novel solutions to problems by grokking them
Six examples of AI for parsing geospatial data
AI Definitions: Small Language Models
Applying the 6 steps of the INSPIRe framework to accelerate your code generation for LLMs
Why small language models are the next big thing in AI
The Math Behind Fine-Tuning Deep Neural Networks
Large language models are capable of feigning lower intelligence than they possess
10 top use cases for vector databases that generate organizational value
‘Lavender’: The AI machine directing Israel’s bombing in Gaza
The resurgence of vector databases has led to a challenge to graph and relational approaches
The math behind neural networks
Deep dive into Sora’s diffusion transformer by hand
Here are ten algorithms that are a great introduction to machine learning for any beginner
Becoming is a service of Goforth Solutions, LLC / Copyright ©2025 All Rights Reserved