What makes you come alive?
Don't ask yourself what the world needs. Ask yourself what makes you come alive, and go do that, because what the world needs is people who have come alive.
Behaving yourself as a child brings big rewards in adulthood. Researchers tracked more than 1,000 people from toddlerhood into their early 30s and found that the more self-control they showed as kids, the healthier, wealthier, and happier they were as grown-ups. By contrast, children who struggled to complete tasks and handle frustration without lashing out at their peers were more likely to be overweight, drug dependent, and ridden with debt as adults. The study’s authors say that self-control can be taught and nurtured with practice, and that no matter what a child’s circumstances, “good parenting can improve self-control and improve life success.”
The Week Magazine
Spend time with nice people who are smart, driven and like-minded. Relationships should help you, not hurt you. Surround yourself with people who reflect the person you want to be. Choose friends who you are proud to know, people you admire, who love and respect you – people who make your day a little brighter simply by being in it. Life is too short to spend time with people who suck the happiness out of you.
Renee Jones, read more here
Everyone is ignorant, only on different subjects.
AI chatbots were tasked to run a tech company. They built software in under seven minutes — for less than $1. – Business Insider
AI Has Already Created As Many Images As Photographers Have Taken in 150 Years. Statistics for 2023 – Every Pixel
A.I. Can’t Build a High-Rise, but It Can Speed Up the Job – New York Times
AI-generated books are infiltrating online bookstores - Axios
GenAI Is Making Data Science More Accessible - Datanami
Can AI summaries save you from endless virtual meetings? – Washington Post
Amazon is bringing a whole lot of AI to Thursday Night Football this season – The Verge
7 Projects Built with Generative AI by Data Scientists – KD Nuggets
The Novel Written about—and with—Artificial Intelligence – The Walrus
The IRS will use AI to crack down on wealthy potential tax violators - Axios
In certain circumstances, it's the poor who are more likely to cheat. The difference is that the rich do wrong to help themselves, while the poor do wrong to help others. In several experiments reported in an upcoming issue of the Journal of Personality and Social Psychology… the studies suggest a straightforward sequence: Money leads to the perception that one is higher in the social hierarchy, which in turn leads to a sense of power, which in turn leads to a greater willingness to cheat for selfish reasons.
People with less money (and therefore less power), however, are more communal. They need to rely on each other to get by, and as a result, research shows, they’re more compassionate and empathically accurate. Breaking rules is always risky, but social cohesion is paramount — so you do what it takes to help those around you.
The researchers think their findings could lead to some easy practical applications. If you’re speaking to higher-class individuals, you might want to appeal to their selfishness and warn that cheating will ultimately backfire. But when talking to those with fewer resources, you might be better off noting that their actions could harm those around them.
Matthew Hutson, New York Magazine
Scientific sleuths spot dishonest ChatGPT use in papers – Nature
AI poses risks to research integrity, universities say – Research Professional News
No, ChatGPT Can’t Be Your New Research Assistant – London School of Economics
Useful applications of AI in higher education – for which no specialist tech knowledge is needed – Times of Higher Ed
Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots – JAMA Network
How A.I. systems can accelerate scientific research – New York Times
Publishers seek protection from AI mining of academic research – Times Higher Ed
Artificial-intelligence search engines wrangle academic literature – Nature
AI can crack double blind peer review – should we still use it? – London School of Economics
Fabrication and errors in the bibliographic citations generated by ChatGPT – Nature
When we are locked into imperative thinking, we hold our absolute conviction so tightly that we have little or no recognition of our choice to say no! Obligation becomes our driving force. Relationships with other people and our responsibilities to them then become matters of dread, resentment, guilt.
Our need for a structured, orderly life can be so powerful that we refuse to make allowances for choices. To us, circumstances are either black or white. Once we settle upon a conviction or preference, we feel rigidly obligated to abide by it, with little variation.
Imperative people are almost afraid to allow for the luxury of choices. We feel the need to minimize our risks by sticking to the rules that we have made for ourselves.
Les Carter, Imperative People: Those Who Must Be in Control
Cunningham’s Law is the observation that the best way to get a good or right answer is not to ask a question; it’s to post a wrong answer. So, if you want to know the leading causes of World War I, go to a history forum and post, “World War One was entirely caused by the British.” Sit back, grab some popcorn, and wait for the angry — yet probably informed — corrections to come flying in.
Socrates did a lot of this. He would sit on some public bench and talk to whoever happened to sit next to him, often opening his dialogues by presenting a false or deeply flawed argument and going from there. He would ironically agree with whatever his partner said, then raise a seemingly innocuous question to challenge that position.
“Socratic irony” is where you pretend to be ignorant of something so you can get greater clarity about it. In short, it’s a lot like Cunningham’s Law.
Here are two ways you can use Cunningham’s Law:
The Bad Option: Have you ever been in a group where no one can settle on a decision, and so you hover about in an awkward, polite limbo? “What restaurant shall we go to?” gets met with total silence. Instead, try saying, “Let’s go to McDonald’s” and see how others object and go on to offer other ideas.
The Coin Toss: If you’re unsure about any life decision — like “should I read this book or that book next?” or “Should I leave my job or not?” — do a coin toss. Heads you do X, tails you do Y. You are not actually going to live by the coin’s decision, but you need to make a note of your reaction to whatever outcome came of it. Were you upset at what it landed on? Are you secretly relieved? It’s a good way to elicit your true thoughts on a topic.
Jonny Thomson writing in BigThink
If AI becomes conscious: here’s how researchers will know - Nature
AI & Internet’s Existential Crisis - OM
Large language models aren’t people. Let’s stop testing them as if they were. - MIT Tech Review
Author Talks: In the ‘age of AI,’ what does it mean to be smart? - McKinsey
Why humans will never understand AI - BBC
Does an AI poet actually have a soul? - Washington Post
Is AI Eroding Our Ability To Think? - Forbes
The future of accelerating intelligence - The Kurzweil Library
M.F.A. vs. GPT How to push the art of writing out of a computer’s reach - The Atlantic
What Stephen King — and nearly everyone else — gets wrong about AI and the Luddites - LA Times
How the AI Revolution Will Reshape the World - TIME
Be intentional about learning what the other person wants to communicate and respond to their feelings.
Listen to what they’re telling you and suppress the urge to fix the issue, problem solve, or change the way they are feeling about the situation.
Put your own feelings aside to create a space where another person can speak his or her mind—which requires staying calm.
Suspending judgment and simply taking in what is being said can go a long way toward helping someone feel heard or defusing an argument.
Show that you are actively listening and are truly understanding what the other person is saying by mirroring back what someone has said. Include phrases like ‘it sounds like’ or ‘it seems like.’
Take the time for silence in a discussion, showing that you’re processing what is being talked about and giving it the space that it needs to sink in properly.
Edited from Jeremy Brown writing in Fatherly
Resentment is like drinking poison and waiting for the other person to die. - Saint Augustine
AI Startup Buzz Is Facing a Reality Check – Wall Street Journal
Nearly 20% of the world's top 1,000 websites are blocking crawler bots that gather data for AI services – Originality.AI
Prediction: AI will add $4.4 trillion to the global economy annually – New York Times
Behind the AI boom, an army of overseas workers in ‘digital sweatshops’ – Washington Post
How ChatGPT Kicked Off an A.I. Arms Race – New York Times
How ChatGPT became the next big thing - Axios
OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic - TIME
The state of AI in 2023: Generative AI’s breakout year – McKinsey
Microsoft confirms it’s investing billions in the creator of ChatGPT - CNN
What to know about OpenAI, the company behind ChatGPT - Washington Post
AI is entering an era of corporate control - The Verge
Inside Meta's scramble to catch up on AI - Reuters
Immigrants play outsize role in the AI game - Axios
Apple Is an AI Company Now - The Atlantic
Websites That Have Blocked OpenAI’s GPTBot, CCBot, and Anthropic: a 1,000-Website Study - Originality.ai
What OpenAI Really Wants - Wired
Rise above the drama.
Ideogram is an excellent AI image generator that can also render text. While MidJourney produces higher-quality images, Ideogram is easier for beginners: it has a simple interface and is free.
If you are having trouble getting motivated to finish a project, consider the possibility that finishing that report (or whatever your project involves) means facing a void. The project is a distraction so that you don't have to see the emptiness outside of it. You slow down the completion until another project emerges to play the role of another distraction. You’re putting off looking at uncomfortable truths about yourself.
While in the midst of a deadline-driven project, you feel like you have a clear identity because your purpose is defined by the project's needs. But if the project were removed from your life, would you have justification for thinking of yourself as someone of value? Is your worth bound up in your projects?
So it is with serious relationships, where someone provides a sense of purpose, giving definition and a sense of worth.
If you were forced to sit down and write out the definition of who you are without the benefit of a title (manager, employee, project manager) or relationship (wife, girlfriend, mother) would you lack the means to define yourself?
A suggestion: Spend time doing things that allow you to center yourself. Give yourself downtime to listen. Whatever brings you to stillness will put you in a good position to allow the transition to take hold and internalize it so you don’t miss the opportunity to make a paradigm shift toward greater emotional and spiritual health. Allow yourself to just "be" and reconnect with the world around you (its sounds, smells, tastes, touches, and sights).
Stephen Goforth
The first duty of love is to listen- Paul Tillich
You cannot fully unleash your genius in the three-minute increments you have between distractions. Unfortunately, for many of us distraction has become a habit — one that has been so often and routinely reinforced that it is extremely difficult to break. Persuasive technology — technology that uses sophisticated techniques from behavioral psychology to “persuade” us to keep engaging with it — exacerbates the problem. So, over time, as our habit gains strength, we go looking for distraction. When things get quiet, or a task gets boring or frustrating, we reach for our phones.
Maura Thomas writing in the Harvard Business Review
Find people who are conduits through which you can better understand yourself and your experiences ... as you play the same role for them. - Stephen Goforth
Algorithms – Direct, specific instructions, created by a human through coding, that tell a computer how to perform a task.
The code follows the algorithmic logic of “if”, “then”, and “else.” An example of an algorithm would be:
IF the customer orders size 13 shoes,
THEN display the message ‘Sold out, Sasquatch!’;
ELSE ask for a color preference.
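The shoe-order rule above can be sketched as a few lines of Python (the function name and messages here are just illustrations, not part of any real system):

```python
def handle_order(size):
    # Rule-based logic: the program does exactly what its rules say,
    # nothing more. A human wrote every branch.
    if size == 13:
        return "Sold out, Sasquatch!"
    else:
        return "What color would you like?"

print(handle_order(13))  # Sold out, Sasquatch!
print(handle_order(9))   # What color would you like?
```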
Besides rule-based algorithms, there are machine-learning algorithms used to create AI. In this case, the data and the goal are given to the algorithm, which works out for itself how to reach the goal.
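To see the contrast, here is a toy machine-learning sketch in Python: instead of being handed the rule, the algorithm is handed labeled examples and derives a cutoff itself. The data and the midpoint method are invented purely for illustration.

```python
def learn_threshold(examples):
    # Machine-learning style: we supply data and a goal (separate
    # "in stock" from "sold out") and the algorithm finds the rule.
    sold_out = [size for size, label in examples if label == "sold out"]
    in_stock = [size for size, label in examples if label == "in stock"]
    return (max(in_stock) + min(sold_out)) / 2

data = [(8, "in stock"), (9, "in stock"), (10, "in stock"),
        (12, "sold out"), (13, "sold out")]
threshold = learn_threshold(data)
print(threshold)  # 11.0 -- no human ever wrote "11" into the code
```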
There is a popular perception that algorithms provide a more objective, more complete view of reality, but they often will simply reinforce existing inequities, reflecting the bias of creators and the materials used to train them.
Artificial Intelligence (AI) – Basically, AI means “making machines intelligent,” so they can make some decisions on their own without the need for human intervention.
The phrase was coined in a research proposal written in 1956. The current excitement about the field was kick-started in 2012 by an online contest called the ImageNet Challenge, in which the goal was getting computers to recognize and label images automatically.
Big Data – This is data that’s too big to fit on a single server.
Typically, it is unstructured and fast-moving. In contrast, small data fits on a single server, is already in structured form (rows and columns), and changes relatively infrequently. If you are working in Excel, you are doing small data. Two NASA researchers (Michael Cox and David Ellsworth) first wrote in a 1997 paper that when there’s too much information to fit into memory or local hard disks, “We call this the problem of big data.”
Generative AI – Artificial intelligence that can produce content (text, images, audio, video, etc.) such as ChatGPT.
It operates similarly to the “type ahead” feature on smartphones that makes next-word suggestions. Gen AI is based on the particular content it was trained on (exposed to).
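The type-ahead idea can be sketched as a simple word-frequency model in Python. This is a toy illustration of next-word prediction, not how production systems work; they use neural networks trained on vastly more data.

```python
from collections import Counter, defaultdict

def train(text):
    # Count which word follows which in the training text.
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(model, word):
    # Suggest the most frequent next word, like a phone's type-ahead.
    return model[word].most_common(1)[0][0] if model[word] else None

model = train("the cat sat on the mat the cat ran")
print(suggest(model, "the"))  # cat
```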
GPT – The “GPT” in ChatGPT stands for Generative Pre-Trained Transformer.
Hallucinations – When an LLM produces responses that sound plausible and authoritative but are inaccurate or not based on facts.
Large Language Models (LLMs) – AI trained on billions of language uses, images and other data. It can predict the next word or pixel in a pattern based on the user’s request. ChatGPT and Google Bard are LLMs.
The kinds of text LLMs can parse out:
Grammar and language structure.
How a word is used in language (noun, verb, etc.).
Word meaning and context (ex: The word green may mean a color when it is closely related to a word like “paint,” “art,” or “grass.”)
Proper names (Microsoft, Bill Clinton, Shakira, Cincinnati).
Emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).
Machine learning (ML) – AI that spots patterns and improves on its own.
An example would be an algorithm recommending ads, which become more tailored the longer it observes a user’s habits (clicks, likes, time spent, etc.).
Data scientists use ML to make predictions by combining ML with other disciplines (like big data analytics and cloud computing) to solve real-world problems. However, while this process can uncover correlations between data, it doesn’t reveal causation. It is also important to note that the results provide probabilities, not absolutes.
Neural Network – In this type of machine learning, computers learn a task by analyzing training examples. It is modeled loosely on the human brain—the interwoven tangle of neurons that processes data and finds complex associations.
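A single artificial neuron can be sketched in a few lines of Python. The weights and bias below are made up for illustration; in a real network, training discovers them from example data, and many such neurons are chained together.

```python
def neuron(inputs, weights, bias):
    # One artificial "neuron": a weighted sum of its inputs passed
    # through a simple on/off activation. Networks chain millions
    # of these, adjusting the weights during training.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

print(neuron([0.5, 0.2], [0.8, -0.4], -0.1))  # 1 (fires)
print(neuron([0.0, 0.0], [0.8, -0.4], -0.1))  # 0 (does not fire)
```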
Neural networks were first proposed in 1944 by two University of Chicago researchers (Warren McCulloch and Walter Pitts) who moved to MIT in 1952 as founding members of what’s sometimes referred to as the first cognitive science department. Neural nets were a major area of research in both neuroscience and computer science until 1969. The technique then enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.
Open Source AI – When the source code of an AI is available to the public, it can be used, modified, and improved by anyone. Closed AI means access to the code is tightly controlled by the company that produced it.
The closed model gives users greater certainty as to what they are getting, but open source allows for more innovation. Open-source AI would include Stable Diffusion, Hugging Face, and Llama (created by Meta). Closed-source AI would include ChatGPT and Google’s Bard.
Prompts – Instructions for an AI. It is the main way to steer the AI in a particular direction, indicate intent, and offer context. It can be time-consuming if the task is complex.
Prompt Engineer – An advanced user of AI models, a prompt engineer doesn’t possess special technical skills but is able to give clear instructions so the AI returns results that most closely match expectations.
This skill can be compared to a psychologist who is working with a client who needs help expressing what they know.
Red Teaming – Testing an AI by trying to force it to act in unintended or undesirable ways, thus uncovering potential harms.
The term comes from a military practice of taking on the role of an attacker to devise strategies.
While some of these definitions are a bit of an oversimplification, they will point the beginner in the right direction. -Stephen Goforth
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved