Managing Yourself

If you understand how you think and work, you have more control over who you will become. Abilities can improve as you understand how your mind works.

Creative and critically thinking people open a conversation with themselves that allows them to understand, control, and improve their own minds and work.

Ken Bain, What the Best College Students Do

The Potential of AI using Liquid Neural Networks

Large language models like ChatGPT and DALL-E have billions of parameters, and each improved model increases in size and complexity. Researchers at an MIT lab believe artificial intelligence can make a leap forward by going smaller. Their experiments show liquid neural networks beat other systems when navigating in unknown environments. “Liquid neural networks could generalize to scenarios that they had never seen, without any fine-tuning, and could perform this task seamlessly and reliably.” They also open the proverbial black box of the system’s decision-making process, which could help to root out bias and other undesirable elements in an AI model. The results have immediate implications for robotics, navigation systems, and smart mobility, and beyond that, for predicting financial and medical events. Read more here.

When resting means death

Two climbers died in a weekend snowstorm on Mount Rainier. The men carried warm clothes, sleeping bags, tents, and other items. They had everything they needed to save their lives. But instead of using what they had brought with them to survive, they first sat down to rest—where they died of exposure.

The climb can be tough. In those desperate moments when exhaustion overwhelms us, we have to use the tools at our disposal so our rest will not be in vain.

Stephen Goforth

Loneliness and Giving

Most of us, driven by our own aching needs and voids, address life and other people in the stance of seekers. We become what C.S. Lewis, in his book The Four Loves, calls “…those pathetic people who simply want friends and can never make any. The very condition of having friends is that we should want something else besides friends.”

Most of us know our need to be loved and try to seek the love that we need from others. But the paradox remains uncompromised; if we seek the love which we need, we will never find it. We are lost.

Love can effect the solution of our problems, but we must face the fact that to be loved, we must become lovable. When a person orients his life toward the satisfaction of his own needs, when he goes out to seek the love which he needs, no matter how we try to soften our judgments of him, he is self-centered. He is not lovable, even if he does deserve our compassion. He is concentrating on himself, and as long as he continues to concentrate on himself, his ability to love will always remain stunted and he will himself remain a perennial infant.

If, however, a person seeks not to receive love, but rather to give it, he will become lovable and he will most certainly be loved in the end. This is the immutable law under which we live: concern for ourselves and convergence upon self can only isolate self and induce an even deeper and more torturous loneliness. It is a vicious and terrible cycle that closes in on us when loneliness, seeking to be relieved through the love of others, only increases. The only way we can break this cycle formed by our lusting egos is to stop being concerned with ourselves and to begin to be concerned with others.

John Powell, Why Am I Afraid to Love?

Adobe’s ‘Firefly’ Joins the Generative AI Fireworks Show

Adobe’s generative AI model Firefly will create new images, text effects, and video from user descriptions. The program borrows from other Adobe programs: Express, Photoshop, and Illustrator. Like DALL-E 2 and Stable Diffusion, Firefly generates images from text prompts, but Adobe hopes to avoid some of the legal entanglements by training the AI on its own collection of images (Adobe Stock). The company hasn’t indicated how much it will cost to use Firefly; for the moment, it remains free and in beta. Text-based video editing is also being integrated into Adobe Premiere Pro.

Sabotaging Yourself

From time to time a project will come along that seems so big and challenging that you start to question your ability to succeed. It could be as epic as writing a book or directing a major motion picture, or it could be something more pedestrian, like passing a final exam or delivering an important speech to your corporate boss. Naturally, some doubts will float through your mind whenever failure is possible.

Sometimes, when the fear of failure is strong, you use a technique psychologists call self-handicapping to change the course of your future emotional state. Self-handicapping behaviors are investments in a future reality in which you can blame your failure on something other than your ability.

You might wear inappropriate clothes to a job interview, or… stay up all night drinking before work – you are very resourceful when it comes to setting yourself up to fail. If you succeed, you can say you did so despite terrible odds. If you fall short, you can blame the events leading up to the failure instead of your own incompetence or inadequacy.

When you see your performance in the outside world as an integral part of your personality, you are more likely to self-handicap. Psychologist Philip Zimbardo told the New York Times in 1984, “Some people stake their whole identity on their acts. They take the attitude that ‘if you criticize anything I do, you criticize me.’ Their egocentricity means they can’t risk a failure because it’s a devastating blow to their ego.”

David McRaney, You Are Not So Smart

The Base Rate Fallacy can get you in trouble

The Base Rate Fallacy comes into play when someone draws a conclusion without considering all the relevant information. There’s a tendency to overestimate the value of new information taken out of context. Consider also that an accurate test is not necessarily a very predictive test. And some facts are provably true but can nevertheless feel false when phrased a certain way. These factors can lead someone to hold misconceptions about what medical tests and other data actually mean.
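A quick illustrative calculation (not from the original piece; the numbers here are hypothetical) shows why an accurate test is not necessarily a predictive one when the base rate is low:

```python
# Illustrative Bayes calculation: a highly accurate test can still be
# weakly predictive when the condition it screens for is rare.

def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = base_rate * sensitivity              # truly sick, test positive
    false_pos = (1 - base_rate) * (1 - specificity)  # healthy, test positive
    return true_pos / (true_pos + false_pos)

# A test that is 99% accurate (sensitivity and specificity both 0.99),
# applied to a condition affecting 1 in 1,000 people:
ppv = positive_predictive_value(0.001, 0.99, 0.99)
print(f"{ppv:.1%}")  # prints 9.0% - most positives are false positives
```

Ignoring the 1-in-1,000 base rate and assuming a positive result means a 99% chance of illness is exactly the fallacy the article describes.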

A healthy balance between model building and data gathering

Too much theory without data, and speculations run amok. We get lost in a fog of models and idealizations that seldom have much to say about the world we live in. The maps invent all sorts of worlds and tell us very little about the world we live in, leaving us to get lost in fantasy. With too much data and no theory, though, we drown in confusion. We don’t know how to tell the story we are supposed to tell. We hear all sorts of tales about what is out there in the wilderness, but we don’t know how to chart the best path to reach our destination. The better the balance between speculative thinking and data gathering, the healthier the science that comes out.  

Marcelo Gleiser writing in BigThink

When Habits Imprison Us

Like the professor who sticks to a daily routine of a quiet supper, an evening walk, and early to bed, we all need space in our lives where unthinking habits relieve us of deciding simple tasks. By finding comfort in his sedentary home life, the professor provides thinking room to explore creative ideas in his field.

When we do the same, these daily habits can be critical in providing us with needed balance and continuity. However, when the routine becomes an end in itself, maintaining our cherished inconsequential details can become a way to avoid life's bigger issues as we neglect the needs of others. The box we build (and hide within) keeps us away from the things that refresh our spirits and give our lives meaning.

Stephen Goforth

11 Newer Social Media Networks of Note

Artifact - discuss news stories.

BeReal - photo-sharing app.

Bluesky - a decentralized Twitter alternative (Android and invite only for now)

Discord - for playing video games with fellow gamers.

Gobo - switch between networks in the app, developed by the MIT Media Lab (May 2023).

Letterboxd - an app for film enthusiasts to share their opinions.

Mastodon - a Twitter clone sliced into communities.

Minus - users make only 100 posts on their timeline for life.

Nextdoor - for neighbors to talk about crime & potholes.

Nostr - focused on giving users control over their content and the communities they engage with.

Truth Social - a social network for conservatives started by Trump.

 

Why you make terrible life choices


You seek evidence that confirms your beliefs because being wrong sucks. Being wrong means you’re not as smart as you thought. So you end up seeking information that confirms what you already know.

When you walk into every interaction trying to prove yourself right, you’re going to succumb to confirmation bias – the human tendency to seek, interpret and remember information that confirms your own pre-existing beliefs.

Researchers studied two groups of children in school. The first group avoided challenging problems because they came with a high risk of being wrong. The second group actively sought out challenging problems for the learning opportunity, even though they might be wrong. The researchers found that the second group consistently outperformed the first.

Focus less on being right and more on experiencing life with curiosity and wonder. When you’re willing to be wrong, you open yourself up to new insights.

Lakshmi Mani

6 Ethical Questions to Think about if you use Generative AI

1. An image recently won one of the world’s most prestigious photography competitions.

The artist said it was “co-produced by the means of AI (artificial intelligence) image generators.” He wrote, “Just as photography replaced painting in the reproduction of reality, AI will replace photography. Don’t be afraid of the future. It will just be more obvious that our mind always created the world that makes it suffer.”

Do you agree? What role should AI have in the creation of images, not only in contests but by those producing media for companies, schools, and even churches?

2. If a painting, song, novel or movie that you love was generated by an AI, would you want to know? Would it change your reaction if you knew the creator was a machine?  

3. Would it be ethical for a chatbot to write a PhD thesis, as long as the student looks over and makes refinements to the work? What percent of rewriting would be the minimum to make this acceptable?

4. Is it OK for AI to brainstorm ideas for projects or products that you later claim as your own? Would it change your answer if you came up with the original question? What if you fine-tuned some of the ideas? What if you give the AI some credit for helping you?

5. If you use AI and it plagiarizes an artist or writer, who should be blamed? Would your answer change if you were not aware the AI had committed the plagiarism? How might you prove that you were unaware?

6. How do you draw the ethical line for using a chatbot like ChatGPT? Would it be OK for writing an email to schedule a meeting? A sales pitch to a client? A religious sermon? A conversation in an online dating app? A letter to a friend going through depression?

There are more ethical questions for AI in this Wall Street Journal article.

Making people confirm our favored conclusions

Most of us have ways of making other people confirm our favored conclusions without ever engaging them in conversation. Consider this: To be a great driver, lover, or chef, we don’t need to be able to parallel park while blindfolded, make ten thousand maidens swoon with a single pucker, or create a pâte feuilletée so intoxicating that the entire population of France instantly abandons its national cuisine and swears allegiance to our kitchen. Rather, we simply need to park, kiss, and bake better than most other folks do. How do we know how well most other folks do? Why, we look around, of course—but in order to make sure that we see what we want to see, we look around selectively.

For example, volunteers in one study took a test that ostensibly measured their social sensitivity and were then told that they had flubbed the majority of the questions. When these volunteers were then given an opportunity to look over the test results of other people who had performed better or worse than they had, they ignored the test of the people who had done better and instead spent their time looking over the tests of the people who had done worse.

The bottom line is this: The brain and the eye may have a contractual relationship in which the brain has agreed to believe what the eye sees, but in return the eye has agreed to look for what the brain wants.

Daniel Gilbert, Stumbling on Happiness

Motivated by stress

I very much was a person who was motivated by stress; I would use a deadline as a motivator. I think a lot of people do that, where they're like, "I'll just wait until the last minute, and that'll light a fire underneath me and I'll get it done." And I just kept thinking, "Well, that's a terrible way to live. Why am I building a house and lighting a fire in the basement just to see if I can finish the roof before it burns down my whole house?"

Dan Deacon speaking to NPR

Keeping & Losing Friends

Are your friendships driven by your preferences or more by your social opportunities? It’s the latter, according to a study out of the Netherlands. Sociologist Gerald Mollenhorst interviewed more than 1,000 people, then interviewed them again seven years later. His finding: our personal networks are not formed solely based on personal choices.

Mollenhorst says you’ll have a turnover of about half of your closest friends at least every seven years. But don’t blame it on fickleness or disloyalty. Circumstances will play a major role in who stays in the inner circle as your favorite discussion partners and practical helpers. When parts of your friendship network move away or change jobs or have babies, you replace them. As you make life-changing decisions about marriage and divorce, your best mates will be determined largely by the happenstance surrounding the decision. 

Friends come and go. But you should hold on to some of them. Who makes you a better person just for hanging around with them? Who expands your world and helps you to define yourself better? It takes extra effort but hang on to these friends. They're worth it.

Stephen Goforth

A new approach to lie detection

Researchers from the University of Amsterdam's Leugenlab (Lie Lab) have developed a new approach to lie detection through a series of lab experiments.

Participants were free to use all possible signals—from looking people in the eye to looking for nervous behavior or a particularly emotional story—to assess whether someone was lying.

In this situation, they found it difficult to distinguish lies from truths, scarcely performing above chance. When instructed to rely only on the amount of detail (place, person, time, location) in the story, however, they were consistently able to discern lies from truths.

Bachelor's students from the UvA and Master's students from the UvA and the UM carried out data collection, control experiments and replication studies for the research in the context of their theses. 

Read more online at the University of Amsterdam