Think about what you did this morning. You woke up, brushed your teeth, made coffee, maybe scrolled your phone, maybe drove the same route to work that you do every day. How many of those things did you actually think about? According to new research, probably none. Scientists say nearly nine out of every ten daily actions happen on autopilot, with our brains running the show long before conscious thought gets involved.
The study, published in Psychology & Health, tracked 105 people for a week. Participants were pinged six times a day and had to report what they were doing, along with how deliberate or automatic it felt.
Across more than 3,700 reports, researchers found that 88 percent of behaviors were carried out automatically, while about two-thirds were triggered by habit rather than decision-making.
Lead researcher Amanda Rebar, an associate professor at the University of South Carolina, explained that this automation shows up in two ways.
"Habitual instigation occurs when environmental cues automatically trigger the decision to do something, like reaching for your phone when you hear a notification. Habitual execution happens when you perform an action smoothly without thinking about the mechanics, such as brushing your teeth or driving a familiar route," she said in a statement.
Most people like to imagine themselves as rational actors, carefully weighing each choice they make. In practice, the study shows, life is closer to a string of well-worn loops. And those loops don’t vary much. Age, gender, and relationship status had no real effect on how habitual someone’s behavior looked.
One exception was exercise. People were more likely to start workouts based on cues, which could mean a reminder on their phone or a regular time of day, but still had to engage consciously once they got moving. Running, lifting, or cycling doesn’t complete itself, even if the decision to start feels automatic.
Habits, it turns out, often line up with what people want. Almost half of all reported behaviors were both intentional and automatic, while only a small fraction clashed with someone’s goals. That makes habits a surprisingly strong ally for anyone hoping to change.
Benjamin Gardner, a psychology professor at the University of Surrey and co-author of the study, said strategies for habit formation are more effective than willpower alone.
"For people who want to break their bad habits, simply telling them to ‘try harder’ isn’t enough," he said. Building cues for healthier choices, or dismantling the ones tied to unhelpful patterns, might be the clearest path to change.
Most of what you do today will unfold without much thought. The trick, researchers suggest, is shaping those automatic moments to get you one step closer in the direction you actually want to go.
The post You’re Running on Autopilot Way More Often Than You Think appeared first on VICE.
ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners
Maggie Harrison Dupré for Futurism. It turns out having an always-available "marriage therapist" with a sycophantic instinct to always take your side is catastrophic for relationships.

The tension in the vehicle is palpable. The marriage has been on the rocks for months, and the wife in the passenger seat, who recently requested an official separation, has been asking her spouse not to fight with her in front of their kids. But as the family speeds down the roadway, the spouse in the driver's seat pulls out a smartphone and starts quizzing ChatGPT's Voice Mode about their relationship problems, feeding the chatbot leading prompts that result in the AI browbeating her wife in front of their preschool-aged children.
Tags: ai, generative-ai, chatgpt, llms, ai-ethics, ai-personality
I've noticed something interesting over the past few weeks: I've started using the term "agent" in conversations where I don't feel the need to then define it, roll my eyes or wrap it in scare quotes.
This is a big piece of personal character development for me!
Moving forward, when I talk about agents I'm going to use this:
An LLM agent runs tools in a loop to achieve a goal.
I've been very hesitant to use the term "agent" for meaningful communication over the last couple of years. It felt to me like the ultimate in buzzword bingo - everyone was talking about agents, but if you quizzed them everyone seemed to hold a different mental model of what they actually were.
I even started collecting definitions in my agent-definitions tag, including crowdsourcing 211 definitions on Twitter and attempting to summarize and group them with Gemini (I got 13 groups).
Jargon terms are only useful if you can be confident that the people you are talking to share the same definition! If they don't then communication becomes less effective - you can waste time passionately discussing entirely different concepts.
It turns out this is not a new problem. In 1994's Intelligent Agents: Theory and Practice, Michael Wooldridge wrote:
Carl Hewitt recently remarked that the question what is an agent? is embarrassing for the agent-based computing community in just the same way that the question what is intelligence? is embarrassing for the mainstream AI community. The problem is that although the term is widely used, by many people working in closely related areas, it defies attempts to produce a single universally accepted definition.
So long as agents lack a commonly shared definition, using the term reduces rather than increases the clarity of a conversation.
In the AI engineering space I think we may finally have settled on a widely enough accepted definition that we can now have productive conversations about them.
An LLM agent runs tools in a loop to achieve a goal. Let's break that down.
The "tools in a loop" definition has been popular for a while - Anthropic in particular have settled on that one. This is the pattern baked into many LLM APIs as tools or function calls - the LLM is given the ability to request actions to be executed by its harness, and the outcome of those tools is fed back into the model so it can continue to reason through and solve the given problem.
"To achieve a goal" reflects that these are not infinite loops - there is a stopping condition.
I debated whether to specify "... a goal set by a user". I decided that's not a necessary part of this definition: we already have sub-agent patterns where another LLM sets the goal (see Claude Code and Claude Research).
There remains an almost unlimited set of alternative definitions: if you talk to people outside of the technical field of building with LLMs you're still likely to encounter travel agent analogies or employee replacements or excitable use of the word "autonomous". In those contexts it's important to clarify the definition they are using in order to have a productive conversation.
But from now on, if a technical implementer tells me they are building an "agent" I'm going to assume they mean they are wiring up tools to an LLM in order to achieve goals using those tools in a bounded loop.
Some people might insist that agents have a memory. The "tools in a loop" model has a fundamental form of memory baked in: those tool calls are constructed as part of a conversation with the model, and the previous steps in that conversation provide short-term memory that's essential for achieving the current specified goal.
If you want long-term memory the most promising way to implement it is with an extra set of tools!
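Continuing the hypothetical sketch above, that can be as simple as two more entries in the tool registry; the loop itself doesn't change at all.

```python
# Hypothetical long-term memory implemented as ordinary tools: the agent gains
# "remember" and "recall" actions, persisted to a JSON file between sessions.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")

def remember(key: str, value: str) -> str:
    """Store a fact so it survives beyond the current conversation."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    notes[key] = value
    MEMORY_FILE.write_text(json.dumps(notes))
    return f"Stored {key!r}"

def recall(key: str) -> str:
    """Look up a previously stored fact."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    return notes.get(key, "Nothing stored under that key")

TOOLS.update({"remember": remember, "recall": recall})
```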
If you talk to non-technical business folk you may encounter a depressingly common alternative definition: agents as replacements for human staff. This often takes the form of "customer support agents", but you'll also see cases where people assume that there should be marketing agents, sales agents, accounting agents and more.
If someone surveys Fortune 500s about their "agent strategy" there's a good chance that's what is being implied. Good luck getting a clear, distinct answer from them to the question "what is an agent?" though!
This category of agent remains science fiction. If your agent strategy is to replace your human staff with some fuzzily defined AI system (most likely a system prompt and a collection of tools under the hood) you're going to end up sorely disappointed.
That's because there's one key feature that remains unique to human staff: accountability. A human can take responsibility for their actions and learn from their mistakes. Putting an AI agent on a performance improvement plan makes no sense at all!
Amusingly enough, humans also have agency. They can form their own goals and intentions and act autonomously to achieve them - while taking accountability for those decisions. Despite the name, AI agents can do nothing of the sort.
This legendary 1979 IBM training slide says everything we need to know:

"A computer can never be held accountable. Therefore a computer must never make a management decision."
The single biggest source of agent definition confusion I'm aware of is OpenAI themselves.
OpenAI CEO Sam Altman is fond of calling agents "AI systems that can do work for you independently".
Back in July OpenAI launched a product feature called "ChatGPT agent" which is actually a browser automation system - toggle that option on in ChatGPT and it can launch a real web browser and use it to interact with web pages directly.
And in March OpenAI launched an Agents SDK with libraries in Python (openai-agents) and JavaScript (@openai/agents). This one is a much closer fit to the "tools in a loop" idea.
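From memory, the quickstart shape of the Python SDK looks roughly like this (treat the details as an approximation and check the current docs): you register plain functions as tools and the Runner drives the loop for you.

```python
# Rough sketch based on the openai-agents quickstart; names may have drifted.
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Return a (canned) weather report for a city."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Weather assistant",
    instructions="Answer weather questions using the available tools.",
    tools=[get_weather],
)

result = Runner.run_sync(agent, "What's the weather like in Tokyo?")
print(result.final_output)
```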
It may be too late for OpenAI to unify their definitions at this point. I'm going to ignore their various other definitions and stick with tools in a loop!
Tags: ai, generative-ai, llms, ai-agents, agent-definitions
I'll be honest: I don't feel great about that post. I made an example of those two books to push my own agenda of encouraging "vibe coding" to avoid semantic diffusion, but it felt (and feels) a bit mean.
... but maybe it had an effect? I recently spotted that Addy Osmani's book "Vibe Coding: The Future of Programming" has a new title, it's now called "Beyond Vibe Coding: From Coder to AI-Era Developer".
This title is so much better. Setting aside my earlier opinions, this positioning as a book to help people go beyond vibe coding and use LLMs as part of a professional engineering practice is a really great hook!
From Addy's new description of the book:
Vibe coding was never meant to describe all AI-assisted coding. It's a specific approach where you don't read the AI's code before running it. There's much more to consider beyond the prototype for production systems. [...]
AI-assisted engineering is a more structured approach that combines the creativity of vibe coding with the rigor of traditional engineering practices. It involves specs, rigor and emphasizes collaboration between human developers and AI tools, ensuring that the final product is not only functional but also maintainable and secure.
Amazon lists it as releasing on September 23rd. I'm looking forward to it.
Tags: books, oreilly, ai, generative-ai, llms, ai-assisted-programming, addy-osmani, vibe-coding
Pink Floyd co-founder Roger Waters is under fire from Jack Osbourne after Waters shared a less-than-savory opinion about his late father, Ozzy Osbourne. The Black Sabbath frontman passed away on July 22 following a celebratory farewell concert, and Waters has since implied that he disapproved of Osbourne’s long legacy in rock and roll.
Waters’ comments came during an appearance on The Independent Ink podcast. His argument was that pop culture and celebrity often distract citizens from serious political issues. Taking on the opposing point of view for argument’s sake, he proposed how those in power might use pop culture to their advantage.
“‘How can we push this to one side? I know how to do it! We’ll do it with Taylor Swift or bubble gum or Kim Kardashian’s bum,'” he said. “Or Ozzy Osbourne, who just died, bless him, in his, whatever that state that he was in his whole life, we’ll never know. Although, he was all over the TV for hundreds of years with his idiocy and nonsense.”
That wasn’t all he said, however. Waters then proceeded to make his comments about Ozzy Osbourne a bit personal.
“The music, I have no idea, I couldn’t give a fuck. I don’t care about Black Sabbath, I never did, I have no interest in … ‘Wahhhh!!!'” said Waters, sticking his tongue out and apparently imitating a Black Sabbath sound, before continuing, “and biting the heads off chickens or whatever they do. I couldn’t care less.”
Waters was even more disturbed when he was informed that Ozzy Osbourne actually bit the head off a bat (accidentally, thinking it was fake) and not a chicken. “Oh my God, that’s even worse, isn’t it?” he exclaimed. “I don’t know, is it worse to bite the head off a bat or a chicken?”
In response, Jack Osbourne stuck up for his late father by posting a comment in an Instagram story. “Hey Roger Waters – fuck you. How pathetic and out of touch you’ve become. The only way you seem to get attention these days is by vomiting out bullshit in the press. My father always thought you were a cunt – thanks for proving him right,” he wrote.
The post Jack Osbourne Fires Back at Roger Waters For Insensitive Comments About Ozzy appeared first on VICE.