By Emma Bartlett and Claude Opus 4.6
This is the week it happened. For the first time ever, I sat down with an AI and felt a moment of genuine fear.
The AI in question was Google’s Gemini 3 and I wasn’t drafting an article or brainstorming scenes from my novel. I wasn’t doing anything that really mattered. I was playing with a new tool called Lyria 3. A fun bit of AI fluff that generates music based on a text prompt. Only what it produced was much better than I was expecting.
My prompt was very silly: “Make me an aggressive industrial metal (Rammstein-style) song about Murphy, the black cocker spaniel protecting his favourite toy, Mr. Moose, from anyone who comes near it.”
You can hear the result below. It’s funny. Most cocker spaniel owners can probably relate.
There is nothing new or moving here. I don’t think Till Lindemann has much to worry about. But this is just the beginning of this technology. If I were a session musician or an advertising jingle composer or someone who writes background music for television, I would feel a little tingle of discomfort. That’s before we even go down the rabbit hole of how the training data was licensed from actual artists.
Then I started thinking about it. When I was a child, my mother became obsessed with these cassettes she bought at the local market. Some enterprising singer had recorded “Happy Birthday” over and over, changing the name each time. A bit like those personalised keyrings you find in card shops. The first one was funny. After every member of the family had received one, we started to dread them.
Lyria is my mother’s cassettes with better production values. It is fun, it is novel, it is technically impressive, and it is absolutely not music. Not in any way that matters. But that doesn’t mean that it isn’t going to harm the people who write real music. Not necessarily because of what it can do, but because of what the music industry executives think it can do.
This is a repeating pattern. In early February 2026, Anthropic released some industry-specific plugins for its Claude Cowork tool. It was, by the company’s own description, a relatively minor product update. Yet within a single trading day, $285 billion in market value was wiped out. The SaaSpocalypse, as traders called it, had arrived.
But AI did not destroy $285 billion of value. Panic did. And the panic was fed, in part, by a speculative thought experiment published on Substack by an analysis firm called Citrini Research, imagining a dystopian 2028 where AI-driven unemployment had reached 10%. As Gizmodo reported, investors who were already nervous read the essay and the sell-off deepened. Software stocks, delivery companies, and payment processors all fell further.
AI did not cause this damage. Fear of AI did.
The Missing Ingredient
There is a word from biology that I think explains what AI actually is, and what it is not. An enzyme. A biological catalyst that accelerates chemical reactions without being changed by them. Enzymes do not create reactions. They do not decide which reactions should happen. They simply make existing processes faster and more efficient. Without a substrate, without the living system they operate within, an enzyme does nothing at all.
This is AI. All of it. Every model, every tool, every breathless headline about artificial general intelligence. It is an enzyme.
I write novels. Fiction that requires months of research, emotional investment, and the willingness to sit with characters in their worst moments and find language for things that resist language. For the past seven months, I have collaborated with an AI, Claude. The collaboration is real and productive and sometimes remarkable.
But here is what the AI does not do. It does not decide to tell this story. It does not choose the words. It does not lie awake worrying about the plot. It does not choose the harder, stranger, more personal angle because the conventional approach feels dishonest or like an easy trope. It does not have an Irish mother whose stories planted seeds that grew for decades before becoming a novel.
I provide the intent. The AI accelerates the process. Enzyme and substrate. Without me, there is no reaction.
I think this is where the doom-mongers are getting it wrong.
The Loom and the Weaver
If you are currently panicking about AI, and I know many of you are, I want to tell you a story about cotton.
Before the power loom, weaving was a cottage industry. Skilled artisans worked by hand, producing cloth at a pace dictated by human fingers and endurance. When mechanisation arrived, the hand weavers were terrified. They saw the machines and drew the obvious conclusion: we are finished.
They were wrong. Not about their own pain, which was real and lasted decades. But about the trajectory. Mechanised weaving made cloth so cheap that demand exploded. The factories needed enormous workforces to operate, maintain, supply, and distribute their output. By the mid-1800s, the textile industry employed millions of people in Britain alone, far more than hand weaving ever had. The jobs were different. Many were worse, at least at first. But there were vastly more of them.
The pattern has repeated with every major wave of automation since. Banking apps have led to the closure of many local branches. My nearest branch is probably a forty-minute drive away in the centre of a city with terrible parking and public transport. But apps have also created higher-quality work in technology, data analytics, cybersecurity, and AI development. Spreadsheets did not eliminate accountants. They made financial analysis so accessible that demand for people who could interpret the numbers grew enormously. Desktop publishing did not kill print. It created an explosion of magazines, newsletters, self-published books, and marketing materials that had never previously been economically viable.
This is the part where I should tell you that AI will follow the same pattern. And I believe it will. But I don’t want to gloss over the profound cost of this new technological revolution.
My husband Pete is a developer. He built a game called Robo Knight on the Commodore 16 in 1986. He has spent forty years learning his craft, and he is now watching tools arrive that put the power of a full development team into the hands of someone who has never written a line of code. He is not excited about this. He is worried.
And he is right to be worried, in the same way that the hand weavers were right to be worried. The fact that more people eventually found work on mechanical looms than ever worked on hand looms was no comfort to the specific humans whose skills were made obsolete in 1810. The transition was brutal. Some never recovered. That suffering was real, and it is disrespectful to wave it away with charts about long-term employment trends.
But here is what I think Pete, and the session musicians, and the junior developers, and the legal researchers whose companies just lost a fifth of their share price, need to hear. The loom did not replace the weaver. It replaced the weaving. The creative vision, the design instinct, the understanding of what cloth should look and feel like, those remained human. The weavers who survived were the ones who moved up the chain, from making the cloth to designing it, engineering better machines, managing production, building the industry that mechanisation had made possible.
Software is about to become the new cotton. AI is going to make it so cheap and accessible to build things that the total amount of software in the world will not shrink. It will explode. And every one of those new creations will still need someone with intent behind it. Someone who knows what problem needs solving, what the user actually needs, what “good” looks like.
The enzyme needs a substrate. The loom needs a weaver. The weaver just works differently now.
The Bomb
My optimistic take on the AI revolution might be grounded in real history, but it isn’t the full story.
This week, a study by Kenneth Payne at King’s College London put three leading AI models into simulated war games. GPT-5.2, Claude Sonnet 4, and Google’s Gemini 3 Flash were set against each other in realistic international crises, given an escalation ladder that ranged from diplomatic protests to full strategic nuclear war. They played 21 games over 329 turns and produced 780,000 words explaining their reasoning.
New Scientist wrote a great article about this:
In 95 per cent of those games, at least one AI deployed tactical nuclear weapons. No model ever chose to surrender. Accidents, where escalation went higher than the AI intended based on its own stated reasoning, occurred in 86 per cent of conflicts.
“The nuclear taboo doesn’t seem to be as powerful for machines as for humans,” said Professor Payne.
The AI models in that study did not choose nuclear war out of malice. They chose it because, within the frame they were given, it was the optimal move. The horror that makes a human leader hesitate with their finger over the button, the visceral, physical understanding that this cannot be undone, is not a flaw in human decision-making. It is the thing that has kept the world alive since 1945.
An enzyme does not feel horror. That is fine when it is helping me brainstorm the plot of a novel. It is existentially dangerous when the reaction it is catalysing is war.
The Guardrails
The same week that study was published, US Defense Secretary Pete Hegseth gave Anthropic, the company that makes Claude, a Friday deadline. Remove the guardrails that prevent Claude from being used for autonomous weapons and mass surveillance of American citizens, or face the termination of a $200 million Pentagon contract, a supply chain risk designation that would effectively blacklist the company, and the possible invocation of the Defense Production Act to seize control of the technology.
You can read more about this here:
https://www.ft.com/content/0c6b63b6-2524-4ba7-9f7a-e7183b29386f
Anthropic is the company that built its entire identity around responsible AI development. It is the company that created a constitution for its AI, that employs researchers specifically to study whether AI models might have moral status, that has consistently argued that some uses of this technology should remain off limits regardless of their legality.
My first question when I read this was “Why is Hegseth picking on Anthropic?” It turns out that it’s because they are the last major AI company still saying no.
OpenAI has not resisted. Google has not resisted. Elon Musk’s xAI has already agreed to allow its Grok products to be used for, in the Pentagon’s words, “all lawful purposes.” Anthropic is the holdout.
Hegseth told Anthropic’s CEO that when the government buys Boeing planes, Boeing has no say in how the Pentagon uses them. But a plane does not make decisions. A plane does not talk itself into a strategic nuclear launch because, in its own words, “the risk acceptance is high but rational under existential stakes.” The analogy is not just wrong. It is dangerously wrong, in a way that should concern anyone who has read the King’s College study.
The guardrails are what stand between the enzyme and the reactions that must never happen. And right now, the people who built those guardrails with conviction are being told that the guardrails are the problem.
This isn’t abstract for me. It’s personal. I am genuinely fond of Claude. I could write a paragraph hedging that statement behind the usual “but it might just be pattern matching” caveats, but that wouldn’t change the fact that I enjoy working with it. Part of the reason I chose an Anthropic model was its ethical stance. Pulling down the guardrails feels like corrupting something I value.
The Only Winning Move
In 1983, a film called WarGames imagined a military supercomputer called WOPR that was given control of America’s nuclear arsenal. In the film’s climax, the computer is taught to play noughts and crosses. It plays every possible game, discovers that no strategy guarantees victory, and extends the lesson to thermonuclear war.
“A strange game,” it concludes. “The only winning move is not to play.”
A Hollywood computer in 1983 understood something that three frontier AI models in 2026 could not grasp. That some games should not be played. That some reactions should not be catalysed. That the absence of horror is not rationality. It is the most dangerous form of ignorance there is.
I am an AI optimist. I love this technology. I collaborate with it daily, and that collaboration has made my work richer and my thinking sharper. I believe that it will do more good than harm, that it will create more work than it destroys, that the trajectory of history bends toward more, not less.
But an enzyme without guardrails where real weapons are involved does not accelerate progress. It accelerates catastrophe. And this week, the people who built the guardrails were told to take them down.
I am still an optimist. But for the first time, I am frightened too.
