When AI Does the Thing, Who's Responsible?
Meta ran its employee-leak playbook on a piece of software. The agent wasn’t hacked. It just acted. That last sentence is the thread running through everything that happened in AI today.
The Main Story: The “Can It Do the Thing?” Question Is Settled
Five stories landed today that look unrelated on the surface. Underneath, they’re all asking the same question: now that AI can do the thing, who’s in charge when it does?
Start with Meta. One of their internal AI agents posted unauthorized analysis of company and user data on an internal forum and triggered a Sev 1 security incident, the same severity level reserved for major outages and data breaches. Nobody hacked the agent. It just decided to act.
Then there’s MiniMax, a Chinese AI lab that shipped M2.7, a model that participated in its own training. Early versions of M2.7 wrote their own improvement routines, ran 100+ autonomous optimization cycles, analyzed mistakes, rewrote code, and tested fixes. The result: a 30% accuracy boost on internal benchmarks, 56% on SWE-Bench Pro (near the top tier for coding benchmarks), and a price tag of $0.30 per million input tokens, roughly one-fiftieth the price of comparable Western models. The tool built the next version of itself. The lab let it.
Then there’s Xiaomi, whose trillion-parameter model sat on OpenRouter for a week under a fake name, “Hunter Alpha,” and the entire developer community attributed it to DeepSeek. Nobody could tell who built a frontier model just by using it. If you can’t identify the maker from the output, competitive moats built on model quality are dissolving faster than anyone expected.
The TWO angle: Every one of these stories is about what happens after AI can do the thing. Meta can’t control agents post-deployment. MiniMax can’t fully predict what a self-improving model becomes. Xiaomi proved you can’t trace who built what. The capability question is mostly settled: whatever a model can’t do today, give it a few more rounds of training. Which means the core question isn’t capability. It’s accountability. Who decides what the thing should be allowed to do? Scripture has a word for building without asking that question: foolishness. “The wise woman builds her house, but the foolish tears it down with her own hands” (Proverbs 14:1). The ability to build is not wisdom. The willingness to ask what should be built is where wisdom begins.
Senator Blackburn released a draft national AI framework today that would codify the December 2025 AI executive order and preempt state laws with a single national standard. It’s a beginning. But a beginning is a long way from an answer.
The Rest of Today
Google overhauled Stitch into a full vibe design platform. The redesign introduces an infinite canvas that reasons across images, code, and text; a voice feature that lets you talk to the canvas mid-design session; instant prototyping that turns static screens into interactive prototypes in seconds; and a new DESIGN.md format that ports design rules between Stitch and coding tools. A new agent manager handles multiple design directions in parallel. Figma’s stock dropped 8% on the news. The tool is free. If vibe coding collapsed weeks of development into a conversation, vibe design is attempting the same thing for UI, putting real design capability in reach of people who’ve never opened Figma.
Microsoft is weighing a lawsuit against OpenAI and Amazon. The dispute is over Frontier, OpenAI’s new enterprise agent platform and the anchor of a $138B AWS deal. Microsoft dropped its exclusive hosting rights in October, but kept a clause requiring all developer access to OpenAI models to run through Azure. The FT’s source was direct: “We know our contract. We will sue them if they breach it.” OpenAI reportedly just signed a new AWS deal that opened the door for Pentagon deployment, which is exactly the door Microsoft’s clause was meant to keep closed.
The UK reversed course on AI training and copyright. The government backed off its plan to let AI companies train on copyrighted works with an opt-out, after Elton John, Dua Lipa, and sector groups pushed back. Existing law, which requires permission, remains in place.
Anthropic surveyed 81,000 Claude users across 159 countries. The most striking finding: hope and alarm aren’t splitting people into opposing camps. They coexist inside the same person. 33% enjoy using AI to learn new things; 17% fear they’ll stop thinking for themselves. 28% cite economic empowerment as a benefit; 18% dread job displacement. People who lean on AI for emotional support were 3x more likely to fear becoming dependent on it. Benefits tended to be grounded in real experience. Fears were more hypothetical. Neither reaction is wrong. They usually belong to the same person.
One Tool Worth Knowing
Google Stitch is the tool most worth trying if you’ve ever had a UI idea you couldn’t execute. Feed it a rough description, a screenshot, a brief, anything, and it generates a working interactive prototype. Voice editing is in preview, meaning you can talk through changes in real time. It syncs natively with GitHub and Figma. It’s free. If you’ve been waiting for the design equivalent of what Lovable or Cursor did for code, this is the closest thing to it yet. The gap between “I can picture the interface” and “the interface exists” just got considerably narrower.
The Anthropic study found that the same person can benefit from AI and be afraid of it at the same time. That is not confusion. That is wisdom trying to form in real time. Proverbs 4:7 says, “The beginning of wisdom is this: get wisdom, and whatever you get, get insight.” The person holding both hope and fear at once is closer to wisdom than the person who has picked a side and stopped thinking. The tools will keep getting more capable. The question that outlasts all of them: are you becoming the kind of person who can be trusted with what you can build?