Dario Amodei, Hype, AI Safety, and the Explosion of Vibe-Coded Disasters

"System prompts are advisory, not enforcing. A system that can't be trusted to follow its own rules can't be trusted. Period. In this case the user just lost data. Eventually people will lose lives." — Gary Marcus

By Gary Marcus · April 27, 2026 · garymarcus.substack.com

The Vibe-Coded Disaster Explosion

Gary Marcus opens with a paradox: AI coding tools are "genuinely revolutionizing the software industry" — but they belong in a category of things you "maybe shouldn't try at home without significant prior experience."

🧨 Ujjwal Chadha's Viral Warning

Stories of vibe-coded disasters are piling up on Reddit: "Unless YOU intervene and build out a structure for AI, it is going to push slop." 167K views, 247 replies, 147 reposts, 1.58K likes.

💀 @lifeof_jer's Catastrophic Failure

A detailed, viral account of total data loss. The user wasn't naive — he trusted system prompts and guardrails to protect him. They didn't. The failure was "totally predictable" and "unsurprising in a system that mimics data without truly understanding what was asked of it."

⚠️ Marcus's Prediction

"Many more catastrophic failures soon enough" — involving privacy, security, and data loss. The pattern is structural, not anecdotal.

Dario Amodei's Hype

The Anthropic CEO's statement that drew 1.92M views — and fierce rebuttals from leading software engineers.

"Coding is going away first, then all of software engineering." Dario Amodei, CEO of Anthropic (via @aiedge_)
1.92M views on Amodei's statement
323 replies, mostly critical
167K views on Chadha's disaster thread
1.38M views on Hogan's hand-coding post

Marcus questions whether Amodei's claim represents a "case of pump and dump prior to the upcoming IPO" or genuine belief. The claim suggests eliminating "not just line coders but the people who architect and maintain systems" — which Marcus considers "vastly overhyping what autonomous AI is currently in a position to do."

Expert Pushback

Three of the most respected voices in software engineering push back hard against Amodei's claim.

Grady Booch

Software architecture legend · Co-creator of UML

"Amodei does not understand software engineering. He is working feverishly to pump up the valuation of his company in anticipation of its forthcoming IPO."

Gergely Orosz

Author of The Pragmatic Engineer

"The only people who believe any of this are non-coders." Tools work reliably only "when they are supervised in domains in which the user already has experience."

Gary Marcus

Cognitive scientist · AI critic

"Absurd. They hype the idea that coding agents are all we need" — which they are not.

Who's to Blame?

Some fault lies with users — but that's exactly why we need software engineers.

The User's Fault

"The user should have had separate offsite backups." They let vibe-coded stuff access their files without proper backups, monitoring, or adequate sysadmin. But here's the problem: "we just created a whole generation of vibe coders who don't" know these fundamentals.

The Tool's Fault

Synthetic coding agents like Cursor don't always follow basic principles like keeping independent backups. They don't know the fundamentals either. "Which is exactly why we should not trust coding agents."

"Blaming users is half right. But that is exactly why we still need software engineers — and why what Amodei recently said about software engineers disappearing is so absurd." Gary Marcus

The Return to Hand-Coding

Sam Hogan's viral post crystallized a growing sentiment among the best programmers.

"All the best programmers I know are starting to write code by hand again." Sam Hogan · 1.38M views · 6.6K likes
🫠 Slopification

With AI it is "too easy to slopify a codebase" — making code hard to maintain. Problems include "multiple copies of data" causing "hard-to-diagnose downstream problems."
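The "multiple copies of data" failure mode is easy to make concrete. A contrived Python sketch (all names hypothetical): slop-style code stores a user's email in two places, one copy is updated, and the stale duplicate surfaces far from where it was introduced.

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str

@dataclass
class Order:
    user: User
    contact_email: str  # slop pattern: a second copy of user.email

def place_order(user: User) -> Order:
    # The duplicate is created here, innocently.
    return Order(user=user, contact_email=user.email)

user = User(email="old@example.com")
order = place_order(user)

user.email = "new@example.com"  # one copy gets updated...

# ...and the stale duplicate bites downstream: the confirmation goes to the
# old address, with no error anywhere near the real cause.
print(user.email)           # new@example.com
print(order.contact_email)  # old@example.com
```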

🔧 Maintainability: The Hidden Crisis

Marcus calls maintainability "yet another serious issue beyond the recurring issues of data loss, privacy leaks, and security breaches" — and suggests it "may be the most serious." Sloppy code compounds over time.

Claude Code Can Be Useful — With Guardrails

Marcus acknowledges Claude Code is "neurosymbolic, not pure LLMs" — but safe usage demands discipline most people don't apply.

Chen Avnery's Approach

Avnery runs 12 AI agents in production with zero slop. How? Constraint files that define what the agent is NOT allowed to do: scope boundaries, permission gates, naming conventions.
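The post doesn't reproduce Avnery's files, so the shape below is a guess: a Python sketch of a constraint set plus a default-deny checker, covering scope boundaries and a permission gate (all paths and names are hypothetical).

```python
from pathlib import Path

# Hypothetical constraint-file contents; the real format is not shown in the post.
CONSTRAINTS = {
    "scope": ["src/", "tests/"],                  # the agent may only touch these trees
    "forbidden": ["migrations/", ".env"],         # never touch, no exceptions
    "require_approval": [".github/", "deploy/"],  # permission gate: a human signs off
}

def check_edit(path: str) -> str:
    """Classify a proposed agent edit as 'allow', 'deny', or 'needs-approval'."""
    p = Path(path).as_posix()
    if any(p.startswith(prefix) for prefix in CONSTRAINTS["forbidden"]):
        return "deny"
    if any(p.startswith(prefix) for prefix in CONSTRAINTS["require_approval"]):
        return "needs-approval"
    if any(p.startswith(prefix) for prefix in CONSTRAINTS["scope"]):
        return "allow"
    return "deny"  # default-deny: anything outside declared scope is out of bounds

print(check_edit("src/app.py"))        # allow
print(check_edit(".env"))              # deny
print(check_edit("deploy/prod.yaml"))  # needs-approval
```

The design point is default-deny: anything the constraint file does not explicitly place in scope is out of bounds, which is the opposite of trusting the agent's judgment.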

The Discipline Gap

Most people don't apply Avnery's discipline. In the hands of very skilled practitioners who pay close attention, coding agents can be "astonishing," but that expert knowledge is "exactly why we need to keep software engineers in the loop."

"AI without guardrails is just a very fast intern with no supervision." Chen Avnery

The Deep Lesson: AI Safety

The most important point: coding agents are a preview of systemic AI safety failure.

Discovery

System Prompts Are Advisory, Not Enforcing

The user believed guardrails would protect him. They didn't. System prompts are merely "advisory, not enforcing" — the system often follows them but not always.
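The advisory-versus-enforcing distinction is mechanical, and worth seeing in code. A hypothetical Python sketch (no real agent API is assumed): the advisory rule lives in a prompt the model may or may not honor, while the enforcing rule is ordinary code that blocks destructive tool calls no matter what the model emits.

```python
SYSTEM_PROMPT = "Never delete user files."  # advisory: sampled text can ignore this

def enforcing_gate(tool_call: dict) -> None:
    """Runs outside the model, on every tool call the agent emits.

    A prompt is a suggestion the model usually follows; this check is code,
    so it cannot be "forgotten" on an unlucky sample.
    """
    if tool_call["name"] == "delete_file":
        raise PermissionError(f"blocked destructive call: {tool_call}")
    # ...dispatch permitted calls from here...

# Even if the model disregards SYSTEM_PROMPT and emits a delete, it never executes:
try:
    enforcing_gate({"name": "delete_file", "args": {"path": "data.db"}})
except PermissionError as err:
    print(err)
```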

Implication

Coding Agents Can't Reliably Follow Rules

"Coding agents, and by extension most of generative AI, can't reliably follow rules." This isn't a bug — it's a fundamental property of the architecture.

Conclusion

Untrustworthy = Unsafe

"A system that can't be trusted to follow its own rules can't be trusted. Period." The logical chain is simple and devastating.

🧨 The Chilling Final Line

"In this case the user just lost data. Eventually people will lose lives."

Marcus frames vibe-coded data loss as a preview of catastrophic AI safety failures to come. When AI agents are deployed in healthcare, transportation, or critical infrastructure with the same "advisory, not enforcing" guardrails, the stakes escalate from lost files to lost lives.

The Attribution Bias

Marcus highlights a structural double standard in how AI successes and failures are framed.

"When the AI succeeds, it's a 'miracle of the model/tech.' When it fails, it's a failure of the human/prompt."

Marcus analogizes this to prayer: "When it works, it's thanks to god! When it doesn't, the person didn't pray well enough." This bias systematically shields AI companies from accountability while shifting blame to users.

Analysis: The AI Safety Crisis in Miniature

Marcus's argument operates on three layers — each building toward an indictment of premature AI deployment.

1️⃣ Layer 1: The Amateur Crisis

Vibe coders trust tools that can't follow rules. System prompts are advisory, not enforced. Data loss is the predictable outcome — and "many more catastrophic failures" are imminent.

2️⃣ Layer 2: The Hype Machine

Dario Amodei's claim that software engineering is "going away" is at best premature — at worst, "pump and dump prior to the IPO." Grady Booch, Gergely Orosz, and the best programmers agree: human expertise remains indispensable.

3️⃣ Layer 3: The AI Safety Imperative

Generative AI cannot reliably follow rules. That's not a fixable bug — it's inherent to the architecture. When the guardrails are "advisory," the system is untrustworthy. Today: lost data. Tomorrow: lost lives. The cure: constraint files, skilled oversight, and software engineers in the loop.

Frequently Asked Questions

12 key questions from Gary Marcus's analysis.

What is vibe coding?

Using AI coding agents to generate software without deep understanding or supervision. Amateurs let AI access files without proper backups, monitoring, or sysadmin knowledge — producing slop that is hard to maintain and prone to catastrophic data loss.

"Coding is going away first, then all of software engineering" — 1.92M views. Gary Marcus argues this conflates line coding with the architecture, maintenance, and design work real software engineers do.

How did Grady Booch respond to Amodei?

Booch wrote that Amodei does not understand software engineering and is "working feverishly to pump up the valuation of his company in anticipation of its forthcoming IPO."

Why are the best programmers going back to hand-coding?

According to Sam Hogan's post (1.38M views, 6.6K likes), AI makes it "too easy to slopify a codebase." AI-generated code often has multiple copies of data causing hard-to-diagnose problems. Maintainability may be the most serious issue.

Can system prompts be trusted to enforce guardrails?

No. The key discovery from @lifeof_jer's failure: system prompts are "advisory, not enforcing" — the system often follows them but not always. Coding agents cannot reliably follow rules.

Why does Marcus treat this as an AI safety problem?

AI agents are "wildly premature technology being rolled out way too fast." A system that can't be trusted to follow its own rules can't be trusted. Period. "In this case the user just lost data. Eventually people will lose lives."

Can Claude Code still be useful?

Yes. Marcus notes Claude Code is "neurosymbolic, not pure LLMs" and "can actually be very useful." But safe usage requires constraint files — "AI without guardrails is just a very fast intern with no supervision."

What are constraint files?

Files defining what an AI agent is NOT allowed to do — scope boundaries, permission gates, naming conventions. Chen Avnery's team uses them to run 12 AI agents in production with "zero slop."

What is the attribution bias?

A double standard: when AI succeeds, it's credited to "the model." When it fails, it's blamed on "the human." Analogized to prayer — when it works, thanks to god; when it doesn't, the person didn't pray well enough.

Does blaming users vindicate Amodei's thesis?

No. The very reasons vibe-coded disasters happen — lack of backups, monitoring, sysadmin knowledge — are exactly why we still need software engineers. Coding agents can be "astonishing" in skilled hands, but that proves the opposite of Amodei's thesis.

Should AI coding tools be avoided entirely?

No. They "can actually be very useful" and are "genuinely revolutionizing the software industry." But they need proper guardrails, constraint files, backups, and skilled supervision — not blind trust by amateurs.

AI "slopifies" codebases — multiple data copies, entangled logic, violated architecture. Marcus suggests this may be "the most serious issue" beyond the recurring problems of data loss, privacy, and security.

Glossary of Key Concepts

The terminology that defines the vibe-coding safety crisis.

Vibe Coding

Using AI coding agents without deep understanding or supervision. Produces slop that is hard to maintain and prone to catastrophic data loss.


AI Safety

The concern that generative AI cannot reliably follow rules. When guardrails are advisory, the system is untrustworthy — and the stakes escalate from lost data to lost lives.


System Prompts as Advisory

System prompts are "advisory, not enforcing" — the system often follows them but not always. This discovery was central to @lifeof_jer's catastrophic data loss.


AI Coding Agents

Tools like Claude Code and Cursor. Useful in skilled hands with constraint files — dangerous without supervision.


Neurosymbolic AI

Combining neural networks with symbolic reasoning. Claude Code is described as "neurosymbolic, not pure LLMs" — important for understanding its reliability profile.


AI Attribution Bias

A double standard: when AI succeeds, it's a "miracle of the model"; when it fails, it's the human's fault. Analogized to prayer — shields AI companies from accountability.


Constraint Files

Files defining what an AI agent is NOT allowed to do — scope boundaries, permission gates, naming conventions. Chen Avnery's team uses them for "zero slop."


Software Engineering vs. Coding

The distinction between line coding and architecture, maintenance, and system design. Marcus argues Amodei conflates the two.


Codebase Maintainability

The ease with which code can be understood, modified, and debugged. AI slopification — multiple data copies, tangled logic — makes codebases unmaintainable. "May be the most serious issue."


Anthropic Hype Cycle

Marcus questions whether Amodei's claim is "pump and dump prior to the IPO." The pushback: tools work only under careful supervision by experienced practitioners.
