
Favorability toward AI code dropped from 72% to 60%. Juniors broke it
Developer favorability toward AI fell 12 points in a year. The fix is not more guardrails. The fix is not letting juniors merge what they did not read
Stack Overflow's 2025 survey saw favorability toward AI fall from 72% to 60% in a year. Trust in the accuracy of AI output collapsed from 40% to 29% over the same window. No one is talking about who caused this
It was not the model
The models got better. GPT-5, Claude Opus 4.6, Gemini 3, all measurably stronger on real benchmarks. Hallucination rates dropped. Tool use improved. So why did confidence collapse?
Because the people who merged the diffs got worse. Cursor put the productivity of a senior in the hands of a junior, and the junior shipped without reading. Then production caught fire. Then the senior was blamed. Then confidence plummeted
The closed loop of garbage
A junior asks the model to write a function. The model writes the function. The junior asks the model to write a test for the function. The test passes. PR merged. Three weeks later, a customer's row is deleted because the prompt read "clean up old records" and no one defined old records
No one read the diff. No one traced the conversation. No one asked what old meant. The model had no opinion and the human had no spine. Production paid for both
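The loop above can be sketched in a few lines of Python. Everything here is hypothetical: the function names, the record shape, and above all the 30-day cutoff, which stands in for exactly the kind of unexamined guess a model makes when no one defines "old".

```python
# Hypothetical reconstruction of the failure mode described above.
# The 30-day cutoff is the model's invention; nothing in the prompt defined "old".
from datetime import datetime, timedelta, timezone

CUTOFF_DAYS = 30  # arbitrary: a guess, not a business rule

def find_old_records(records, now=None):
    """Return records the model considers 'old': last updated more than
    CUTOFF_DAYS ago. A paying customer who simply hasn't been touched
    in a month qualifies."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=CUTOFF_DAYS)
    return [r for r in records if r["updated_at"] < cutoff]

def cleanup(records, now=None):
    """'Clean up old records' as the model read it: delete them in place."""
    for r in find_old_records(list(records), now=now):
        records.remove(r)

# The model's own test. It asserts the same arbitrary cutoff the model
# invented, so it passes -- and validates nothing about what the business
# actually means by "old".
def test_cleanup_removes_old_records():
    now = datetime(2025, 6, 1, tzinfo=timezone.utc)
    records = [
        {"id": 1, "updated_at": now - timedelta(days=45)},  # an active customer
        {"id": 2, "updated_at": now - timedelta(days=2)},
    ]
    cleanup(records, now=now)
    assert [r["id"] for r in records] == [2]
```

The test goes green because it checks the model's assumption against itself. Nothing in the loop ever asks whether 30 days is what anyone meant.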
Guardrails do not fix this
Linters can't read intent. Type systems can't read business logic. CI can't tell you that your migration is about to pollute the staging database. The only thing that catches this is a human reading the diff and pushing back when something is wrong
The fix is upstream
Stop letting people merge what they can't explain line by line. If they can't defend it in code review, it doesn't get in. Senior vibe coders do this instinctively because they've been burned before. Juniors don't. That's the gap