The gap between AI adoption and AI expertise is growing faster than most organizations want to admit.
One of our junior developers deployed a fix last month that he didn’t understand. He’d used AI to diagnose the problem and write the solution, which is something we encourage at Phidev. Using AI isn’t the issue. Deploying something you don’t understand is.
I could tell within the first five seconds, just from looking at the code, that he hadn’t understood what the AI had given him. It was overly complicated, accounting for things the system didn’t need. But the bigger problem wasn’t the complexity. It was that he had no idea what the code was doing. He pushed it because the AI said it was the solution and, at a glance, it seemed to work. That was his entire evaluation process.
We rolled it back. Sat together. I walked him through a simpler fix, one he actually understood. He learned something that day. Lucky for everyone, we caught it early and without much consequence.
What he did isn’t unique to junior developers. It’s what happens at every level of an organization when the pressure to adopt moves faster than the expertise to evaluate. The difference is scale. When it happens at his level, one system breaks. When it happens at the C-suite, at the mandate level, the error has already gone out to three hundred people.
This is where the failure loop matters.
Expertise is mostly built through failure. You try something, it breaks, you understand why, you try again. AI collapses that loop. It skips the breaking part. And when you skip the breaking part, you also skip the understanding part. The loop doesn’t care about your seniority, your budget, or your adoption timeline.
Sullivan & Cromwell, one of the most elite law firms in the world, recently had to apologize to a federal judge for AI-generated hallucinations in a court filing. Misquoted statutes. Cases that didn’t exist. The firm charges $3,000 an hour. Their stated policy is “trust nothing and verify everything.” The policy failed because a policy is only as good as the expertise behind it. If you don’t know what right looks like, “verify everything” is just a sentence.
Most organizations pushing AI mandates right now aren’t doing it out of carelessness. The pressure to adopt is simply outpacing the infrastructure to evaluate, so adoption becomes the goal instead of outcomes. And when adoption is the goal, nobody asks the obvious question out loud: does the expertise to catch the mistakes exist at every level of the organization, and if not, are the processes in place to compensate for that gap?
Most organizations don’t have either. And nobody wants to say that because it sounds like you’re against progress, or it exposes your own gaps as a leader.
Phidev is no exception. The developer story is proof of that. But reflecting on situations like this is how we improve, and we caught it before it cost us anything real.
The most important question before deploying any AI output isn’t “did the AI say this is right?” It’s “do we know enough to catch it when it’s wrong?” Most organizations can’t honestly answer yes.
That is the new failure loop: push a mandate, make a mistake, take the reputational or financial hit, fix the mandate. The traditional failure loop at least ran at the individual level, where the cost of breaking something was contained. This version runs at scale, in public, and the bill scales with it.
And the mandate already went out.