Why Good AI Systems Still Don't Get Used

Most AI systems don't fail because the model is wrong.

They fail because no one actually uses them.

This is a people problem, not a technical one. A model can be accurate. The data can be clean. The system can be well engineered. On paper, everything works.

And yet the system never becomes part of how decisions are actually made.

I have seen this pattern repeatedly.

A team builds something promising. The early results look good. There is a sense that the hard part is done. But over time, usage declines. The system gets bypassed, or used only in limited cases, or quietly abandoned.

Not because it failed.

Because it never really fit.

The gap is not in the prediction. It is in how the system meets the decision.

In most organizations, decisions already have a structure. They happen at specific moments, with specific owners, under specific constraints. There are existing workflows, informal habits, and implicit rules about how things get done.

If a system does not fit into that structure, it creates friction.

The output may arrive too late to be useful.
Or too early to be trusted.
Or in a format that does not match how the decision is actually made.

Sometimes the person expected to use the system is not the person accountable for the outcome. Sometimes using the system introduces risk without clear upside. Sometimes the system asks people to change behavior without changing the environment around them.

In those cases, people adapt in predictable ways.

They ignore it.
They work around it.
They use it once and go back to what they trust.

From a distance, it looks like a technical problem. In practice, it is usually a problem of ownership, timing, and incentives.

Accuracy matters. But in many real systems, it is not the binding constraint.

Adoption is.

And adoption depends on things that are harder to measure.

Who owns the decision.
When the output shows up.
What happens when the system is wrong.
Whether using it makes someone's job easier or riskier.

If those do not align, the system does not get used.

It does not matter how good the model is.

This is one of the reasons production systems are harder than they appear. The challenge is not just to build something that works. It is to build something that fits into how decisions are actually made, and makes those decisions easier, safer, or better.

That is a different kind of problem.

And it is where many AI systems quietly fail.