Most Real-World Problems Are Decision Problems
Why the future of AI is less about single answers and more about helping people navigate sequences of decisions over time.
Writing
I write about AI, decision systems, leadership, parenting, and long-term thinking — sometimes separately, often interleaved.
The bottleneck in AI is rarely the algorithm. It's everything around it: the data, the workflows, the ownership, the incentives. Getting something to actually work in production is still surprisingly hard.
In production environments, the hardest problems are rarely the models themselves. The real challenge is building systems that can support real decisions over time.
What it takes to build AI products for real workflows, messy context, and everyday use.
After years of building AI in production, I stopped starting from the model and started from the decision instead. That shift changed how I approach every system I build.
Real personalization is not about recommendations. It is about holding context over time, earning trust, and building systems people are comfortable relying on.
Why trust, incentives, and system design matter more than models when building software people rely on for real decisions.
Most travel software is built to answer queries. Real trip planning is a sequence of decisions. That gap is what made it interesting enough to build something around.
Real estate is not a search problem. It is a lifecycle of connected decisions. The software has not caught up to that yet.
The hardest problems in enterprise AI are almost never technical. After years of leading data science and AI teams at scale, I found the biggest lessons came from people, incentives, and how decisions actually get made.
On direction, deliberate choices, and why even with a plan, the path rarely behaves the way you expect.
On identity, expectations, and the pressure to fit into a single definition of success.
How becoming a parent changed how I think about time, risk, ambition, leadership, and what it means to build a life alongside serious work.
On choosing proximity to change, and why building started to feel like the only way to stay true to the work.
Most AI systems do not fail because the model is wrong. They fail because they never become part of how decisions are actually made.
The next meaningful shift in AI is not just better answers. It is software that can carry context forward as decisions evolve.
Most AI failures in organizations are not model failures. They are condition failures. The capability problem is largely solved. The bottleneck is whether the organization is set up to use AI, and most companies are still asking the wrong question.
Most discussions about data in AI focus on quality. The harder problem is ownership: who controls the data, who is accountable when it is wrong, and what happens when two teams disagree about which version is right.
Accuracy is a property of the model. Impact is a property of the system. Most evaluation is designed for the first one. In production, only the second one matters, and it is much harder to measure.
The questions I have learned to ask before building anything, and why the model is almost always the wrong place to start.
Most AI initiatives fail before they fail. The outcome is largely determined in the first month by framing decisions about scope, accountability, data, and evaluation that most leaders make too quickly or not at all.
The way I built AI systems two years ago no longer works. Not because I was doing it wrong then, but because the underlying technology changed in ways that made the old approach a liability.
A lot of ideas in AI look straightforward until you try to build them. The failures aren't model failures. They're system failures, and they show up in specific, concrete ways.