Writing

Thinking out loud, carefully.

I write about AI, decision systems, leadership, parenting, and long-term thinking — sometimes separately, often interleaved.

Production Has Opinions

The bottleneck in AI is rarely the algorithm. It's everything around it: the data, the workflows, the ownership, the incentives. Getting something to actually work in production is still surprisingly hard.

Why Good Systems Are Harder Than Good Models

In production environments, the hardest problems are rarely the models themselves. The real challenge is building systems that can support real decisions over time.

Starting from the Decision

After years of building AI in production, I stopped starting from the model and started from the decision. That shift changed how I approach every system I build.

What I Learned Leading Large AI Teams

The hardest problems in enterprise AI are almost never technical. After years leading data science and AI teams at scale, the biggest lessons came from people, incentives, and how decisions actually get made.

What Changed When I Became a Parent

How becoming a parent changed how I think about time, risk, ambition, leadership, and what it means to build a life alongside serious work.

Creating the Conditions for AI to Work

Most AI failures in organizations are not model failures. They are condition failures. The capability problem is largely solved. The bottleneck is whether the organization is set up to use AI, and most companies are still asking the wrong question.

The Data Problem No One Wants to Talk About

Most discussions about data in AI focus on quality. The harder problem is ownership: who controls the data, who is accountable when it is wrong, and what happens when two teams disagree about which version is right.

Evaluation Is the Hardest Part

Accuracy is a property of the model. Impact is a property of the system. Most evaluation is designed for the first one. In production, only the second one matters, and it is much harder to measure.

How I Think About Building AI Systems

The questions I have learned to ask before building anything, and why starting with the model is almost always the wrong place to start.

What I Would Do If I Were Leading an AI Initiative Today

Most AI initiatives fail before they fail. The outcome is largely determined in the first month, by framing decisions on scope, accountability, data, and evaluation that most leaders make too quickly or not at all.

Why I Had to Change How I Build AI Systems

The way I built AI systems two years ago no longer works. Not because I was doing it wrong then, but because the underlying technology changed in ways that made the old approach a liability.

What Breaks When You Actually Try to Build This

A lot of ideas in AI look straightforward until you try to build them. The failures aren't model failures. They're system failures, and they show up in specific, concrete ways.