The Geometry of Moats

March 2026

In 1947, Richard Hamming was waiting for his program to run.

He worked at Bell Labs, one floor below Claude Shannon and two doors down from John Tukey. The relay computers they depended on read instructions from punch cards: stiff paper rectangles with holes punched into precise positions. Each position was a bit: hole for 1, no hole for 0. Each card was a message. And the machines kept getting the messages wrong.

Hamming famously arrived on a Monday morning to find that his differential equations had failed only a few minutes into a run that had started on Friday night. The machine had detected the error and blinked its red light into an empty room; the operator had already left. And a halt was not even the worst case. Often the machine would continue, producing an answer that looked authoritative but was simply wrong. Confident and incorrect. You would not know until you had wasted an entire weekend of compute time and gone back to check the results by hand.

Hamming later said: “Damn it, if the machine can detect an error, why can’t it locate the position of the error and correct it?”

That frustration led him to one of the most elegant ideas in mathematics.

The key was to stop thinking of a message as text and start thinking of it as a point in space.

A punch card column was a binary choice — hole or no hole, 0 or 1. Each column added another choice. Two columns, two choices — and four possible messages: 00, 01, 10, 11. Now draw a square and put one value at each corner. Every single-bit error — flipping one 0 to a 1, or one 1 to a 0 — is a move along one edge of that square. The message doesn’t just change. It moves.

Three columns, three choices — eight possible messages. Eight corners: that’s a cube. Seven columns give you 128 possible messages, each sitting at one vertex of a shape in seven dimensions. You cannot draw it, but the math works the same way: every error is a step along an edge, and every step has a direction.
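The number of edge steps between two corners is what is now called the Hamming distance, and it is simple enough to compute directly. A minimal sketch in Python (the function name is mine):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length messages differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Every single-bit error moves a message along exactly one edge of the cube:
assert hamming_distance("000", "001") == 1  # one edge step
assert hamming_distance("000", "011") == 2  # two edge steps
assert hamming_distance("0000000", "1111111") == 7  # opposite corners in 7-D
```

The same function works unchanged for two bits, seven, or seven hundred: the geometry scales even though the drawing does not.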

This mattered because once Hamming could see messages as points in geometric space, he could arrange them so that valid messages were far apart from each other and errors revealed themselves by proximity. A corrupted message simply moves to the nearest valid point — like a marble rolling to the lowest point in a bowl.

Hamming’s discovery was not just a trick for early computers. It was a way of seeing. Problems that look impossible when flattened into two dimensions often become solvable once you allow more dimensions into the picture. Perspective changes, and hidden structure appears.

The same thing happens when you flatten a company.

I was at a conference last week where everyone was handed a badge with a blank space: “What I’m interested in…” Almost every one said the same thing. Agents. OpenClaw. AI. People wore their AI enthusiasm like a lapel pin. The energy was bright. But behind the badges, investors were nervous. The word I kept hearing was compression. Companies that once looked differentiated now looked uncomfortably close to one another. General-purpose AI models are improving so fast that every startup in every portfolio seems one API call away from irrelevance. Why fund a vertical software company if a frontier model could ship the same feature next quarter?

I have heard versions of this fear at three events in the past month alone. The concern is understandable, but the model behind it is incomplete. It makes perfect sense if you evaluate companies the way most investors do: across two or three dimensions.

Market size. Team pedigree. Technical moat.

In that space, almost every startup does look dangerously close to the point labelled “foundation model.” One new capability from Anthropic or OpenAI, and a company that looked differentiated yesterday sits right next to everything else on the map.

But companies do not live in three-dimensional space. They never have.

Consider a company that builds software for hospital procurement teams in Germany. On a flat map of “software” and “healthcare,” it looks trivially close to what a general model could do. But then add the dimensions the map ignores: the way German hospital procurement law differs across federal rules, Länder-level regulation, and EU directives; the three years of integrations with legacy systems built on top of older legacy systems, each one load-bearing in ways nobody fully documented; the institutional trust earned by showing up in Düsseldorf fourteen times before a single contract was signed; and the fact that the head of purchasing at a mid-sized Klinikum does not evaluate software the way a San Francisco product manager evaluates software, and never will.

Or consider a company building software for maintenance and parts planning across UK manufacturers. On a flat map of “AI” and “operations,” it looks easy to replicate. But then add the dimensions the map ignores: how plants actually run, how procurement decisions get made, supplier relationships built over years, ERP integrations no one wants to touch, the local habits of factory managers, union realities, audit requirements, and the cost of downtime if the system gets it wrong. In two dimensions, it looks generic. In reality, it is embedded in a very particular terrain.

You can already see this dynamic in real companies. Legora, for example, may look from a distance like “AI for lawyers.” But its value does not come from simply exposing a model to legal text. It comes from fitting into the actual terrain of legal work: precedent, document systems like iManage and SharePoint, and practice-specific workflows across litigation, M&A, tax, and banking. On a flat map, it looks close to the model. In a higher-dimensional one, it is building something the model alone does not provide.

In three dimensions, these companies can look fragile. In twelve, they are far harder to reach.

This is Hamming’s insight, applied not to punch cards but to portfolios. The distance between two points depends entirely on how many dimensions you are measuring. Investors who see everything collapsing toward a single point — “AI does this now” — are not wrong about what they see. They are wrong about the space they are seeing it in. They are projecting a twelve-dimensional landscape onto a two-dimensional map and then panicking because everything looks close together.
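The projection effect is easy to see with made-up numbers. Here, two hypothetical companies score identically on the three dimensions investors usually measure, yet differ on nine others; dropping those nine collapses a large distance to zero (all values are invented for illustration):

```python
import math

def dist(p, q):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# First three scores: market size, team pedigree, technical moat.
# The remaining nine: regulation, trust, distribution, and so on.
company_a = [0.8, 0.7, 0.6] + [0.9, 0.1, 0.8, 0.2, 0.7, 0.9, 0.1, 0.8, 0.3]
company_b = [0.8, 0.7, 0.6] + [0.1, 0.9, 0.2, 0.8, 0.1, 0.2, 0.9, 0.1, 0.8]

print(round(dist(company_a[:3], company_b[:3]), 2))  # 0.0  -- identical on the flat map
print(round(dist(company_a, company_b), 2))          # 2.06 -- far apart in full space
```

Projection can only ever shrink distances, never grow them, which is why a flattened map systematically overstates how crowded a market is.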

While thinking about this, I was reminded of something I read in On the Garden Against Citrini. Will Mandis describes the French formal gardens at Versailles as geometry imposed onto land. André Le Nôtre moved enormous quantities of earth to force the terrain to match the design. When the land resisted, the land lost. From above, the result is spectacular. But it only remains that way because hundreds of gardeners continue trimming, cutting, and correcting, preventing nature from reasserting itself.

For a long time, software companies worked in a similar way. Traditional SaaS was largely deterministic. The product defined the workflow, and companies were expected to adapt themselves to it. The categories were relatively legible. HubSpot for SMEs. Salesforce for enterprises. You could identify the buyer, define the process, and apply the software onto the organization like geometry onto terrain. The software imposed order, and the company bent to fit it.

AI changes this relationship.

AI is probabilistic, not deterministic. The same underlying model can be used for customer support, legal review, fraud detection, logistics planning, coding, tutoring, procurement, and a hundred other workflows. The same model does not produce the same product in practice. An engineer using Claude Code with a carefully structured prompt, a PM using it to frame a product spec, and a marketer using it to draft campaign ideas are all interacting with the same underlying capability, but they are effectively operating in different universes. The tool is the same. The workflow, expectations, and value created are not. Traditional software imposed itself onto messy organizations. AI does the opposite: it adapts to the terrain.

That is why thinking in more dimensions matters even more now. Once the same model can flow across many use cases, the important differentiators move elsewhere. Into regulation. Into trust. Into distribution. Into data rights, operational history, embedded workflows, local norms, internal champions, decision cycles, edge cases, and institutional memory. AI collapses dimensions related to cognition. But most durable companies operate across dimensions that have nothing to do with cognition.

The best companies I have seen are not Versailles. They are not neat designs imposed onto flat ground. They are shaped around terrain that already exists: the regulatory contours, the cultural drainage patterns, the institutional root systems that have been growing for decades. A general model can flatten two dimensions overnight. It cannot flatten twelve simultaneously, because most of those dimensions are not about raw intelligence. They are about presence, time, trust, and the particular strangeness of a particular place.

Which dimension of your company cannot be compressed into an API call?

That is the one to reinforce.