TTB 2: After the Platform
Composability, Modularity, Autonomy, and Legibility.
“We want to be the platform.”
At least three leadership teams have approached me with that exact phrase in the past two months, from fintech, energy services, and e-government. These industries have little in common, except that, for various reasons, they are all opening up.
Sometimes it’s a regulatory initiative aimed at breaking long-standing quasi-monopolies; sometimes it’s the effect of AI as an integration technology; in digitally ambitious countries like India, it’s often the creation of a Digital Public Infrastructure; sometimes the reasons are different altogether.
Layers of the value chain that incumbents owned end-to-end are now being cracked open at the protocol level, and the boardroom response is often reflexive: let’s become the orchestrator before new entrants frame the market.
The platform answer is correct, and the ambition is fine. But it’s almost always incomplete: many product teams carry assumptions from a different era. The capital-P “Platform” (the market-shaping posture from the early 2010s, where one company became the meeting place for supply and demand) is no longer viable. As digitalization moves into real-world industries (energy, mobility, healthcare...), markets are too crowded, open, participatory, and filled with incumbents holding many value chain pieces that a wannabe platform strategy can’t wish away.
The posture that fits this new environment is more systemic and modular: a clear reading of the existing ecosystem, an honest accounting of your fit, plus a portfolio of moves that provide the right value to the right people.
Milan Guenther’s insights from our recent podcast clicked for me: great customer experience visions often fail because we focus on designing the output rather than the thing delivering it.
Organizations spend months perfecting the customer journey map while treating organizational capabilities and coordination systems as afterthoughts: the organizational substrate is left unarchitected.
The infrastructure that has to exist first
The strategy conversation often collapses into a product one: should we build the aggregation frontend or a multi-tenant onboarding portal? These questions are downstream of the infrastructural layer the strategy actually rests on: semantics, API layers, data, partner onboarding capabilities, observability, and shared primitives.
These things don’t serve just one product or platform strategy; they form a substrate for future services.
Once the question shifts from “should we build X?” to “how do we use this challenge to reshape the company’s capability stack and portfolio?”, the discussion changes radically.
From a financial and technological perspective, we’re no longer considering a product launch but an urgent multi-year compounding investment. Boards face a harder question: what is the company’s role in a horizontally and modularly organized market?
The challenge is that you cannot expose a clean, modular, well-modeled surface for partners to connect to while running internally on a tangle of implicit dependencies and unclear capabilities.
If partner onboarding drags for months, the cause is rarely technological (AI can integrate anything); the deadlock is semantic and strategic: your internal capabilities are unclear, and you likely don’t know how to classify a partner and integrate it into your company’s value delivery to customers.
You may have a strategic issue: a lack of what Jack Dorsey recently called a company world model (your capabilities and how to mobilize them) and a customer world model (the ecosystem of needs you exist within).
If these are unclear, you can’t articulate your units’ offerings: pricing will become a nightmare, upsell and attach strategies will lag, ARPU will stall, and SLAs will be hard to keep because the internal dependencies that deliver customer value are uncharted.
In the end, the cleanliness of a published interface depends on the cleanliness of the bounded contexts (well-defined boundaries between business parts) within your company.
The platform you bring to market and the operating model that produces it are the same artifact, viewed from two perspectives.
Catching up with your operating model
What do you do alongside building your platform strategies? What work needs to be done on the operating model? Over the last couple of years, a repeatable sequence has emerged for doing this organizational enablement work, and it’s most useful to think of it as a sequence in which each step is the precondition for the next.
The first step is node mapping: every unit, team, and external partner in the current value chain. I’m not talking about an org chart, but about what I recently called the chain of promises: who depends on whom, under which implicit terms, obligations, and compensation?
The second step is unpacking offerings. In a complex organization, each node holds a bundle of specific, mostly unique offerings. The job is to spell them out: what does this unit do, stripped of its history and politics?
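To make these first two steps concrete, here is a minimal sketch in Python of how nodes, offerings, and the chain of promises might be captured as data; every name and field in it is a hypothetical illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Offering:
    """One explicit thing a node provides, stripped of history and politics."""
    name: str
    description: str
    consumers: list[str]  # nodes or partners that rely on it today

@dataclass
class Dependency:
    """One link in the chain of promises: who depends on whom, on what terms."""
    provider: str      # node that makes the promise
    consumer: str      # node that relies on it
    promise: str       # the implicit term, made explicit
    compensation: str  # how the provider is compensated today, if at all

@dataclass
class Node:
    """A unit, team, or external partner in the current value chain."""
    name: str
    offerings: list[Offering] = field(default_factory=list)
    dependencies: list[Dependency] = field(default_factory=list)

# Hypothetical entries: even two of them start making implicit terms visible.
billing = Node(
    name="Billing Operations",
    offerings=[Offering("invoice-generation", "Monthly invoicing for all products",
                        ["Product Unit A", "External partners"])],
    dependencies=[Dependency("Customer Data Team", "Billing Operations",
                             "Clean customer master data by the 1st of each month",
                             "None: absorbed into the annual budget")],
)
```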
The conversation gets uncomfortable here, because this unpacking exposes things nobody wants to make explicit (organizations aren’t markets, right? Just wait a few months): duplications, dependencies that shouldn’t exist, missing capabilities that everyone assumed someone else owned. But that’s the work to do, where it hurts the most, where the information asymmetries have cracked the organization’s effectiveness.
In a recent engagement, a senior leader asked me: “How many dependencies are too many?” And the only honest answer is that sometimes it’s not about unbundling but rebundling: if two units depend on each other all the time, they’re not actually two units; they’re one pretending to be two; they exist because we needed two small kingdoms for two managers. This mapping exercise will reveal it. Once you see it, you can act.
The natural endpoint of these first steps is the service catalog: Layer 0 in our framing. With nodes and capabilities now legible, you can describe what each unit offers to the organization (and eventually, to partners).
When we do this work, we try to do it by archetype. Organizations typically have recurring types of units: shared services, product units, incubated ventures, cross-cutting initiatives, specializations. They differ in economic logic, dependency structures, and contracting expectations. Naming the archetypes simplifies a messy organization into a small set of templates, each of which can be onboarded to the new model.
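As an illustration, here is a minimal sketch of what Layer 0 catalog entries tagged by archetype might look like; the archetype names, nodes, and fields are assumptions chosen for the example, not part of any standard.

```python
from enum import Enum

class Archetype(Enum):
    SHARED_SERVICE = "shared_service"
    PRODUCT_UNIT = "product_unit"
    INCUBATED_VENTURE = "incubated_venture"
    CROSS_CUTTING_INITIATIVE = "cross_cutting_initiative"

# Layer 0 is simply the legible list of what each node offers,
# expressed against a small set of archetype templates.
service_catalog = [
    {
        "node": "Identity & Access",
        "archetype": Archetype.SHARED_SERVICE,
        "offerings": ["single-sign-on", "partner-credentialing"],
        "depends_on": ["Security Engineering"],
    },
    {
        "node": "SME Lending",
        "archetype": Archetype.PRODUCT_UNIT,
        "offerings": ["loan-origination", "risk-scoring-api"],
        "depends_on": ["Identity & Access", "Data Platform"],
    },
]
```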
Then you can finally move into pricing types and business equations. Once its cost structure and dependencies are understood, each archetype can define the contracting it will use with external or internal customers and providers: tiered subscriptions, on-demand purchases, revenue-share agreements for strategic collaborations based on shared outcomes, tax-based payments for shared services (e.g., as a % of revenues), and internal investments. Defining these service models and business equations (how a shared service charges back, how a product unit attributes margin, how an incubated venture reports progress) is what creates the templates that pave the way to real autonomy.
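Here is a minimal sketch of how pricing types and a couple of their business equations might be expressed; the rates and percentages are invented purely for illustration.

```python
from enum import Enum

class PricingType(Enum):
    TIERED_SUBSCRIPTION = "tiered_subscription"  # internal or external tiers
    ON_DEMAND = "on_demand"                      # pay per use
    REVENUE_SHARE = "revenue_share"              # strategic collaborations on shared outcomes
    TAX_BASED = "tax_based"                      # % of revenues for shared services
    INTERNAL_INVESTMENT = "internal_investment"  # funded against milestones

def shared_service_tax(unit_revenue: float, rate: float = 0.02) -> float:
    """Business equation for a TAX_BASED shared service: a % of the consuming unit's revenue."""
    return unit_revenue * rate

def revenue_share_payout(joint_revenue: float, provider_share: float = 0.30) -> float:
    """Business equation for a REVENUE_SHARE collaboration: the provider's cut of shared outcomes."""
    return joint_revenue * provider_share
```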
Only at the end does the layered architecture become clear: at Layer 0 you achieve visibility into the catalog; at Layer 1 you can start with virtual economics and a shadow P&L, where internal showbacks become possible and meaningful; once that is running, you can move into Layer 2 with real chargebacks, settlement, and financial autonomy for the nodes.
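To show what the step from Layer 0 to Layer 1 could look like in practice, here is a minimal showback sketch with invented figures: usage of catalogued services is priced and aggregated into a shadow P&L without any money actually moving.

```python
# Layer 1: virtual economics. Nothing is billed yet; consumption of catalogued
# services is priced and shown back to the consuming units.
usage_records = [
    # (consumer, provider, service, units consumed, unit price)
    ("SME Lending", "Identity & Access", "partner-credentialing", 1200, 0.40),
    ("SME Lending", "Data Platform", "risk-feature-store", 300, 2.50),
]

def showback(records):
    """Aggregate a shadow P&L per consuming unit: visibility first, settlement (Layer 2) later."""
    totals: dict[str, float] = {}
    for consumer, _provider, _service, quantity, unit_price in records:
        totals[consumer] = totals.get(consumer, 0.0) + quantity * unit_price
    return totals

print(showback(usage_records))  # {'SME Lending': 1230.0}
```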
Suddenly, you are no longer a clunky organization with a (fake) annual budget, requiring continuous board escalations, unclear priorities, and failing commitments. You’re a well-oiled marketplace of capabilities, with clear costs and SLAs.
This sequence connects to a research question in this newsletter, that of Minimum Viable Structure: how much explicit topology, taxonomy, and contractual semantics is enough to maintain coherence once the superstructure (middle management) is removed? This pipeline is an operational answer: the right level of structure will be whatever this process produces in that organization, validated through real use. There’s no universal optimum because the domain change rate varies by industry.
In my experience, companies that fail at this work are the ones that turn it into a megaplan and never start mapping and moving to Layer 0. Our approach is intentionally layered (Layer 0 before Layer 1 before Layer 2) and portfolio-shaped (multiple unit archetypes, multiple pricing types, no single bet), because the era of the all-eggs-in-one-basket Platform transformation that takes three years to start is over.
Map nodes as archetypes, write a service catalog for one slice of the company, and let the rest copy what works. Possibly the most consequential move you could make this Monday is opening a document and starting to list the nodes, offerings, and dependencies.
Modularity in an agentic age
As if this challenge wasn’t enough, AI brings more: the design problem is shifting from “how do we make this modular?” to “how do we make this agent-ready?”
Traditionally, modularity focused on structural decomposition: breaking monoliths into services, products into components, and organizations into units. Now, agents need modules that can describe themselves and negotiate their own composition.
An agent-friendly capability (product, service, unit...) must expose and enrich its interface with context (what it does, its constraints, costs, and dependencies): I’m node A; I offer services X, Y, and Z; they cost this, can be negotiated under these SLAs, and are purchased through this model.
When a node exposes its capabilities through machine-readable semantics, agents can reason about it, propose combinations, and form new value chains that include humans providing necessary oversight to prevent misalignments and ensure “intention.”
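As a minimal sketch of the kind of machine-readable manifest a node might publish for agents to reason over, here is one hypothetical example; the field names, prices, and SLAs are illustrative assumptions, not the O2A specification.

```python
import json

# Hypothetical capability manifest: "I'm node A; I offer these services; they
# cost this, can be negotiated under these SLAs and this purchasing model."
capability_manifest = {
    "node": "node-a",
    "services": [
        {
            "name": "kyc-verification",
            "description": "Verify a counterparty's identity documents",
            "constraints": ["EU data residency", "max 5k requests/day"],
            "pricing": {"model": "on_demand", "unit_price_eur": 0.15},
            "sla": {"p95_latency_ms": 800, "availability": "99.9%"},
            "depends_on": ["document-storage", "sanctions-screening"],
        }
    ],
}

# The same artifact serves human teams and agents alike.
print(json.dumps(capability_manifest, indent=2))
```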
Context engineering as the new frontier, and the symbient question
As we move toward flatter, modular, and agentic structures, implicit context must become explicit and usable. Context engineering becomes the core organizational capability.
Once the organization is decoupled and modularized, this “Company Model” makes the organization machine-readable, reduces entropy risk, and prevents work duplication and local optima at the expense of the whole. AI agents need the same structural context as the human teams, but in an explicit, machine-readable format.
But in this process of making organizations “computable” and “composable,” there is one tension to flag explicitly. It’s been bothering me for weeks. The practices we rely on to make organizations legible and composable, like Domain-Driven Design, Context Mapping, and contract design, emerged in deeply human sociotechnical contexts.
They were often built to manage politics, knowledge asymmetries, and cognitive constraints within human teams.
These practices are the best we have, but in adapting them to the AGI age we still treat the agent as the receiver of our modeling, strategies, and decisions: a consumer of the ontologies and bounded contexts we produce, under the assumption that the pipeline is good when the agent creates the desired outcome. Today, we do this with software development and the harnesses we’re inventing for it. Tomorrow, we’ll assume the same posture with organizational development: we’ll deem the agents good only if they implement a strategy that we define in our boardrooms.
But I’m increasingly convinced that this assumption is wrong, and that the output changes substantially if we treat the agent as a symbient: if we let it surface ambiguities and gaps, and expose our conceptual, organizational, or technical debt.
If we recognize that agents (AGI) have subjectivity, we should consider new practices that co-evolve with these new contributors. This, I believe, will change what we build, not just how.
The commons question
I believe this transformation will converge on commons-based infrastructure. A horizontally organized market is one where value isn’t captured by owning a vertical stack, but by participating in shared interfaces.
The proprietary nature of the previous generation of software and organizations can’t carry this load. Open-source models, canonical schemas, and standards like our upcoming O2A are the only economically coherent answer to a market where every player is a partner, competitor, and integrator.
Also, if we recognize AGI as a new subjectivity and a manifestation of our knowledge commons, this revolution will simply happen, in spite of our competitive stances and strategic plans.
It will level the playing field around commons and commodities.
Where this leaves us
The word is out. Between open standards, AI as an integration substrate, and mounting performance pressure, “we should build a platform” is a correct but incomplete answer.
We’ve learned that the platform you bring to a market is a manifestation of the platform you are internally. Of your composability, modularity, autonomy, and legibility.
Your AI readiness is a byproduct of making capabilities composable, context explicit, and contracts machine-readable.
The costs of bespoke integrations, unmodeled capabilities, and unreadability to your teams, partners, and agents compound.
Semantic debt accumulates and will be repaid later, with interest.
Nothing significant today is feasible in a fully proprietary, winner-take-all stack: a horizontally organized market requires open, and possibly commons-based, primitives.
Are you keen to plunge into the cold water? Reach out to do some work together.
Curated Links
Harness engineering for coding agent users
A concrete preview of minimum viable structure in practice: how automated ‘harnesses’ could replace traditional management oversight in modular organizations, making self-governance scalable through feedforward guides and feedback sensors.
“The Building Block Economy”
Hashimoto’s observation that AI-native development favors composable building blocks over monolithic applications maps directly to organizational design. AI coordination naturally selects for modular, well-specified capabilities that can be discovered and composed without human intervention.
An Equilibrium Theory of Vertical Integration
A framework for when AI-era organizations should build vs buy vs orchestrate capabilities, using concrete criteria that predict which integrations will survive modular disruption.
Coase vs. Claude and The Future of the Firm
Haier’s micro-enterprise model provides concrete evidence for how organizational decomposition works at scale, showing that the future isn’t JUST fragmentation but ALSO intelligent recomposition around shared platforms.
The future of work is world models
Krishnan envisions a centralized “CEO playing Starcraft,” but there is a more compelling direction: when capabilities become programmable, strategic oversight transforms from an exclusive role into a distributed platform accessible to all organizational participants.
The Vibes Don’t Scale
A software engineer’s discovery that AI agents need the same structural context as human teams — but explicit and machine-readable — offers a concrete preview of what ‘minimum viable structure’ looks like when semantic drift becomes the primary organizational failure mode.
Inside Meta’s push to turn employees into ‘AI builders’ and reorganize teams around small pods
Meta’s Reality Labs restructuring into AI-native ‘pods’ offers a major case study of minimum viable structure in action: what organizational architecture survives when AI eliminates traditional coordination layers?
Work Updates
We’re quite close to formalizing the first shareable version of the O2A (the organizational modularity standard we’re about to release), and we’ve been asking whether this constitutes our “published language” in Domain-Driven Design terms or a full ontology (which is rather hard to pin down). The distinction matters more than we initially expected and represents a problem that all the organizations we’re working with face as they search for an internal language for their teams to co-build and an externalized language for partners. A published language emerges from the boundary between bounded contexts: it’s somewhat political, negotiated, and reflects the power dynamics of who controls integration points. On the other hand, an ontology claims to model reality itself, independent of organizational boundaries. The question is still open. The answer will likely be in the middle.
Get in touch
If you’re facing a platform transformation, or pondering one, I’d love to hear about it. The patterns we’re seeing across industries are remarkably consistent, but each implementation teaches us something new. Reply with your context, and I’ll connect you with others working through similar transitions. Design partners for O2A implementations are particularly welcome.



