TTB 1: What is an Organization today?
Introducing "Through The Boundary"
In 1937, Coase asked, “Why do firms exist?”
His answer was that transaction costs make internal coordination cheaper than market negotiation. It earned him a Nobel Prize and marked the start of decades of applying transaction-cost theory to the study of organizational development.
Ninety years and several S-curves later, we lack a usable language for describing what an organization is as an operable system.
We have ERP for resources, CRM for relationships, PM tools for tasks, and OKR frameworks for alignment. We have org charts (often describing reporting, not actual work), process maps (describing flows, not structure), and strategy decks (focusing on aspirations). None of these tells you the essential elements of an organization and their relationships.
For decades, the gap has been tolerable. Organizations compensated with layers of middle managers to carry the missing context - who does what, who owes what to whom, which products depend on which capabilities, and the constraints. The knowledge lived in people’s heads, and that was sufficient.
It’s no longer sufficient. When AI becomes the coordination technology, agents need to understand the commitment structure, dependency map, and capability boundaries. Without explicit organizational semantics, AI becomes an entropy multiplier.
Agents cannot consume the implicit context that kept organizations together for a century.
I’ve been writing for almost 20 years, leading to the Platform Design Toolkit and founding Boundaryless. I’ve been quiet for a few months, reorganizing my research questions and the work we do at Boundaryless. I feel we are passing through a boundary: between the organization we inherited and whatever comes next. So I’m writing as we cross it.
This newsletter — Through the Boundary — feels like the beginning of a new exploration, but one my work was designed for. It’s a working thesis developed in public, with genuine open questions.
The Organization, as I understand it, is four intertwined things: a topology, a taxonomy, a shared context to be engineered, and a chain of promises and dependencies.
If you strip an organization to its operative essence - below the strategy decks, org charts, and mission statements - what remains? After years of work with organizations across industries and scales (healthcare, industrial measurement, financial services, urban mobility, enterprise software), I’ve come to think the answer is four deeply intertwined things.
The first is a topology. Who does what? The nodes, teams, units, their boundaries, domains of responsibility, and archetypes.
James D. Thompson’s 1967 Organizations in Action established a key insight in organizational theory: there’s no universally optimal organizational structure. Structure is contingent - it depends on work type, technology, and interdependencies. The implication is that topology isn’t a one-time design choice but a variable that must adapt as work, technology, and interdependencies change. In the same year, Mel Conway observed that a system’s structure mirrors the communication structure of the organization that builds it. Conway’s Law has since been validated in software engineering, and its corollary is: if your products are to be modular and composable - especially with a new coordination technology - your organization must be modular and composable.
Functional hierarchies organized by discipline rather than value produce monolithic systems and thinking. It’s too late for functional structures. The current reality demands node-based topologies: small, domain-aligned units with clear boundaries, simple to reconfigure as work changes. Simon Wardley built a mapping practice around the idea that components evolve through maturity stages and their position in the value chain determines strategy.
These perspectives converge: to understand an organization, you need to know the pieces, their dependencies, and how that structure maps onto the architecture of their output.
The second is a taxonomy. What the organization’s products are and their relationships.
A product (or service) is seldom isolated: it often runs on, extends, bundles, or componentizes into something.
These structural relationships encode the logic of portfolio coherence. A consulting service that repeatedly solves the same client problem is a candidate for productization; a software module that every integration partner needs is a candidate for platformization. Wardley describes this maturity journey as the ILC cycle: Innovate (bespoke, client-specific) → Leverage (reusable, packaged) → Commoditize (standard, self-service).
Without an explicit taxonomy, the organization cannot see how the asset system is progressing; the portfolio becomes unclear to itself and any agent that could help recompose it.
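To make this concrete, here is a minimal sketch of what an explicit product taxonomy could look like as a data structure. The relationship types come from the text (runs on, extends, bundles, componentizes into); the product names, stages, and API are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass, field

# Relationship types drawn from the text: a product can run on,
# extend, bundle, or componentize into another offering.
RELATIONS = {"runs_on", "extends", "bundles", "componentizes_into"}

@dataclass
class Product:
    name: str
    stage: str  # maturity stage, e.g. "innovate", "leverage", "commoditize"
    relations: list = field(default_factory=list)  # (relation, target_name)

class Taxonomy:
    def __init__(self):
        self.products = {}

    def add(self, product):
        self.products[product.name] = product

    def relate(self, source, relation, target):
        assert relation in RELATIONS, f"unknown relation: {relation}"
        self.products[source].relations.append((relation, target))

    def dependents_of(self, name):
        """Which offerings would be affected if this product changed?"""
        return [p.name for p in self.products.values()
                if any(target == name for _, target in p.relations)]

# Hypothetical portfolio: a bespoke service running on a reusable module.
tax = Taxonomy()
tax.add(Product("integration-module", stage="leverage"))
tax.add(Product("advisory-service", stage="innovate"))
tax.relate("advisory-service", "runs_on", "integration-module")

print(tax.dependents_of("integration-module"))  # -> ['advisory-service']
```

Even a toy model like this makes the portfolio queryable: an agent (or a human) can ask which offerings depend on a given component before proposing a recombination.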
The third is a shared context. The representation of the organization’s operational world, including ecosystem relationships, client problems, and domain language: the rationale behind the objectives.
This is the ontology of the value the organization produces: what are the customer needs? Who are the partners and how do they participate? It’s what traditionally lives in the heads of experienced people, the mythical domain experts, and it’s where strategic intent gives purpose and direction to the organization’s capabilities.
I often face these challenges: recently, we ran a context-mapping exercise with a large industrial client with several teams building interconnected software products. We discovered no shared understanding of dependencies across teams. Each team could describe its domain precisely, but the overlap logic was invisible, carried informally, or duplicated and misaligned. The gap was in context awareness: no shared representation of how the pieces fit together existed. The same gap is in every organization we’ve worked with recently, from industrial data measurement to financial services: the context remains implicit. It’s an institutional memory (often at the top), not a queryable artifact. This problem is evolving today as GenAI enters and transforms the space.
This is what I believe Dorsey recently called a “customer world model” in his seminal piece, “From Hierarchy to Intelligence.” In my experience, this is not a single unified language but multiple linguistic layers coexisting, each serving a different purpose within a bounded context. In these small bubbles, semantic rigor is essential, and AI can support prototyping, operations, and development. Then there is a more inter-contextual space, where language should focus on navigation and interconnectivity to enable longer, complex inter-domain workflows as stakeholders move across systems.
These ontologies shouldn’t be produced top-down with single-document specifications. Even if coherent, these upfront artifacts wouldn’t endure real-world revision.
Anyone working in agentic programming - a precursor of agentic organizing - knows that context engineering, “the work of selecting, structuring, and continuously updating the information that reaches an AI model when it performs a task”, is essential to get agents to do what we intend.
After all, we have decades of experience in domain modeling. From Eric Evans’s DDD work in 2003 to Brandolini’s Event Storming practice, the field has been converging on one lesson: ontological assumptions specified before operational validation are expensive to maintain. Making context usable requires it to be lightweight and validated through use. Add the rapid evolution of agentic capability, and you’re also required to progressively graduate the patterns through which users experience your solutions.
The guiding principle could be called Minimum Viable Semantics:
First, model only what’s necessary to avoid ambiguity at the boundary between bounded contexts (for navigation, inter-domain consistency…).
Then, validate everything else through practical use.
Graduate to standard/specification only when the pattern is consistent.
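As an illustration of Minimum Viable Semantics, here is a small sketch in which only boundary-crossing terms are formalized up front, while internal terms graduate to the shared vocabulary through observed use. The context names, terms, and graduation threshold are all hypothetical:

```python
# Minimum Viable Semantics sketch: formalize only the terms that cross
# a boundary between bounded contexts; validate everything else through
# use and graduate it only when the pattern is consistent.

boundary_terms = {
    # Term shared between "sales" and "fulfillment": must be unambiguous.
    "order": {"owner": "sales", "consumers": ["fulfillment"],
              "status": "specified"},
}

candidate_terms = {
    # Internal term, used only inside one context: not yet formalized.
    "lead_score": {"owner": "sales", "consumers": [], "uses": 0},
}

GRADUATION_THRESHOLD = 10  # arbitrary: "consistent use" proxy

def record_use(term):
    """Track real usage; promote a term once its pattern is consistent."""
    candidate_terms[term]["uses"] += 1
    if candidate_terms[term]["uses"] >= GRADUATION_THRESHOLD:
        spec = candidate_terms.pop(term)
        spec["status"] = "specified"
        boundary_terms[term] = spec

# Simulate repeated use until "lead_score" earns a specification.
for _ in range(GRADUATION_THRESHOLD):
    record_use("lead_score")

print(sorted(boundary_terms))  # -> ['lead_score', 'order']
```

The mechanism matters more than the data structure: nothing becomes a standard by declaration, only by surviving contact with real work.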
The “customer world model” needs a connection with the organizational layer and the ontology of the organization’s capabilities: the units, components, service catalogs, and recombination rules that define what the organization can do and how its Lego blocks can be reconfigured. Dorsey calls it the “company world model”; we call it topology and taxonomy.
The company world model is how the company understands itself and its own operations, performance, and priorities, replacing the information that used to flow through layers of management. The customer world model is the per-customer, per-merchant, per-market representation …
From Hierarchy to Intelligence - Jack Dorsey
When these two dimensions are explicit and coherent, an organization becomes far more able to perceive change in its environment and meaningfully reconfigure itself in response.
The fourth is the chain of dependencies and promises. In this view, a customer outcome results from multiple nodes making and keeping commitments to one another - held together by contracts, explicit or implicit, not by hierarchy. Mark Burgess’s promise theory frames this precisely: value delivery is interdependent, and each node must have the agency to make its own commitments. Nodes can fail, and robustness requires alternatives rather than escalation. When we structure multi-unit contract architectures for clients running complex multi-entity operations, the contracts are not mere administrative instruments: they are alignment and coordination mechanisms that encode who owes what to whom, under what conditions, and with what consequences for success and failure.
“Contracts over budgets” shouldn’t be seen as an economic efficiency argument but as a structural prerequisite for composability. Contracts set the conditions under which units can recombine without asking a central authority. This changes the idea of coordination: from imposed by hierarchy to emerging from the network of commitments. Even if GenAI drives the recombination, as in Dorsey’s vision, the rules, interfaces, and contracts needed to mobilize capabilities must be clear and available; otherwise, we’re just prototyping a situation where AIs enact organizational changes without any observability, and where humans are mere flesh-made cogs operated by an intangible intelligence.
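A promise in this sense can be sketched as a machine-readable record: who owes what to whom, under what conditions, and with what fallback on failure. The following Python sketch uses invented node names and fields purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Promise:
    promiser: str    # the node making the commitment
    promisee: str    # the node relying on it
    body: str        # what is owed
    condition: str   # under what conditions the commitment holds
    on_failure: str  # fallback or consequence, not escalation

def uncovered_steps(promises, outcome_steps):
    """Return the steps of a customer outcome that no node has promised."""
    promised = {p.body for p in promises}
    return [step for step in outcome_steps if step not in promised]

# A hypothetical two-node chain behind one customer outcome.
promises = [
    Promise("data-team", "product-team", "daily-metrics-feed",
            condition="business days", on_failure="serve cached feed"),
    Promise("product-team", "customer", "dashboard-uptime",
            condition="99.5% monthly", on_failure="service credit"),
]

steps = ["daily-metrics-feed", "dashboard-uptime", "support-response"]
print(uncovered_steps(promises, steps))  # -> ['support-response']
```

The point is not the code but the query it enables: any step of a customer outcome that no node has committed to is a coordination gap, visible before it fails.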
When these four elements - topology, taxonomy, shared context, chains of promises - are explicit and operable, the organization can adapt. Any participant, whether a team, partner, or AI agent, can understand the structure, propose new combinations, and negotiate participation. When they are implicit and dispersed, the organization is fragile. AI makes it more fragile, faster, because it amplifies the consequences of semantic drift at machine speed.
If these four elements fundamentally define an organization, what does AI actually change?
Structure and superstructure
I think AI doesn’t change an organization’s key elements; rather, it shifts the balance between two things.
The first is what we could call superstructure: (often hierarchical) management, (often bureaucratic) processes, and middle managers as context translators. This is what AI eliminates the need for. Agents can coordinate and translate context better than humans, manage information, and surface the right data at the right moment - all functions that previously required human intermediaries.
The structure, on the other hand - the unit topology, product taxonomy, contract types, and shared semantics about the value the organization produces; the skeleton itself - is made even more necessary by AI, because it is the foundation for agents to operate predictably, explainably, and observably. An agent managing a contract needs to know the existing rules, nodes, and terms. An agent composing new service offerings needs to know the available capabilities, unmet customer needs, dependent offerings, and viable combinations. Without this foundation, agents drift and hallucinate.
Contemporary software systems are not built by writing code, but by engineering contexts that produce correct code. I believe organizations will be built similarly, not by managing people, but by building shared contexts (organizational setups, constraints, and semantic models) that produce coherent coordination. If this holds - and the work with clients and on our products suggests it does - then the understanding of the organizational context and the engineering of it is not a support discipline but synonymous with organizing itself: if an organization is a system of people and agents converging around shared problems, then the representation of those problems, structures, and procedures is the organization’s core.
Six unanswered questions
As I launch this newsletter, I want to be transparent about what I don’t know: these are the research questions guiding my work, conversations, and product development at Boundaryless.
Each newsletter issue will explore a facet of one or more of these.
What is the minimum structure an organization needs? AI eliminates the need for superstructure but makes structure more necessary. How much is enough? Too much recreates bureaucracy in semantic form with rigid ontologies that nobody maintains. Too little, and teams and agents drift without shared reference, optimizing for their own local context. Each layer of explicit structure has a production and maintenance cost: formalizing a domain model is not a one-time investment but a continuous curation effort, because context rots as domains evolve. This means the optimal level of structure depends on the rate of domain change. Stable domains (e.g., organizational setup, contracting) tolerate more formalization because context degrades slowly. Fast-evolving domains (e.g., AI tooling, market needs, novel products) tolerate less because context degrades before it can be amortized. Finding the curve of cost versus benefit - and understanding how it varies by domain velocity - is a key design question for the next decade.
On which layers does AI favor consolidation, and on which does it favor distribution? I originally thought AI would further fragment the market: if a technology makes coordination easier, why wouldn’t it reduce the minimum efficient scale of independent capabilities in the firm? Coasean logic suggests that when transaction costs fall, firms shrink as more activity moves to markets. But this framing (centralization versus decentralization, consolidation versus fragmentation) treats these as competing forces on the same axis. I think the picture is more interesting: AI makes coordination cheaper unevenly, dramatically reducing the cost of coordinating codifiable, routine knowledge, but creating contextual, domain-specific knowledge usable by AI (context engineering) remains a costly, human-intensive activity. That cost doesn’t go to zero as models improve; it regenerates at each new AI capability level, because each capability opens new use cases requiring context work. The implication is a dual motion: centralization succeeds at the infrastructure and coordination layer (shared processes, codified knowledge, standard operations) while distribution persists at the application layer (domain-specific context, customer proximity, local judgment) - which makes the case for organizations to build a dual platform-portfolio motion, as I explained months ago. And there is a temporal dimension: context rots over time, limiting centralization at scale. Beyond a point, the cost of maintaining centralized context coherence grows faster than the advantage it delivers. The new boundary between firm and market will probably depend on where the boundary sits for a context that is relevant enough to attract and cohere, and not too expensive to maintain.
Does context engineering become ecosystem engineering? If context is easy to create, what makes it valuable? The current hypothesis: the real attractor is continuous context engineering capability. In the deterministic world, ecosystems were enabled by APIs: rigid, typed interfaces. In the agentic world, ecosystems can be enabled by what I call context interfaces: semantic, non-deterministic surfaces for agents and humans to build adaptively. If this is true, context engineering evolves into ecosystem engineering, and the platform theory changes: an API-based platform has network effects tied to integrations; a context-based platform has network effects tied to shared semantic richness AND semantic shareability, but only if that richness is maintained.
How do you design for failure in a composable world? In an organization built on dynamically connected chains of promises, node failure is the norm. When a capability node in these chains cannot deliver on its commitment, the entire value chain is at risk. Can the resilience patterns of distributed systems (circuit breakers, fallback, death and rebirth) apply to organizational design? In a hierarchical organization, when something fails, management intervenes. In a composable organization based on contract chains, who intervenes when a promise breaks? Nodes can fail obviously (failing to deliver) or silently, delivering something that looks correct but is strategically misaligned (it’s easy to be productive!) - the organizational equivalent of an AI hallucination: confident, plausible, nice… but pointless.
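One of the distributed-systems patterns mentioned above, the circuit breaker, translates almost directly. The sketch below (with invented unit names and an arbitrary failure threshold) stops routing work to a node after repeated broken promises and switches to a declared fallback instead of escalating:

```python
# Circuit-breaker sketch for promise chains: after repeated broken
# promises, stop sending work to a node and use its fallback, rather
# than escalating up a hierarchy. Threshold and names are illustrative.

class PromiseBreaker:
    def __init__(self, node, fallback, threshold=3):
        self.node = node
        self.fallback = fallback
        self.threshold = threshold  # consecutive failures before opening
        self.failures = 0
        self.open = False  # open circuit = stop routing to the node

    def record(self, delivered: bool):
        """Record one promise outcome; open the circuit on repeated failure."""
        if delivered:
            self.failures = 0  # a kept promise resets the count
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True

    def route(self):
        """Return where the next piece of work should go."""
        return self.fallback if self.open else self.node

# A hypothetical unit that breaks its promise three times in a row.
breaker = PromiseBreaker("internal-legal-unit", fallback="external-counsel")
for _ in range(3):
    breaker.record(delivered=False)

print(breaker.route())  # -> 'external-counsel'
```

Note the limit of the analogy: a breaker like this only catches explicit non-delivery. The subtler failure mode, a deliverable that looks correct but is strategically misaligned, needs a judgment signal that a boolean cannot carry.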
What should be common and what proprietary? AI needs shared context to coordinate, and shared context generates semantic network effects: the more actors adopt a common language, the more useful it becomes, and the more effective AI is at composing across boundaries. Ontological elements like languages, taxonomies, and shared models are non-rival goods (they are not depleted by use and gain value as more people use them). This means that the productive logic of AI may itself select for the commons, at least for now, at the language layer. The organization that keeps its schemas proprietary deprives itself of network effects; the organization that opens its grammar layer maximizes the surface for AI and the ecosystem. The open-core pattern emerges as the only viable one from this logic: open grammar (schemas, contracts, workflow primitives), proprietary engine (identity resolution, settlement, entity biography accumulation). But open grammars aren’t free to maintain: semantic commons need active governance; someone must curate, update, and prevent context rot in the shared standard. The friction may be between an open, static specification anyone can fork (“I’m just making it open so anyone can build on it”), and a curated, actively governed specification that evolves with the domain and aggregates various stakeholders. The curation of the living grammar becomes a source of value and a form of institutionalized labor someone must pay for. Whether Ostrom’s principles for governing natural resource commons apply to semantic commons is an open question.
If the organization is its context, what’s its identity? In a world where agentic production makes everything composable and the cost of production approaches zero, identity may emerge from what you choose not to do or to do in a specific way. Intentional constraints (ethical, sustainability, domain-specific) may become the primary source of differentiation. Since the ontological layer (languages, taxonomies, shared models) is a non-rival common good, organizational identity cannot reside in the possession of that language. Anyone CAN use it, as many as possible SHOULD use it. Identity shall reside in the unique configuration an organization imposes on shared primitives, in the constraints it chooses, the trade-offs it accepts, and the patterns it prefers. But context rot introduces a temporal dimension: if identity is based on meaning (on semantic configurations), and context degrades, then identity requires continuous gardening. Stop that, and your identity drifts into incoherence, leaving no one able to articulate why the organization exists. A sustainability constraint declared in 2024 means something different in 2026 as supply chains shift, regulations evolve, and new materials emerge. The constraint must be continuously reinterpreted and reoperationalized. Your organization will always have to take a stance: what is your “signature” in that configuration? How do you make it legible (to yourself, to partners, to agents) when it is always in motion? Gosh…I’m already tired, and we haven’t even started.
What will this be
The newsletter starting today - Through the Boundary - won’t be a thought leadership exercise but more of a working journal. Each issue will tackle one of these questions - or a new one that emerges from conversations and practice. The method combines theory (organizational, economic, systems), client case studies from Boundaryless, conversations with thinkers and practitioners I’m learning from, and the development of models and tools.
If you work in an organization facing these tensions - the gap between strategy and operations, the absence of shared semantics, the fragility of implicit coordination - I want to hear from you. Which questions resonate? Which ones am I missing?
If you have a perspective on any of them, please reply or comment, as this isn’t meant to be a one-sided discussion.
Curated Links
I’ve been curating reads and podcasts for almost 15 years, so this newsletter will often contain curated links:
From Hierarchy To Intelligence
Probably one of the most interesting pieces of organizational strategy we have read in the last decade, Jack Dorsey shares his vision of “intelligence over hierarchy” through Block’s future organizational model. It’s incredible, but the research conversation reveals two critical contradictions that actually strengthen rather than challenge our core thesis on executable organizations. Jack argues that behavioral data can replace explicit semantic alignment, but his description of Block’s “world models” reveals that if Block interprets transaction data, it will do so through implicit ontologies. Similarly, his claim that AI automation eliminates the need for contracts contradicts his own description of capabilities with “reliability, compliance, and performance targets,” which are precisely contractual specifications in machine-readable form.
Jobs and AI: Chains of work
The concept of humans moving from “in-the-loop” to “overseeing the loop” captures how AI forces organizations to redesign around three critical functions: chain design, process oversight, and output judgment.
Services: The New Software
AI transforms software tools into outcome-based services, exactly the shift from managing capabilities to orchestrating results that requires a new, more robust contractual infrastructure.
Context Engineering: Why Hayek’s Knowledge Problem Survives AI
Grab a cup and go through this: it provides an economic analysis of why context becomes the primary organizational bottleneck in an AI-native world, validating our thesis that programmable organizations need semantic infrastructure.
Company as Code
Clay Parker Jones (org design at Airbnb) offers a concrete technical vision for what happens when organizational design becomes executable infrastructure rather than just documentation: the logical endpoint of programmable organizations.
Software Gets Personal For Organizations and Teams
Fabien Girardin and Lisa Gansky explore the organizational implications of AI, which turns every employee into a software creator.
AI just gave you superpowers — now what?
Christian Catalini’s “Some Simple Economics of AGI” can help you imagine the potential of widespread AGI to eliminate management superstructures and transform employees into verifiers who need to use judgment more than productivity.
Work Updates
We’ve been experimenting with multi-unit contract structures to achieve strategic customer experience-related OKRs: the idea is that customer experiences could serve as the primary revenue stream, with the multiple units involved negotiating explicit revenue splits. This validates the chargeback approach we wrote about months ago: organizations can transition from bureaucratic to market-based coordination without breaking existing operations.
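As a toy illustration of such a revenue split, here is what the negotiated allocation could look like once made explicit. The unit names and percentages are invented for the example:

```python
# Minimal revenue-split sketch for a multi-unit contract: the customer
# experience is the revenue stream, and the contributing units hold
# explicitly negotiated shares of it.

def split_revenue(revenue, shares):
    """Allocate revenue across units according to negotiated shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {unit: round(revenue * share, 2) for unit, share in shares.items()}

# Hypothetical customer experience delivered by three units.
allocation = split_revenue(100_000, {
    "platform-unit": 0.40,  # shared infrastructure the experience runs on
    "delivery-unit": 0.45,  # customer-facing work
    "support-unit": 0.15,   # ongoing service
})
print(allocation)
# -> {'platform-unit': 40000.0, 'delivery-unit': 45000.0, 'support-unit': 15000.0}
```

Making the split executable is part of the point: it turns the agreement from an accounting convention into a coordination mechanism the units can inspect and renegotiate.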
Another key learning is that, to transition from top-down yearly budgeting to distributed autonomy and P&L, a viable approach is to start with shared services and overheads first: everyone uses them, and market pricing references exist. Customer-facing experiences with clear revenue attribution can then help bridge the gap between bottom-up and top-down approaches.
We’re also discovering that identifying dependencies and service types often comes before implementing chargebacks: organizations need visibility before they can price internally.
Get in touch
If you work in an organization that can’t reliably answer questions about roles, responsibilities, or agreements; where units struggle to commit and most services rely on personal relationships; or where product dependencies are unknown and it’s difficult to understand how offerings overlap and connect - we want to hear from you: we’re looking for research and design partners.
Special Thanks
Special thanks to Eugenio (Neno) Battaglia, Alberto Brandolini, Chris Evans at Bayer, Lisa Gansky, Andrea Gioia, Marco Heimesoff, Alessandro Pirani, Matteo Roversi, the Bosch MPS Team, the Qi Card Team, and the Roche Platform Accelerator Team, among others.







