Execution & Performance
A brilliant strategy, poorly executed, loses to an average strategy executed with discipline
The most persistent and expensive myth in strategic management is that execution is a downstream problem: something that happens after strategy is set, managed by a different set of people with a different set of skills, and fundamentally separate from the work of strategic thinking. This myth has survived decades of evidence to the contrary because it is convenient for the people who produce strategies and uncomfortable for the organizations that have to implement them.
The reality is different. Execution is not the implementation of strategy. It is the test of whether strategy was real. A strategy that cannot be executed is not a strategy; it is an aspiration, or a document, or a conversation. The gap between strategic intent and organizational reality is not a failure of implementation. It is a failure of strategy: specifically, a failure to design a strategy that accounts for the organizational conditions required to execute it.
This reframing has significant practical implications. It means that the question "how do we execute our strategy?" is often the wrong question. The right question is "did we design a strategy that is executable with the organization we actually have?" And if the answer is no, the path forward is not better project management; it is strategic redesign.
The four execution failure modes
In my work across defense, financial services, manufacturing, and technology, I have identified four recurring failure modes that account for the majority of significant execution failures. They are not equally visible, not equally understood, and not equally amenable to intervention, but they are consistent enough across contexts to constitute something like a diagnostic framework.
The first failure mode is cadence absence. Organizations that execute poorly almost universally lack a structured, regular mechanism for confronting reality against plan. They have project reviews, steering committees, and executive meetings, but these forums are organized around reporting rather than decision-making, around status updates rather than variance management, around showing progress rather than identifying and resolving the problems that are impeding it. The result is that significant variances from plan are visible in the data long before they are acknowledged in any governance forum, and by the time they are acknowledged, the options for resolution have narrowed considerably.
The second failure mode is accountability diffusion. Strategic priorities fail to deliver not because the work is not being done but because the work is not clearly owned. A strategic initiative has a sponsor, a steering committee, a workstream lead, a project manager, and a functional owner, and none of them believes they are primarily responsible for the outcome. When the initiative underdelivers, each can point to a legitimate reason why it was someone else's accountability. The problem is structural: the governance architecture has distributed accountability so broadly that it effectively belongs to no one.
The third failure mode is planning optimism. Organizations consistently build plans that reflect aspirations rather than constraints. Bottom-up planning processes produce timelines that are internally consistent within each workstream but ignore the cross-workstream dependencies and organizational capacity constraints that determine whether the aggregate plan is achievable. Top-down planning processes apply pressure to compress timelines without confronting the technical or organizational reality that determines how long things actually take. The result in both cases is a plan that everyone knows is unrealistic but that no one has formally acknowledged as such: a collective fiction that is maintained until reality makes it untenable.
The fourth failure mode is feedback loop absence. Execution without feedback is navigation without instruments. Organizations need mechanisms that surface early signals of deviation from plan: leading indicators that create time to intervene before a variance becomes a crisis. Most organizations rely on lagging indicators: financial results, delivery milestones, customer satisfaction scores. These are useful for evaluating past performance but provide little time to course-correct when they signal a problem. By the time a lagging indicator turns red, the underlying cause has typically been operating for months.
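To make the distinction concrete, here is a minimal sketch of one kind of leading indicator: a naive run-rate extrapolation that projects a finish date from progress observed so far. All names and figures are illustrative assumptions, not taken from any particular program.

```python
# Minimal leading-indicator sketch: rather than waiting for the binary
# milestone result (a lagging indicator), extrapolate the observed
# progress rate and compare the projected finish against the plan.
# All numbers below are illustrative.

def projected_finish_week(pct_complete: float, weeks_elapsed: float) -> float:
    """Naive linear extrapolation of the finish date from the run rate."""
    if pct_complete <= 0:
        return float("inf")  # no progress yet: projection is undefined
    run_rate = pct_complete / weeks_elapsed  # percentage points per week
    return 100.0 / run_rate                 # projected total weeks to finish

planned_weeks = 20.0
pct_complete = 30.0   # observed completion at week 8
weeks_elapsed = 8.0

projection = projected_finish_week(pct_complete, weeks_elapsed)
slip = projection - planned_weeks
if slip > 0:
    print(f"early warning at week {weeks_elapsed:.0f}: "
          f"projected finish in week {projection:.1f}, "
          f"{slip:.1f} weeks behind plan")
```

The value of such an indicator is not the precision of the projection, which is crude, but the timing: the variance is visible at week 8, long before the milestone would be missed.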
Case study: A defense program and the governance failure that cost eighty million euros
The case that has most comprehensively illustrated all four failure modes was a defense program with a contracted value of approximately one hundred and eighty million euros and a four-year delivery timeline. I was brought in at the end of the second year, when the program director had been replaced and the client relationship had deteriorated to the point where legal action was being discussed.
The program had an excellent strategy document. The market analysis was rigorous, the competitive positioning was clear, the value proposition was well-articulated, and the commercial terms had been carefully negotiated. By the time I arrived, the program also had a forty percent schedule overrun, a cost variance of twenty-three million euros, and a client who had, in the words of the new program director, "stopped believing anything we tell them."
The diagnosis, which took approximately three weeks of intensive analysis, identified contributions from all four failure modes.
On cadence: the program had a monthly steering committee and a weekly program review. But the steering committee had evolved into a presentation forum: the program team presented status, the client received it, questions were asked, and the meeting ended. There was no structured mechanism for surfacing variances above a defined threshold, no escalation protocol that required the steering committee to make explicit decisions when the program deviated from plan, and no record of decisions made in previous meetings that could be used to track whether commitments had been honored. The weekly program review had similar characteristics: it was a reporting forum, not a decision forum.
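The missing mechanism is easy to state precisely. The sketch below shows its simplest possible form: any variance above a pre-agreed threshold is forced onto the decision agenda rather than left in the status report. The threshold value and field names are assumptions for illustration; a real program would define thresholds per workstream and per metric.

```python
from dataclasses import dataclass

@dataclass
class WorkstreamStatus:
    name: str
    planned_pct_complete: float  # cumulative plan to date, 0-100
    actual_pct_complete: float   # cumulative actual to date, 0-100

# Example threshold: crossing it *requires* an explicit steering-committee
# decision. The specific value is a program-level choice, not a constant.
ESCALATION_THRESHOLD = 5.0  # percentage points behind plan

def decision_agenda(statuses: list[WorkstreamStatus]) -> list[str]:
    """Return the variances the steering committee must decide on."""
    agenda = []
    for s in statuses:
        variance = s.planned_pct_complete - s.actual_pct_complete
        if variance > ESCALATION_THRESHOLD:
            agenda.append(f"{s.name}: {variance:.1f} points behind plan")
    return agenda

statuses = [
    WorkstreamStatus("technical development", 40.0, 38.0),
    WorkstreamStatus("systems integration", 35.0, 26.0),
]
print(decision_agenda(statuses))
# -> ['systems integration: 9.0 points behind plan']
```

The point is structural rather than analytical: the rule removes the discretion to report a variance without deciding what to do about it.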
On accountability: the program had three major workstreams (technical development, systems integration, and regulatory compliance), each with its own lead. Each workstream lead reported to the program director, but each also had a functional reporting line to a divisional director within the prime contractor. When the program experienced cross-workstream conflicts, which in a program of this complexity were frequent, there was no clear protocol for resolving them. Each workstream lead could escalate to the program director, but the program director's authority to direct workstream resources was constrained by the functional reporting relationships. The result was that cross-workstream conflicts were frequently resolved by delay rather than by decision.
On planning optimism: the original program schedule had been built in a seven-week period during the bid phase, by a team under significant commercial pressure to produce a timeline that the client would accept. The schedule was technically detailed and visually impressive, but it had been constructed by assembling workstream estimates without adequately modeling the dependencies between them or the organizational capacity constraints that would determine how many parallel activities the program could actually execute. By month six, the program was already running three weeks behind on the critical path, but this variance was not visible in the aggregate schedule metrics because it was masked by work that had been completed ahead of plan in lower-priority areas.
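The masking effect is worth seeing in miniature. In the sketch below (all figures invented for illustration), the aggregate completion metric looks healthy even as the critical-path activity falls behind, because ahead-of-plan work in lower-priority areas offsets the slip.

```python
# Invented example of how aggregate schedule metrics mask critical-path
# slippage: (name, planned % complete, actual % complete, on critical path)
activities = [
    ("integration testbed", 50.0, 35.0, True),   # critical path: behind
    ("documentation",       40.0, 55.0, False),  # ahead of plan
    ("training materials",  30.0, 45.0, False),  # ahead of plan
]

planned = sum(a[1] for a in activities) / len(activities)
actual = sum(a[2] for a in activities) / len(activities)
print(f"aggregate: planned {planned:.0f}%, actual {actual:.0f}% -> looks fine")

for name, p, a, critical in activities:
    if critical and a < p:
        print(f"critical path '{name}': {p - a:.0f} points behind plan "
              f"-> this is the slip that drives the delivery date")
```

The aggregate reads 45 percent actual against 40 percent planned, yet the delivery date is slipping, because only the critical-path activity determines it.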
On feedback loops: the program's performance reporting relied almost entirely on milestone achievement, a binary indicator that captured whether a deliverable had been completed by its planned date but provided no early warning of problems developing within a milestone period. The first indication that the systems integration workstream was in serious difficulty came when a major milestone was missed in month fourteen. The underlying technical problems that caused the miss had been visible to the workstream team for approximately four months, but there was no mechanism that required them to surface these problems before they became milestone failures.
The recovery plan addressed all four failure modes simultaneously, which is unusual; most recovery plans focus on replanning the schedule and adding resources without addressing the governance structures that allowed the problems to develop in the first place. We restructured the steering committee as a decision forum with explicit escalation thresholds, defined accountability for each workstream in terms that made it real and consequential, rebuilt the program schedule from the delivery date backward rather than from the current position forward, and introduced a set of leading indicators (technical progress metrics, resource utilization rates, risk realization rates) that provided early warning of developing problems.
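Scheduling from the delivery date backward is, mechanically, a backward pass over the dependency network: each activity's latest finish is fixed by its successors, and its latest start follows from its duration. The sketch below illustrates the mechanics; the activities, durations, and dependencies are hypothetical, not drawn from the program itself.

```python
from functools import lru_cache

DELIVERY_WEEK = 104  # fixed contractual delivery date (week number)

# activity -> (duration in weeks, successor activities); all hypothetical
plan = {
    "acceptance trials":  (8,  []),
    "system integration": (20, ["acceptance trials"]),
    "software build":     (30, ["system integration"]),
    "hardware build":     (26, ["system integration"]),
}

@lru_cache(maxsize=None)
def latest_start(activity: str) -> int:
    """Latest week this activity can start without missing delivery."""
    duration, successors = plan[activity]
    # No successors: must finish by the delivery date itself; otherwise
    # it must finish before its earliest-starting successor begins.
    latest_finish = (DELIVERY_WEEK if not successors
                     else min(latest_start(s) for s in successors))
    return latest_finish - duration

for name in plan:
    print(f"{name}: must start by week {latest_start(name)}")
```

Built this way, the plan either closes (every latest start lies in the future) or it does not, and where it does not, the infeasibility is explicit rather than hidden in optimistic forward estimates.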
The program delivered approximately fourteen months late against the original contracted schedule, and at a cost overrun of approximately thirty million euros. These are significant numbers. But the trajectory at the point of intervention, if extrapolated, would have produced a delivery of twenty-two to twenty-six months late and a cost overrun of sixty to eighty million euros. The recovery plan preserved approximately fifty million euros of value relative to the likely alternative.
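The arithmetic behind the value-preservation claim is a simple delta between the extrapolated trajectory and the actual outcome; the short computation below restates it using the figures given above. Note that the fifty-million figure sits at the top of the cost range, before attaching any monetary value to the eight to twelve months of avoided delay.

```python
# Restating the recovery arithmetic with the figures from the case.
actual_cost_overrun = 30            # EUR millions
projected_cost_overrun = (60, 80)   # EUR millions, extrapolated trajectory

actual_delay = 14                   # months late vs. contracted schedule
projected_delay = (22, 26)          # months, extrapolated trajectory

cost_preserved = tuple(p - actual_cost_overrun for p in projected_cost_overrun)
delay_avoided = tuple(p - actual_delay for p in projected_delay)

print(f"cost overrun avoided: EUR {cost_preserved[0]}-{cost_preserved[1]}m")
print(f"schedule slip avoided: {delay_avoided[0]}-{delay_avoided[1]} months")
# -> EUR 30-50m and 8-12 months
```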
The operating system concept
The framing I have found most useful for helping organizations understand what execution discipline actually requires is the concept of an operating system. Every organization has a strategy: a set of choices about what to pursue and how. But not every organization has built the operating system that allows strategy to be translated into consistent, coordinated action.
An operating system, in this sense, is the collection of mechanisms that govern how an organization works: how priorities are set and maintained, how resources are allocated and reallocated, how decisions are made and by whom, how performance is measured and reviewed, how problems are surfaced and resolved, and how accountability is assigned and enforced. These mechanisms are not visible in an organizational chart and they are not described in a strategy document. They operate in the space between the formal structure and the actual behavior of the organization.
Organizations that execute consistently are not necessarily those with the best strategies, the most resources, or the most talented people. They are the organizations whose operating systems are well-designed for the strategies they are trying to execute. The operating system translates strategic intent into organizational reality, or fails to, in ways that are predictable once you know what to look for.
Building a better operating system for execution is the work that most organizations underinvest in, for understandable reasons. It is less prestigious than strategy development. It is less visible than transformation programs. It is slower to show results than crisis management. And it requires confronting organizational realities (accountability gaps, planning fictions, governance dysfunctions) that are uncomfortable to acknowledge. But it is also, consistently and significantly, more valuable than any of the alternatives.
The organizations that close the execution gap are not those that work harder or move faster. They are those that have built the infrastructure to make execution the organizational default rather than the exception. Strategy is the frame. The operating system is what produces outcomes. The gap between the two is where most competitive advantage is created or destroyed.