
Case Study

From planning to leverage: what complex programs teach about real decision-making

By Moussa Rahmouni · 6 April 2026 · 10 min read

Every organization that manages complex programs develops, over time, a theory of what makes programs succeed or fail. These theories are usually built from experience: from programs that delivered well and programs that did not, from crises navigated and crises that became disasters, from clients who remained partners and clients who became adversaries. The theories are rarely made explicit, which means they are rarely tested and rarely improved. They operate as assumptions embedded in the way programs are organized, governed, and managed.

The theory I want to articulate in this essay has emerged from approximately fifteen years of working on complex programs across defense, infrastructure, financial services, and technology: as a program manager, as a strategic advisor, and as a recovery specialist brought in when programs had already failed in significant ways. The theory is not complicated, but its implications are broader than they might initially appear.

The theory is this: complex programs fail when the distance between where the program is and where the plan says it should be exceeds the organization's ability to close it through legitimate means, and the organization responds by closing the distance through illegitimate means: specifically, through the maintenance of a plan that no longer reflects reality.

When this happens, the plan stops functioning as a decision instrument. It becomes, instead, a reporting artifact: a document that is maintained to satisfy governance requirements and manage client expectations, rather than to govern the actual work. And when the plan stops governing the work, the work stops being governed.

The planning problem in complex programs

To understand why this failure mode is so common, it helps to understand the structural pressures that push programs toward it.

Complex programs (I am defining these as programs with multiple interdependent workstreams, significant technical uncertainty, long durations, and external dependencies) are characterized by a fundamental tension between the need for a fixed reference point and the reality of continuous change. The contract requires a delivery date. The client's downstream planning depends on that date. The prime contractor's resource planning depends on it. The commercial terms (incentive fees, milestone payments, liquidated damages) are structured around it. In this context, there are powerful organizational incentives to maintain the scheduled delivery date in the formal plan even when the underlying evidence suggests that the date is no longer achievable.

This is not always dishonesty, and it is important to understand that. In my experience, the majority of program status reporting failures (situations in which a program's official status is significantly more positive than its actual status) are not the result of deliberate deception. They are the result of genuine belief in recoverability, combined with governance structures that do not force the honest integration of evidence that would challenge that belief.

Each workstream lead knows the status of their workstream. Each believes, with varying degrees of justification, that the current delay is recoverable through additional effort, parallel working, or compression of activities that had previously been sequenced. Each reports a status that reflects their genuine belief rather than their best estimate. But the aggregate of individually optimistic workstream reports is almost always more optimistic than the integrated reality, because the individual optimism does not account for the compounding effect of parallel delays across interdependent workstreams.
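The arithmetic behind this compounding is easy to demonstrate. The sketch below is illustrative only: the 80% on-time probability and the three parallel workstreams are assumptions chosen for clarity, not figures from any real program. An integration milestone that needs all workstreams to finish on time is much less likely to hold than any individual workstream report suggests.

```python
import random

# Illustrative assumption: three parallel workstreams, each with an 80%
# chance of finishing on time. Each lead reports "on track", which is
# individually defensible. The integration milestone needs ALL of them.
random.seed(42)

P_ON_TIME = 0.8      # assumed per-workstream on-time probability
N_WORKSTREAMS = 3
TRIALS = 100_000

# Monte Carlo estimate: fraction of trials in which every workstream
# finishes on time simultaneously.
on_time_integrated = sum(
    all(random.random() < P_ON_TIME for _ in range(N_WORKSTREAMS))
    for _ in range(TRIALS)
) / TRIALS

print(f"Each workstream on time: {P_ON_TIME:.0%}")
print(f"Integrated on time:      {on_time_integrated:.1%}")  # ~0.8**3 = 51.2%
```

Three reports that each sound four-to-one safe combine into an integrated milestone that is roughly a coin flip, which is why aggregating individually honest optimism still misleads.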

The program management office, which is responsible for aggregating these reports into an integrated program status, typically lacks either the authority or the analytical resources to challenge the workstream leads' assessments. The integrated status report therefore reflects the aggregated optimism of the individual workstream reports rather than an independent assessment of the integrated program reality.

The client, receiving the integrated status report, is making downstream decisions (resource commitments, operational planning, contractual obligations) based on a delivery forecast that does not reflect the program's actual trajectory. When the real delivery date eventually becomes apparent, the consequences extend beyond the program itself to all the decisions the client has made in reliance on the incorrect forecast.

Case study: A financial sector transformation program

The program I want to describe was a core banking system replacement for a regional financial institution: a program type that has an extraordinarily high failure rate, for reasons that are well understood in the industry but apparently difficult to internalize before the fact. The program had a contracted value of approximately sixty-five million euros and a planned duration of thirty months.

I was engaged as a senior advisor to the program at month eighteen, following a board-level review that had concluded, without specific evidence, that the program was "at risk." The board's instinct was correct. By month eighteen, the program was running approximately seven months behind the integrated schedule, a fact that had not been communicated to the board or to the client organization's leadership. The program's official status was amber: an accurate characterization if amber is understood to mean "there are risks and issues being managed," but misleading if amber is understood to mean "the program is recoverable within the contracted parameters."

The diagnostic work that followed the board review produced a clear picture of how the gap between official and actual status had developed.

The program had three major workstreams: core system configuration and customization, data migration, and business process redesign. Each workstream had its own project manager, its own schedule, and its own governance forum. The three workstream schedules were theoretically integrated in a master program schedule, but the integration was nominal: the master schedule was updated by consolidating the workstream schedules rather than by independently analyzing the dependencies between them.

The data migration workstream had experienced significant difficulties from month six onward, driven by data quality issues in the legacy systems that had been underestimated during the assessment phase. These difficulties were well-known within the data migration team and had been escalated to the data migration project manager. They had not been escalated to the program director, because the data migration project manager believed, with some justification, that the issues were being resolved and that the schedule impact would be recovered through acceleration in subsequent phases.

The business process redesign workstream had experienced significant scope creep, driven by business stakeholders who had identified process improvement opportunities that had not been in the original scope but that were genuinely valuable. Each individual scope addition had been assessed as manageable within the existing timeline. The cumulative effect of eighteen months of incremental scope additions had not been assessed.

The core system configuration workstream was actually close to its planned schedule: the one workstream that was performing reasonably well. But the core system configuration workstream depended on inputs from both the data migration workstream and the business process redesign workstream, which meant that its actual progress was less useful than it appeared, because the program could not advance to the integration and testing phase until the other two workstreams had reached their planned completion points.

When we modeled the integrated program, accounting for the data migration delays, the business process redesign scope additions, and the dependency structure between the three workstreams, the delivery forecast extended by seven months beyond the contracted completion date. When we communicated this to the client, the reaction was, as expected, severe. The client had been planning a phased decommissioning of the legacy system around the contracted delivery date. Those plans now needed to be fundamentally revised, with significant operational and cost implications.
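The structure of that dependency-aware forecast can be sketched in a few lines. Everything here is hypothetical (the task names, the remaining durations, and the twelve contracted months remaining at month eighteen of thirty are illustrative stand-ins, not the program's actual schedule data); the point is the mechanism: a downstream phase cannot start until the latest of its predecessors finishes, so the integrated date is driven by the slowest interdependent path, not by any single workstream's report.

```python
from functools import lru_cache

# Hypothetical remaining effort per workstream, in months from "today"
# (month 18 of a 30-month plan, so 12 contracted months remain).
remaining_months = {
    "core_config": 4,           # performing close to plan
    "data_migration": 11,       # includes the unrecovered legacy-data delay
    "process_redesign": 9,      # includes accumulated scope additions
    "integration_and_test": 8,  # cannot start until all three finish
}
depends_on = {
    "core_config": [],
    "data_migration": [],
    "process_redesign": [],
    "integration_and_test": ["core_config", "data_migration", "process_redesign"],
}

@lru_cache(maxsize=None)
def finish(task: str) -> int:
    """Earliest finish (months from now): latest predecessor finish plus own work."""
    start = max((finish(d) for d in depends_on[task]), default=0)
    return start + remaining_months[task]

forecast = max(finish(t) for t in remaining_months)
print(f"Integrated delivery forecast: {forecast} months from now")
print(f"Slip vs. 12 contracted months remaining: {forecast - 12} months")
```

With these illustrative numbers, the integration phase starts at month 11 (the data migration finish) and lands at month 19, seven months past the twelve contracted months remaining, even though one workstream is nearly on plan.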

The recovery architecture for complex programs

The recovery architecture we designed for this program drew on principles I have applied across multiple similar situations, with adaptations for the specific context.

The first principle is honest baseline establishment. Recovery planning must begin with an honest assessment of where the program actually is: not where the plan says it should be, not where the team believes it will be after recovery efforts take effect, but where it demonstrably is today. This requires independent analysis rather than reliance on workstream self-assessment, because workstream self-assessment is subject to the same optimism bias that produced the problem in the first place.

For this program, honest baseline establishment required three weeks of intensive analysis: reviewing detailed work completion evidence across all three workstreams, modeling the dependency structure of the remaining work, and constructing a bottom-up delivery forecast that was defensible against challenge. The resulting forecast was more pessimistic than any individual workstream lead's assessment, for the reasons described above: the individual assessments did not adequately account for the compounding effects of parallel delays across interdependent workstreams.

The second principle is governance redesign before replanning. The instinct of most recovery teams is to begin with replanning, to build a new schedule that gets the program back on track. This instinct is wrong, because it treats the symptom rather than the cause. The reason the program drifted from the original plan was not that the original plan was poorly constructed; it was that the governance structure failed to surface and respond to the drift as it was developing. A new plan without governance redesign will drift for the same reasons.

For this program, governance redesign involved three specific changes. First, we established a weekly integrated program board: not a workstream review forum, but a forum at which the program director, all three workstream leads, and key client representatives reviewed the integrated program status against the integrated baseline. Second, we introduced a formal variance threshold: any deviation of more than one week from the critical path in the integrated schedule required an explicit escalation decision, either a formal plan to recover the variance or a formal acknowledgment that the variance would be accepted and downstream dates would be adjusted accordingly. Third, we established a monthly independent program health assessment, conducted by a small team with no reporting relationship to any of the workstream leads, whose output was provided directly to the client program director and the prime contractor board.

The third principle is client transparency before client management. The default instinct in a program recovery situation is to manage client expectations: to present the situation in the most favorable light consistent with honesty, to emphasize recovery actions over current reality, and to delay difficult conversations for as long as possible. This instinct is understandable but counterproductive. Clients who discover that they have been managed rather than informed typically become significantly more difficult to work with than clients who have been given honest assessments from the beginning, even when those assessments are unwelcome.

For this program, client transparency meant presenting the revised delivery forecast (seven months later than contracted) in the first client steering committee meeting following the completion of our diagnostic work. The presentation included not only the revised forecast but the full analysis of how the gap had developed, what governance failures had prevented it from being surfaced earlier, and how the recovery architecture was designed to prevent recurrence. The initial client reaction was severe. The relationship recovered, over approximately six months of consistent honest reporting, to something approaching genuine partnership.

The leverage principle in program management

The title of this essay refers to leverage: the concept that, in complex programs, the value of good governance is disproportionate to the cost of establishing it. This is worth making explicit, because it is the strongest argument for the investment that governance redesign requires.

In the financial sector transformation program described above, the governance failures that allowed the program to drift seven months from plan had, by the time they were corrected, produced costs significantly in excess of what a well-functioning governance structure would have cost. The replanning exercise, the recovery team engagement, the extended program duration, the client relationship damage, and the downstream costs to the client organization of rearranging its decommissioning plans: the aggregate of these costs substantially exceeded what it would have cost to build a proper integrated master schedule, establish a cross-workstream governance forum, and introduce an independent health assessment from the beginning of the program.

This is consistently true in complex program failures. The cost of governance failure is always greater than the cost of governance investment. The challenge is that the cost of governance investment is visible and immediate, while the cost of governance failure is contingent and delayed. Organizations consistently underinvest in governance because the benefit of the investment is realized in the form of problems that do not occur, which are, by definition, invisible.

The practice of program management that I have found most valuable, and that I consistently recommend to program directors regardless of program type, is to treat governance investment as a risk mitigation measure with a calculable expected value. The probability of a major program performance failure, absent adequate governance, is significantly higher than most program teams acknowledge before the fact. The cost of such a failure, when it occurs, is significant and predictable in its general magnitude. The cost of the governance investment that would have prevented it is also predictable and is, in virtually every case I have analyzed, considerably smaller. The arithmetic is compelling. The behavioral challenge (investing in prevention when the contingency feels remote) is real but surmountable with the right framing.
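The shape of that expected-value calculation can be written down directly. All figures below are hypothetical, chosen only to show the structure of the comparison; they are not taken from the case study or from any real engagement.

```python
# Illustrative expected-value comparison of governance investment vs.
# governance failure. Every number here is an assumption for the sketch.

p_failure_without = 0.40        # assumed probability of major failure, no governance
p_failure_with = 0.10           # assumed residual probability with governance
cost_of_failure = 15_000_000    # assumed cost of a major failure (EUR)
cost_of_governance = 1_500_000  # assumed cost of the governance investment (EUR)

# Expected cost without the investment: probability times failure cost.
ev_without = p_failure_without * cost_of_failure
# Expected cost with it: the certain investment plus the residual risk.
ev_with = cost_of_governance + p_failure_with * cost_of_failure

print(f"Expected cost without governance: {ev_without:,.0f} EUR")
print(f"Expected cost with governance:    {ev_with:,.0f} EUR")
print(f"Expected saving:                  {ev_without - ev_with:,.0f} EUR")
```

Under these illustrative assumptions the investment halves the expected cost; the visible, certain 1.5 million buys down a contingent 6 million, which is the trade the behavioral framing has to make tangible.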

Planning, in a complex program, is not prediction. It is leverage. A plan that maintains its governance function (one that continues to reflect reality, to surface problems early, and to force honest decision-making about constraints and trade-offs) is the most powerful risk mitigation tool available to a program director. Protecting that function, even when it is uncomfortable to do so, is the core discipline of program leadership.



Moussa Rahmouni

Strategy & Program Manager — Founder of Stratelya & InekIA
