
Strategic Intelligence

Why organizations do not suffer from information scarcity, but from decision scarcity

By Moussa Rahmouni · 6 April 2026 · 13 min read

There is a moment in most executive committee meetings that I have come to recognize immediately. Someone pulls up the dashboard. A debate starts not about strategy, not about the decision at hand, but about whether the numbers are correct. Twenty minutes disappear. The CFO questions the methodology. The COO has different figures from his team. The Chief Strategy Officer waits, increasingly impatient, for the conversation to arrive at the question she came to answer.

This is not an information problem. The organization has seventeen dashboards, eleven analysts, and real-time feeds from fourteen systems. It has more data than it can process. The problem is something else entirely, and understanding what it actually is may be the most important diagnostic any leadership team can make.

The problem is decision architecture. And most organizations have never built one.

What decision scarcity actually means

Information scarcity, the idea that organizations fail because they lack data, was a legitimate concern in the 1980s and 1990s. It drove a generation of investment in ERP systems, data warehouses, and business intelligence platforms. The assumption was that if you could see the business clearly enough, you would know what to do.

That assumption turned out to be wrong, or at least radically incomplete. Most organizations today have solved the information problem. They have not solved the decision problem. The bottleneck has moved, but the mental model has not kept pace.

Decision scarcity manifests in three distinct failure modes, each of which I have observed repeatedly across industries and organizational sizes.

The first is decision delay. This is the most visible failure mode: the decision that is always two weeks away from being made, perpetually waiting for one more piece of analysis, one more stakeholder consultation, one more scenario. The organization has confused rigor with paralysis. Decisions that should take a week take three months. By the time they are made, the market context has shifted, the window has closed, or the cost of delay has exceeded the cost of a suboptimal choice made on time.

The second is decision diffusion. This is more dangerous because it is less visible. The decision appears to have been made (there was a meeting, there was a presentation, heads nodded) but no one actually owns the outcome. Accountability is distributed across a steering committee, a working group, and three separate functions, which means it belongs to no one. When the outcome materializes six months later, everyone can point to a legitimate reason why it was someone else's responsibility.

The third is reversibility confusion. Organizations routinely treat strategic commitments as if they were operational adjustments, and vice versa. A major market entry decision gets made in a thirty-minute slot at the end of a board meeting, while a relatively minor procurement choice goes through six weeks of governance. The mismatch between decision importance and decision process is so common as to be nearly universal, and its costs are significant. High-stakes decisions made too lightly produce strategic regret. Low-stakes decisions made too heavily produce organizational exhaustion and a culture that avoids decision-making altogether.

Case study: A European industrial group

The case that has most clearly illustrated these dynamics for me involved a European industrial group with revenues of approximately 2.4 billion euros, operating across four product divisions and twelve countries. The group had, by any reasonable standard, invested heavily in its information infrastructure. Three years prior to my involvement, it had deployed a business intelligence platform that was, technically speaking, excellent. Seventeen dashboards. Real-time data feeds from fourteen enterprise systems. A dedicated analytics team of eleven professionals. The platform had won an internal technology award.

The problem was not visible in any of these metrics. It became visible only when I spent two days sitting in on executive committee meetings.

The pattern was consistent across both meetings. The first thirty to forty minutes of each session were consumed by debate about data quality, methodology, and which numbers were the authoritative version of reality. The commercial director would cite figures from the commercial dashboard. The CFO would note that these did not match the financial reporting. The COO had operational metrics from his team's system that told a different story. Each was technically correct within its own domain. None of them was organized around the decisions the executive committee was actually trying to make.

When I mapped the decisions that the executive committee needed to make over the following twelve months (not the decisions they were currently making, but the decisions that actually mattered for the strategic direction of the group), something became immediately apparent. Of the seventeen dashboards, none had been designed around any of those decisions. They had been designed around functions and reporting units. Each was a window into a particular part of the organization. None of them was structured to answer a strategic question.

The intervention we designed was not technical. We did not build new dashboards, add data sources, or improve the analytics platform. Instead, we started with a workshop in which the executive committee mapped their actual decision landscape: what decisions would they need to make in the next twelve months, what information was genuinely required to make each decision well, who had the authority to decide, and what would constitute a decision trigger, the specific condition under which a call needed to be made rather than deferred.

This exercise took two days and was, by the admission of several committee members, the most clarifying strategic conversation they had had in years. Not because it produced new information, but because it made explicit a set of assumptions and structures that had been operating implicitly and dysfunctionally for a long time.

The output was a decision map for the following year, organized not by function but by strategic choice. For each major decision, we documented: the information required to make it (which turned out to be significantly less than was being produced), the person accountable for the recommendation, the person with decision authority, the timeline, and the trigger conditions. We then worked backward from this map to assess which existing dashboards served actual decision needs and which existed because someone had once thought they might be useful.
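A decision map of this kind is, in essence, a structured record per strategic choice. A minimal sketch of what one entry might look like, in Python (the field names and the sample decision are illustrative assumptions, not the firm's actual template):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One entry in a decision map, organized by strategic choice rather than function."""
    name: str
    required_information: list[str]  # minimum set of data that would change the decision
    recommender: str                 # person accountable for the recommendation
    decider: str                     # person with decision authority
    deadline: str                    # when the call must be made
    trigger: str                     # condition under which the decision fires rather than defers

# Hypothetical example entry
exit_decision = Decision(
    name="Exit declining product category",
    required_information=["category margin trend", "exit cost estimate"],
    recommender="Division head",
    decider="CEO",
    deadline="2026-Q3",
    trigger="two consecutive quarters of negative contribution margin",
)
```

The point of the structure is the auditing question it enables: any dashboard or report that does not feed some decision's `required_information` list is a candidate for decommissioning.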

Seven of the seventeen dashboards were decommissioned within six weeks. Three new indicators were created, none of which had existed in the platform before, because no one had ever asked the question they were designed to answer. The analytics team was reorganized around decision support rather than reporting, which changed both what they produced and how they communicated it.

The results over the following six months were measurable. Executive committee time spent on actual strategic decisions (as opposed to data reconciliation and methodology debate) went from approximately ten minutes per meeting to approximately fifty-five minutes. Three major decisions that had been pending for between four and eight months were made within thirty days of the new structure being operational. One of those decisions, a market exit in a declining product category, is estimated to have preserved approximately forty million euros in capital that would otherwise have been deployed into a structurally unviable business.

The data had not changed. The decision architecture had.

The anatomy of a decision architecture

A decision architecture is the set of mechanisms that govern how an organization makes choices: how decisions are framed, escalated, authorized, documented, and reviewed. Most organizations have a governance structure (boards, committees, approval thresholds) but not a decision architecture. The difference matters.

Governance structures define who can approve what up to what limit. Decision architectures define how the organization thinks about choices: what options are considered, what trade-offs are acknowledged, what information is actually required versus merely available, who owns the outcome, and how the decision will be evaluated after the fact.

There are four components that I have found consistently present in organizations that make decisions well, and consistently absent or weak in organizations that do not.

The first is decision rights clarity. Every significant decision should have a named owner: not a committee, not a function, not a process, but a specific individual who is accountable for both the recommendation and the outcome. This does not mean that decisions are made unilaterally. It means that when the decision has been made and the outcome has materialized, there is a clear answer to the question of who was responsible. The accountability must be real, which means it must have consequences: positive when decisions are well-made and the organization learns from them, negative when accountability is avoided or the decision logic is poor.

The second is decision framing discipline. The quality of a decision is determined largely by how the question is framed before anyone starts gathering data or building models. Most organizations frame decisions too narrowly (should we do X?) rather than as genuine choices with real trade-offs (given constraints A, B, and C, which of options X, Y, and Z best serves objective O, and what are we accepting when we choose it?). Framing discipline means investing time in the question before investing time in the answer.

The third is information parsimony. The relevant question for any decision is not what information is available but what information is actually decision-relevant. These are very different questions, and conflating them produces the kind of analytical paralysis I described in the industrial group case. Decision-relevant information is the minimum set of data that would change the decision if it changed. Everything else, however interesting, however accurate, however expensive to produce, is noise in the context of that particular choice.

The fourth is decision rhythm. Organizations that make decisions well have a cadence: a structured, regular mechanism for surfacing decisions that need to be made, allocating time and attention to them proportionally to their importance, and ensuring that the right conversations happen at the right frequency. Ad hoc decision-making, where decisions surface when they can no longer be avoided, is a structural pathology, not a leadership style. It produces decisions under duress, with compressed timelines and inadequate preparation, at precisely the moments when the cost of poor choices is highest.

Case study: A financial services firm navigating regulatory change

A second case illustrates the decision rhythm failure mode and its costs. A mid-sized financial services firm operating across three European markets faced a significant regulatory change: a new framework that would require material adjustments to its product structure, client documentation, and internal controls within an eighteen-month window. The firm's leadership was aware of the change eighteen months before the deadline. It made the deadline with three weeks to spare, at a cost approximately four times higher than a structured approach would have required, with two senior executives having resigned in the final quarter due to the pressure and organizational dysfunction the compressed timeline produced.

The regulatory change itself was not the problem. The problem was the decision rhythm, or rather the absence of one. The firm had no mechanism for surfacing strategic decisions before they became operational emergencies. The regulatory change required approximately fifteen distinct decisions, ranging from product architecture choices to technology platform selection to client communication strategy. Each of these decisions had natural dependencies: some could not be made until others were resolved, some needed to be made early to give the organization time to implement, and some required external input that had long lead times.

None of this sequencing was apparent to the leadership team because no one had ever mapped the decision landscape. The regulatory change was treated as a project (a set of tasks to be completed) rather than as a set of choices to be made in a particular order. When I was brought in nine months before the deadline, the firm had completed a significant amount of analytical work but had made almost none of the decisions that the analysis was designed to support. The analysis had become an end in itself, a way of appearing to address the challenge without actually committing to a direction.

The intervention at this stage was necessarily more disruptive than it would have been twelve months earlier. We constructed a decision sequencing map, essentially a dependency graph for the fifteen decisions, and used it to identify the critical path: the four decisions that needed to be made immediately because everything else depended on them, and the specific information that was actually required to make each one. Three of the four critical decisions were made within two weeks. The fourth required one additional piece of external legal opinion, which was obtained within ten days.
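Mechanically, a decision sequencing map is a dependency graph, and the "make these first" set falls out of a standard topological layering. A minimal sketch in Python (the decision names and dependencies below are hypothetical, a simplified stand-in for the firm's fifteen):

```python
def decision_waves(dependencies):
    """Group decisions into waves: each wave contains the decisions whose
    prerequisites are all resolved in earlier waves (Kahn-style layering).
    `dependencies` maps each decision to the set of decisions it waits on."""
    remaining = {d: set(deps) for d, deps in dependencies.items()}
    waves = []
    while remaining:
        ready = sorted(d for d, deps in remaining.items() if not deps)
        if not ready:
            raise ValueError("circular dependency among decisions")
        waves.append(ready)
        for d in ready:
            del remaining[d]
        for deps in remaining.values():
            deps.difference_update(ready)
    return waves

# Hypothetical subset of the decision landscape
deps = {
    "product architecture": set(),
    "technology platform": set(),
    "client documentation": {"product architecture"},
    "client communication": {"client documentation"},
    "internal controls": {"product architecture", "technology platform"},
}
print(decision_waves(deps)[0])  # → ['product architecture', 'technology platform']
```

The first wave is the critical path entry point: the decisions that block everything downstream and therefore cannot wait for further analysis.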

With the critical path decisions resolved, the downstream decisions became significantly easier and faster to make. The remaining eleven decisions were resolved within six weeks. Implementation began three months earlier than it would have under the firm's original trajectory, which reduced both the cost and the operational disruption considerably.

The lesson is not that analysis is bad or that speed is always good. The lesson is that analysis without a decision trigger is an organizational comfort behavior rather than a strategic tool. The firm was producing analysis because analysis is easier than commitment. A decision architecture forces commitment: not recklessly, but at the appropriate moment, with the appropriate information, by the appropriate person.

The role of AI in decision architecture

A question I am asked increasingly often is where artificial intelligence fits into decision architecture. The answer is both specific and important, because it is almost always misframed in the organizations asking it.

AI does not make decisions. It does not make decision architecture unnecessary. What it does, when deployed correctly, is compress the distance between information and structured choice, in ways that can materially improve the quality and speed of human decision-making.

The specific contribution of AI to decision architecture is in three areas. First, synthesis: AI can aggregate and synthesize information from multiple sources at a speed and scale that no human team can match, producing structured summaries of relevant context before a decision conversation rather than during it. Second, framing: AI can be used to stress-test decision frames, identify options that have not been considered, and surface the assumptions embedded in a proposed course of action. Third, pattern recognition: AI can identify historical precedents for current decisions (cases where similar choices were made under similar conditions) and surface the outcomes of those precedents as reference points for current decision-makers.

None of these contributions replaces judgment. They improve the conditions under which judgment is exercised. The executive who enters a decision conversation with a pre-synthesized picture of the relevant context, a clearly framed set of options, and a set of historical reference points is not having her judgment replaced; she is having it better prepared. The decision is still hers. The accountability is still hers. The judgment is still hers. The friction has been reduced.

This is the correct framing for AI in strategic decision-making: friction reduction, not judgment replacement. Organizations that pursue the latter will be disappointed. Those that pursue the former, and build the decision architecture that allows AI-synthesized information to flow into well-structured decision processes, will extract real value.

Building a decision architecture: where to start

The organizations I have worked with that have successfully built decision architectures have almost universally started in the same place: not with technology, not with process redesign, but with a mapping exercise.

The mapping exercise asks three questions. What decisions does this organization actually need to make (not the decisions it is currently making, but the choices that determine whether its strategy is executed or abandoned)? Who owns each of those decisions (not who is consulted, not who approves, but who is accountable for the recommendation and the outcome)? And what information is genuinely required to make each decision well (not what is available or what has historically been produced, but what would actually change the decision if it changed)?

This exercise typically takes two to three days for a leadership team, and it is almost always the most clarifying strategic conversation that team has had in recent memory. Not because it produces new information, but because it makes visible the architecture, or its absence, that has been governing how the organization thinks about choice.

From the map, the interventions become clear. Some decisions need clearer ownership. Some need better framing disciplines. Some need different information: less of what is currently produced, more of what is actually required. Some need a rhythm that does not currently exist. And some decisions that are currently consuming organizational attention are not decisions at all; they are questions with clear answers that have been treated as open because no one has accepted the accountability for closing them.

Intelligence without a decision trigger is noise with better formatting. The organizations that extract the most value from their information investments (human analysts, AI platforms, or both) are those that have done the harder work first: defining what a good decision looks like, who owns it, what information it actually requires, and when it needs to be made. That infrastructure is the prerequisite for everything else. Without it, more data produces more debate, better analysis produces more delay, and faster systems produce faster drift.

The gap between information and decision is not a technology problem. It is an architecture problem. And like all architecture problems, it requires deliberate design rather than incremental accumulation.



Moussa Rahmouni

Strategy & Program Manager — Founder of Stratelya & InekIA