
AI & Judgment

AI does not replace strategic judgment: it creates the conditions to exercise it better

By Moussa Rahmouni · 6 April 2026 · 9 min read

Every major technology wave of the past three decades has produced the same pattern of organizational response. A technology emerges. Early adopters achieve meaningful advantages. The mainstream observes this and concludes that the technology is the source of the advantage. Organizations rush to deploy the technology. Most of them do not achieve the expected results. A period of disillusionment follows. And then, eventually, the organizations that extracted real value from the technology are studied more carefully, and a different conclusion emerges: the advantage came not from the technology itself but from the organizational changes that the technology enabled and that the successful organizations had actually made.

This pattern has repeated with ERP systems, with the internet, with mobile, with cloud computing, and with data analytics. There is strong early evidence that it is repeating with artificial intelligence. The organizations that will extract structural value from AI are not those that have deployed the most sophisticated models or produced the most impressive demonstrations. They are those that have understood what AI actually does to organizational decision-making and have redesigned their decision architecture to take advantage of it.

Understanding what AI actually does requires first understanding what it does not do. AI does not make decisions. It does not produce judgment. It does not replace the human capacity to weigh consequences, navigate ambiguity, integrate values with analysis, or accept accountability for outcomes. These are not limitations that will be overcome by the next generation of models. They are definitional features of what judgment is, and judgment, in the context of strategic decision-making, is precisely what organizations need from their leaders.

What AI does when deployed correctly is remove the conditions that prevent judgment from being exercised well. This is a significant contribution, and it is systematically underestimated by organizations that are focused on the substitution framing.

The conditions that prevent good judgment

Strategic judgment fails not because decision-makers lack intelligence or experience. It fails because of structural conditions that make it difficult to exercise judgment well, regardless of how capable the individual decision-maker is.

The first condition is information incompleteness. Strategic decisions typically require synthesizing information from multiple sources, across multiple time horizons, with varying degrees of reliability and relevance. Doing this manually is slow, expensive, and inconsistent. The synthesis that arrives at a decision meeting is typically prepared by a team of analysts working under time pressure, covering some sources thoroughly and others not at all, structured around the analytical framework the team found most tractable rather than the decision framework that the decision-maker actually needs.

The second condition is time compression. Most strategic decisions are made under some degree of time pressure: opportunities have windows, competitive dynamics move, and organizational attention is scarce. Time pressure degrades decision quality in predictable ways: it reduces the number of alternatives considered, it increases reliance on heuristics and prior patterns rather than analysis of the specific situation, and it reduces the willingness to acknowledge uncertainty and ambiguity that would, in a less pressured environment, prompt additional investigation.

The third condition is fragmentation. Strategic decisions rarely live in a single domain. A market entry decision involves commercial analysis, financial modeling, operational assessment, regulatory evaluation, and competitive intelligence: disciplines that, in most organizations, are housed in separate functions, use different data sources, produce outputs in different formats, and rarely synthesize their perspectives into a coherent integrated view before the decision meeting.

The fourth condition is framing absence. The quality of a decision is largely determined by the quality of the question that precedes it. Most decision conversations begin without an explicit frame: without a shared understanding of what options are being considered, what trade-offs are inherent in each, what assumptions each option rests on, and what information would change the decision. The framing emerges, if at all, from the conversation itself, which means it is often incomplete, contested, or influenced by the structure of the presentation rather than the structure of the decision.

AI can address all four of these conditions directly. It can synthesize information at a speed and scale that no human team can match. It can do so at any point in the decision process, eliminating the single-shot preparation that characterizes most manual synthesis. It can integrate perspectives across domains that are typically siloed. And it can be used to generate explicit decision frames: structured presentations of options, trade-offs, and assumptions, produced before the human conversation begins.
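To make the framing condition concrete, here is a minimal sketch, in Python, of what an explicit decision frame might look like as a data structure. Every name in it is an illustrative assumption rather than the schema of any particular platform; the point is only that options, trade-offs, assumptions, and decision-changing information exist as explicit fields rather than emerging, if at all, from the conversation.

    from dataclasses import dataclass

    # Illustrative sketch; names are assumptions, not any platform's schema.
    @dataclass
    class Option:
        name: str
        trade_offs: list[str]    # what choosing this option gives up
        assumptions: list[str]   # what must hold for this option to work

    @dataclass
    class DecisionFrame:
        question: str             # the decision, stated explicitly
        options: list[Option]
        pivotal_facts: list[str]  # information that would change the decision

        def summary(self) -> str:
            """Render the frame so discussion starts from a shared structure."""
            lines = [f"Decision: {self.question}"]
            for opt in self.options:
                lines.append(f"- {opt.name}: trades off {', '.join(opt.trade_offs)}; "
                             f"assumes {', '.join(opt.assumptions)}")
            lines.append("Would change the decision: " + "; ".join(self.pivotal_facts))
            return "\n".join(lines)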

Case study: A private equity firm and the platform that no one used

The case I want to describe in detail involves a private equity firm managing approximately four billion euros of assets across twelve portfolio companies. The firm had, eighteen months before my involvement, deployed a sophisticated AI platform for deal sourcing and portfolio monitoring. The platform had been selected after a competitive process, implemented by a dedicated team over six months, and presented internally as a strategic investment in the firm's analytical capabilities.

Eighteen months after go-live, adoption was below twenty-five percent. The investment committee (the five senior partners who made all investment decisions) rarely referenced the platform outputs in their deliberations. The platform team attributed the low adoption to change management failure: the partners needed better training, stronger internal champions, more encouragement to engage with the tool. This diagnosis was incorrect.

When I spent two weeks with the investment committee, observing their actual decision process rather than the process the platform had been designed to support, the correct diagnosis became clear. The platform was answering questions the investment committee was not asking.

The platform produced three primary outputs: market intelligence reports on sectors and geographies of interest to the firm, company profiles for potential acquisition targets, and portfolio monitoring reports for existing investments. All three were technically excellent: comprehensive, regularly updated, well-structured. All three were also misaligned with the actual decision needs of the investment committee.

The market intelligence reports were too broad. The investment committee's actual mandate was narrow and well-defined: specific sectors, specific geographies, specific transaction types. The reports covered significantly more territory than was relevant to their decisions, which meant that extracting the relevant content required manual filtering most partners did not consider worth the time.

The company profiles were structured around the information that was publicly available rather than around the questions that the investment committee needed to answer. The profiles answered "what is this company?" rather than "does this company meet our investment criteria, and if so, which specific criteria make it interesting and which create concern?" This distinction sounds minor but is significant in practice: the former requires a reader to do the analytical work of applying investment criteria to the profile, while the latter does that work in advance and presents the result.

The portfolio monitoring reports were the most significant misalignment. They reported everything: every metric, every development, every risk item for every portfolio company, in a standardized format, on a monthly basis. What the investment committee actually needed at each monthly portfolio review was the answer to a single prioritized question: which of our current positions has experienced a material change in its risk or opportunity profile since the last meeting, and what does that change imply for our management of that position? The platform had the data to answer this question. It had not been configured to do so, because no one had asked the investment committee what question they actually needed answered.

The redesign and its results

The intervention began with three weeks of decision process mapping: working with each partner individually to understand how they actually made investment decisions, what information they needed at each stage of the process, and what currently consumed the most preparation time. This mapping produced a clear picture of the gap between what the platform was producing and what was actually needed.

The platform reconfiguration addressed each gap specifically. The market intelligence outputs were restructured around a defined set of investment thesis parameters: specific signals that would indicate an opportunity aligned with the firm's mandate, rather than broad sector coverage. Instead of producing market intelligence reports, the platform began producing investment signal alerts: structured notifications when a company or market development met predefined threshold conditions that the investment committee had agreed were decision-relevant.
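As a sketch of how such threshold conditions might be encoded (the firm's actual rules and numbers are not public, so every name and value below is hypothetical), the alert layer can be as simple as a list of named predicates evaluated against incoming company records:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class SignalRule:
        name: str
        condition: Callable[[dict], bool]  # threshold test over a company record
        rationale: str                     # why the committee deemed it decision-relevant

    # Hypothetical rules and thresholds, for illustration only.
    RULES = [
        SignalRule(
            name="revenue_in_mandate_range",
            condition=lambda c: 20e6 <= c.get("revenue_eur", 0) <= 150e6,
            rationale="transaction size fits the fund's mandate",
        ),
        SignalRule(
            name="ebitda_margin_floor",
            condition=lambda c: c.get("ebitda_margin", 0.0) >= 0.15,
            rationale="profitability meets the committee's agreed threshold",
        ),
    ]

    def evaluate_signals(company: dict) -> list[str]:
        """Return one alert message per predefined condition the company meets."""
        return [f"{r.name}: {r.rationale}" for r in RULES if r.condition(company)]

    alerts = evaluate_signals({"revenue_eur": 80e6, "ebitda_margin": 0.22})

The design choice that matters here is ownership: the committee, not the platform team, decides what goes on the rule list, because the rules encode what the committee has agreed is decision-relevant.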

The company profiles were restructured around the firm's investment criteria rather than around publicly available information. Each profile now opened with an explicit assessment of fit against the firm's criteria: a structured summary that told a partner, in two minutes, whether a company warranted further attention and why, followed by the supporting detail for partners who wanted to go deeper.
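A minimal sketch of that restructured opening, again with hypothetical names, shows the shift: the criteria evaluation is done before the reader sees the profile, not left for the reader to perform.

    from dataclasses import dataclass

    @dataclass
    class CriterionResult:
        criterion: str
        met: bool
        note: str   # which specifics make it interesting or create concern

    def fit_summary(company: str, results: list[CriterionResult]) -> str:
        """Opening section of a profile: does this company meet our criteria, and why?"""
        met = [r for r in results if r.met]
        concerns = [r for r in results if not r.met]
        lines = [f"{company}: meets {len(met)} of {len(results)} investment criteria"]
        lines += [f"  + {r.criterion}: {r.note}" for r in met]
        lines += [f"  - {r.criterion}: {r.note}" for r in concerns]
        return "\n".join(lines)

    print(fit_summary("TargetCo", [
        CriterionResult("Sector within mandate", True, "industrial services, core focus"),
        CriterionResult("Management continuity", False, "CEO departure announced"),
    ]))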

The portfolio monitoring outputs were fundamentally redesigned. The monthly portfolio review report was restructured around a single prioritized list: companies that had experienced material changes since the previous review, ranked by the significance of the change, with a brief characterization of the change and its implications. Companies without material changes appeared at the end of the report in a summary section, confirming that they had been monitored but not flagging them for discussion. This structure reduced the preparation time for portfolio review meetings from approximately three hours per partner to approximately forty minutes and increased the quality of the discussions, because partners arrived with a shared, pre-processed understanding of what warranted attention.
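The logic behind that prioritized list is straightforward once the question is right. A sketch, assuming a hypothetical significance score and materiality cutoff in place of however the platform actually quantified change:

    from dataclasses import dataclass

    @dataclass
    class PositionChange:
        company: str
        significance: float   # hypothetical 0-1 score of how material the change is
        change: str           # brief characterization of the change
        implication: str      # what it implies for management of the position

    MATERIALITY_CUTOFF = 0.3  # hypothetical threshold agreed with the committee

    def monthly_review(changes: list[PositionChange]) -> str:
        """Flag material changes, ranked by significance; summarize the rest."""
        flagged = sorted(
            (c for c in changes if c.significance >= MATERIALITY_CUTOFF),
            key=lambda c: c.significance,
            reverse=True,
        )
        stable = [c.company for c in changes if c.significance < MATERIALITY_CUTOFF]
        lines = ["Material changes since last review:"]
        lines += [f"  {c.company}: {c.change} -> {c.implication}" for c in flagged]
        lines.append("Monitored, no material change: " + ", ".join(stable))
        return "\n".join(lines)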

The adoption results over the following two months were significant: from below twenty-five percent to above eighty percent of investment committee members using the platform regularly. More meaningful than the adoption metric was the change in the investment committee's decision process. Partners who had previously arrived at investment discussions with their own individually prepared analyses (sometimes based on different information, structured around different frameworks, and reaching different conclusions) now arrived with a shared analytical foundation that the platform had provided, and spent the meeting time on judgment and deliberation rather than on reconciling different analytical starting points.

The principle and its implications

The principle that this case illustrates is one I have come to think of as decision-first AI deployment: the practice of beginning any AI implementation in a decision context by mapping the actual decision architecture (what decisions are being made, by whom, at what frequency, with what information, and at what points in the decision process) before any technology choices are made.

Most AI implementations in decision contexts fail to do this. They begin with the technology (its capabilities, its outputs, its interfaces) and work toward the decision context. This produces platforms that are technically impressive and organizationally irrelevant, because the technology is not aligned with the actual structure of the decisions it is supposed to support.

Decision-first deployment starts with the decision and works toward the technology. It asks: what does a decision-maker actually need, at what point in the decision process, in what format, at what level of detail? It then asks: which of these needs can be addressed by AI, and which cannot? And it designs the AI deployment around the answers to these questions, rather than around the default outputs of the platform being deployed.

This approach requires more upfront investment in decision process analysis than most organizations are accustomed to making. But it produces significantly better outcomes, for a simple reason: AI that is aligned with the actual decision needs of an organization removes friction from the decision process and improves the quality of human judgment. AI that is not aligned adds a new source of friction: the work of filtering, translating, and applying platform outputs that are not structured around the decisions at hand.

The organizations that will extract the most value from AI in strategic contexts are not those with the most sophisticated models or the most comprehensive data. They are those that have done the work of understanding their own decision architecture (how they make choices, what information those choices require, and where the friction currently lies) and have designed their AI deployments to address that friction specifically. That work is organizational, not technological. It is also, consistently, the work that produces the results.

Moussa Rahmouni

Strategy & Program Manager — Founder of Stratelya & InekIA
