Decision Architecture
Speed is not strategy unless decisions can still be explained six months later
Speed is one of the most misunderstood concepts in organizational strategy. The organizations that have built the most durable competitive positions are not, in my experience, the ones that made decisions fastest. They are the ones that made decisions at the right pace: fast enough to capture opportunities before they closed, deliberate enough to produce choices that could be explained, defended, and learned from. The distinction matters more than it is usually given credit for, and the failure to make it has cost organizations I have worked with enormously.
This is not an argument for slowness. It is an argument for explainability and for understanding why explainability is not a bureaucratic constraint but a competitive asset.
The anatomy of a fast decision that fails slowly
The failure mode I want to describe is familiar to anyone who has worked inside a fast-moving organization. A decision is made quickly, confidently, by smart people with good intentions. It is announced. Implementation begins. Six months later, the context has shifted, and someone needs to revisit the decision. The problem is that no one can reconstruct why it was made. The analysis that supported it exists in fragments across three email threads and a presentation that has been revised six times. The assumptions that drove it were never written down. The alternatives that were considered, if any were, have been forgotten. The person who made the call has moved to a different role.
At this point, the organization faces a choice between two bad options. It can continue executing a decision whose logic it cannot evaluate, hoping the original judgment was sound. Or it can revisit the decision from scratch, consuming significant time and energy to reconstruct an analysis that was done once already. Neither option is good. Both were avoidable.
The cost of this failure mode is not always visible in financial terms, which is one reason it persists. The most significant cost is organizational learning, or rather the absence of it. Organizations that cannot trace outcomes to decisions and decisions to assumptions cannot learn from their experience. They can accumulate experience, which is different. Accumulated experience without structured learning produces confident people who repeat the same mistakes with greater conviction.
Case study: A fintech scale-up and the acquisition that could not be explained
The case that has most clearly illustrated this failure mode for me involved a fintech company that had reached Series C with approximately three hundred and forty employees and operations across four European markets. The company had built its culture deliberately around speed. The founding team, five people who had known each other from a previous venture, had a shared conviction that the incumbent financial institutions they were competing against were structurally slow, and that speed was therefore a structural advantage they could exploit. Decisions were made in Slack threads. Strategy pivots happened in weekend offsites. The founding team prided itself on moving faster than any competitor, and for two years this was a genuine advantage.
The culture of speed was visible in everything the organization did, including its acquisition of a smaller competitor eighteen months before I became involved. The acquisition had a logic: the target had a product capability that the acquiring company lacked, a customer base in a market the acquirer had been trying to enter, and a team with domain expertise that would take years to build organically. The price was within the acquiring company's financial capacity. The deal was completed in eleven weeks from first contact to close, which was, by the standards of the industry, remarkably fast.
The problem became apparent when three of the five founders had transitioned out of operational roles within the following eighteen months (two had moved to advisory positions, one had left entirely) and a new CFO had been hired from outside. The board requested a strategic review of the acquisition: what had been the investment thesis, what synergies had been expected over what timeline, what had actually been achieved, and what the remaining value creation pathway looked like.
No one could answer these questions with any precision. This was not because the people involved were incompetent. It was because the decision had been made in a way that produced no durable record of its logic. The Slack threads from the acquisition period were technically accessible but practically useless: hundreds of messages, most of them tactical, with the strategic reasoning distributed across fragments that required hours to reassemble and were still incomplete. The financial model that had justified the price existed in three versions, each modified after the deal closed, none of them labeled as the version that had actually driven the decision. The offsite notes from the weekend where the acquisition was first seriously discussed were in a personal Dropbox account belonging to a founder who was now in an advisory role and traveling extensively.
The board was not, ultimately, concerned that the acquisition had been made. What concerned them was what the inability to reconstruct its logic revealed about the organization's capacity to learn from its own history. If you cannot trace an outcome to a decision, and a decision to its assumptions, you cannot determine whether a positive outcome reflects good judgment or good fortune. You cannot identify which assumptions proved correct and which proved wrong. You cannot apply the lessons of this decision to the next one. You are, in effect, making each major decision for the first time, regardless of how much experience the organization has accumulated.
The review process itself took twelve weeks and cost the organization significantly in management time and external advisory fees. The irony was not lost on anyone: the speed with which the original decision had been made had, over time, produced a much larger expenditure of time and money than a more deliberate process would have required at the outset.
What explainability actually requires
The instinct of many organizations when confronted with this failure mode is to add bureaucracy: more approvals, longer decision timelines, more documentation requirements. This is the wrong response, for two reasons. First, it treats explainability as a compliance exercise rather than a strategic discipline, which produces documentation that is thorough but useless: documents that record what was decided without capturing why. Second, it imposes a uniform overhead on all decisions regardless of their importance, which is both expensive and demotivating.
The correct response is to develop what I think of as decision documentation discipline: a lightweight, structured practice of capturing the logic of significant decisions at the moment they are made, in a form that is actually useful for future review rather than merely formally complete.
Effective decision documentation captures four things. First, the situation as it was understood at the time of the decision: the specific context, constraints, and opportunity that made this decision necessary. This is more difficult than it sounds, because it requires the organization to describe reality as it appeared before the outcome was known, resisting the retrospective distortion that makes past decisions seem either more obviously correct or more obviously wrong than they actually were.
Second, the options that were genuinely considered: not a post-hoc rationalization of why the chosen path was obviously superior, but an honest account of what alternatives were evaluated and why they were rejected. This requires that alternatives actually be considered before the decision is made, which is not always the case in fast-moving organizations where the decision and its justification are sometimes constructed simultaneously.
Third, the trade-offs that were accepted. Every significant decision involves accepting something: a cost, a risk, an opportunity cost, a constraint. Documenting what was accepted, and what would have to be true for the acceptance to have been correct, is the foundation of organizational learning. It is also the basis for the fourth element: the trigger conditions for reassessment, meaning the specific developments that would indicate that the decision should be revisited, and the person or forum responsible for monitoring them.
This documentation does not require a long document. For most decisions, it requires a single page. For major strategic choices, it may require three or four pages. The overhead is not in the writing; it is in the thinking that the writing forces, which is precisely where most fast-moving organizations underinvest.
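To make the four elements concrete, a one-page decision record might be structured along the following lines. This is an illustrative sketch, not a prescribed format; the headings simply mirror the four elements described above, and the bracketed items are placeholders to be filled in at the moment of decision.

```text
DECISION RECORD: [decision title]       Date: [date]    Owner: [name]

SITUATION
  The specific context, constraints, and opportunity as understood
  today, written before the outcome is known.

OPTIONS CONSIDERED
  1. [chosen option], chosen because ...
  2. [alternative], rejected because ...
  3. [alternative], rejected because ...

TRADE-OFFS ACCEPTED
  What is being given up (cost, risk, opportunity cost, constraint),
  and what would have to be true for accepting it to be correct.

REASSESSMENT TRIGGERS
  The specific developments that would mean this decision should be
  revisited, and the person or forum responsible for monitoring them.
```

The value of a record like this is not the artifact itself but the conversation it forces: each heading is a question that must be answered before the decision is made, not after.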
Case study: A professional services firm and the partnership structure it could not evaluate
A second case illustrates a different dimension of the same failure mode. A professional services firm with approximately four hundred professionals across six offices had, seven years prior to my involvement, restructured its partnership from a lockstep compensation model to a performance-based model. The restructuring had been contentious at the time, had taken eighteen months to design and implement, and had produced significant partner attrition in the short term. The managing partner who had driven the change had subsequently built a strong narrative around it: the firm was more meritocratic, more entrepreneurial, better at retaining high performers.
When that managing partner retired and a new leadership team took over, the first strategic review they commissioned raised a question that turned out to be unanswerable: was the compensation restructuring actually working? Not in terms of how partners felt about it (there was plenty of survey data on that), but in terms of whether it was producing the outcomes that had justified it. Had it improved the retention of high performers? Had it improved the firm's commercial performance? Had it attracted the kind of laterals the firm had been trying to recruit?
The answer, after six weeks of investigation, was that no one knew. The original decision had been made without any specification of what success would look like, over what timeline, or how it would be measured. No baseline had been established before the change, and no metrics had been defined as the relevant indicators of whether the restructuring was achieving its purpose. The decision had been made, implemented, and then effectively closed: treated as a completed project rather than an ongoing hypothesis about how to organize the firm.
The new leadership team faced the question of whether to continue, modify, or reverse the compensation structure without the information that would have allowed them to evaluate it. They were, in effect, making a new decision in a vacuum created by the poor documentation of the original one.
The intervention in this case involved reconstructing, as best we could from available data, the counterfactual (what the firm's performance trajectory would likely have looked like without the restructuring) and comparing it to the actual trajectory. This exercise was imprecise and contested, but it was the only available basis for evaluation. It produced a conclusion that the restructuring had had mixed effects: positive on some dimensions the original decision-makers had cared about, neutral or negative on others, and strongly positive on one dimension (partner accountability) that the original documentation had not emphasized but which turned out, in retrospect, to be the most significant benefit.
The new leadership team made a decision to retain the performance-based model with specific modifications, based on this analysis. They also established, for the first time, a set of explicit success metrics with defined measurement timelines, so that the next leadership transition would not face the same information vacuum.
Speed and explainability as complements, not substitutes
The conclusion I draw from these cases and from the broader pattern they represent is that speed and explainability are not in fundamental tension. The organizations that make decisions most sustainably fast are those whose thinking is clear enough that capturing it takes minutes rather than hours. The documentation overhead in a high-functioning decision culture is minimal, because the logic is explicit before the decision is made, and capturing it is simply a matter of recording what has already been articulated.
The organizations that find documentation burdensome are, in my experience, organizations where the decision logic was not fully developed before the decision was made. The documentation burden is a signal, not a cause: it indicates that the decision was made on implicit reasoning that had never been fully articulated, and that making it explicit requires constructing the argument retrospectively rather than recording it in real time.
Building a culture of decision explainability therefore starts not with documentation templates or approval processes but with the quality of the decision conversation itself. Is the situation clearly understood? Are alternatives genuinely being considered? Are trade-offs being acknowledged rather than minimized? Is there a clear owner who will be accountable for the outcome? These are the questions that determine whether a decision can be explained six months later. The documentation is simply the record of a conversation that has already happened well.
A decision that cannot be explained six months later was probably not a strategic decision in any meaningful sense. It was a reaction: confident, perhaps well-intentioned, but ultimately unaccountable. The goal of decision discipline is not to slow the reaction down. It is to convert reactions into decisions, choices that are owned, reasoned, and capable of generating organizational learning regardless of how they turn out.