From Principle to Priority: Chapter 8
The “Minimum Viable Prototype” vs. The “Minimum Viable Product”
This 10-article series is designed to disrupt common thinking about innovation. It provides a new framework to help you tear down old methods and invest more efficiently in breakthrough ideas. Please share!
STAGE 3 - THE OPTION TO EXECUTE (Prototyping & Strategy)
We have successfully navigated the first two gates of our staged investment process. Our Option to Explore (Stage 1) used First Principles and qualitative JTBD to transform a vague subject into a precise, high-stakes Job-to-be-Done. Our Option to Validate (Stage 2) used that qualitative insight to build a quantitative instrument—the Job Map and Customer Success Statements—which gave us a data-driven, rank-ordered list of the most critical, underserved needs in the entire market.
Our Heatmap from Chapter 7 was conclusive. The market for carbon accounting is not failing at “calculating” or “reporting.” It is profoundly failing at the messy, high-risk, “data wrangling” steps of Locate, Prepare, and Resolve.
We now have a validated, quantitative specification: an irrefutable, data-driven mandate. We know exactly which problems to solve.
This brings us to our third and final funding gate: the Option to Execute. This is the stage where we finally build something. This is also the stage where the vast majority of innovation projects, even well-researched ones, collapse under the weight of their own ambition.
The team, flush with excitement from their quantitative data, makes a critical mistake. They present their findings to the investment committee and ask for $5 million and 12 months to build a “Minimum Viable Product.”
The term “Minimum Viable Product” (MVP), popularized by the lean startup movement, is one of the most misunderstood and value-destroying concepts in modern innovation. As commonly practiced, the “product” in MVP is taken literally. It implies building the smallest version of a scalable, shippable, production-grade piece of software. It implies user accounts, databases, cloud infrastructure, and a user interface. It implies building the factory.
This is a trap. It is a premature optimization of the highest order.
The team is asking for a multi-million dollar investment to scale a solution whose core mechanic has not even been proven. They are conflating proving the solution works with automating the delivery of that solution.
We must introduce a crucial, de-risking step between the specification and the scalable product. We must first build a Minimum Viable Prototype (MVPr).
The distinction is not semantic; it is strategic:
A Minimum Viable Product (MVP) is a revenue tool. Its goal is to be the smallest thing a customer will pay for, allowing the business to test pricing, acquisition channels, and scalability. It is an external tool for testing the market.
A Minimum Viable Prototype (MVPr) is a de-risking tool. Its goal is to be the lowest-cost, lowest-fidelity “thing” required to prove that our proposed solution mechanic can actually get the job done 10x better. It is an internal tool for testing the hypothesis.
The MVPr is designed to be thrown away. It is a simulation of the final product, built with the explicit goal of learning rather than earning. Its primary purpose is to validate the solution before we write a single line of production code.
Let’s apply this to our CFO.
The trap would be to immediately start designing a beautiful, multi-tenant SaaS platform with slick dashboards, automated data ingestion pipelines, user authentication, and reporting modules. This is building the factory before we even know if the engine works.
The Option to Execute is not an option to scale; it is an option to prove. The correct MVPr is a “Concierge” or “Wizard of Oz” prototype.
Here is what that looks like:
Instead of a $5 million software budget, we ask for a $150,000, 3-month budget. We use this to hire a small “tiger team”: one data scientist, one financial analyst (or accountant), and a project manager. We then go back to 3-5 of the CFOs who participated in our survey—specifically the ones who showed the highest “Importance” and lowest “Satisfaction” for the “Locate, Prepare, and Resolve” needs.
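How do we pick those 3-5 CFOs from the survey data? Here is a minimal sketch, assuming pandas and a widely used outcome-driven scoring rule (opportunity = importance + max(importance − satisfaction, 0)). The column names, the 1-10 scale, and the numbers are illustrative assumptions, not the actual Chapter 7 instrument.

```python
import pandas as pd

# Illustrative respondent-level scores on a 1-10 scale. These rows are
# invented for demonstration; real data would come from the survey export.
survey = pd.DataFrame({
    "respondent":   ["CFO_A", "CFO_B", "CFO_C", "CFO_D", "CFO_E"],
    "step":         ["Locate", "Prepare", "Resolve", "Report", "Locate"],
    "importance":   [9.1, 8.7, 9.4, 6.2, 8.9],
    "satisfaction": [3.2, 2.8, 2.5, 7.1, 6.0],
})

# A common outcome-driven formula: unmet needs score high when
# importance is high and satisfaction lags far behind it.
survey["opportunity"] = survey["importance"] + (
    survey["importance"] - survey["satisfaction"]
).clip(lower=0)

# Recruit concierge candidates from the underserved wrangling steps:
# high importance, low satisfaction, i.e. the top of the heatmap.
candidates = (survey[survey["step"].isin(["Locate", "Prepare", "Resolve"])]
              .sort_values("opportunity", ascending=False)
              .head(5))
print(candidates)
```

The respondents at the top of this list are the ones whose pain is sharpest, which makes them both the most motivated participants and the most honest judges of whether our concierge service is actually 10x better.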
We do not sell them a software subscription. We offer them a free, 3-month concierge service. We say:
“Our research shows the hardest part of your job is gathering, standardizing, and correcting your environmental data. For the next quarter, we will do it for you. We will be your on-demand data-wrangling team. All we ask in return is that you give us access and feedback.”
The “product” is this tiger team. The “interface” is email and the telephone.
Here is how the MVPr executes the job:
LOCATE: The client’s team emails our tiger team a chaotic collection of data: utility portal logins, raw spreadsheets of energy usage, PDFs of supply chain manifests, and notes from site managers.
PREPARE: Our data scientist and analyst get to work. They are not using a scalable platform; they are using Python scripts, Excel macros, and sheer human brute force. They manually log into portals, copy-paste data, and wrestle the fragmented, messy information into a single, standardized, auditable master file. (A sketch of one such throwaway script follows this walkthrough.)
RESOLVE: During this process, they find dozens of errors—anomalies, gaps, and outliers (the very things we found in our “Resolve” step). They get on the phone with the client’s plant manager in Omaha to ask why one building’s energy usage tripled in July. They investigate, find the root cause (a new, un-metered production line), and document it.
CONCLUDE: At the end of each week, our team does not present a dashboard. They send the CFO two things:
1. A single, perfect, fully-auditable master spreadsheet.
2. A one-page "Summary of Findings," detailing the 15 critical errors they found and fixed that would have been missed by the client's internal process.
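What does one of those throwaway Prepare-and-Resolve scripts look like? Here is a minimal sketch. Every file name, column name, unit, and threshold is an illustrative assumption; in practice the inputs would arrive as utility exports and manual spreadsheets rather than inline data.

```python
import pandas as pd

# Two sites report monthly electricity in incompatible formats:
# Site A in kWh, Site B in MWh with different column headers.
# (Inline data stands in for the real CSV exports.)
site_a = pd.DataFrame({
    "Month": ["2024-05", "2024-06", "2024-07"],
    "kWh": [41_000, 39_500, 40_200],
})
site_b = pd.DataFrame({
    "period": ["2024-05", "2024-06", "2024-07"],
    "usage_mwh": [38.0, 39.5, 118.0],   # July triples: the Omaha anomaly
})

# PREPARE: normalize both feeds into one schema (month, site, kwh).
site_a = site_a.rename(columns={"Month": "month", "kWh": "kwh"}).assign(site="A")
site_b = (site_b.rename(columns={"period": "month"})
                .assign(kwh=lambda d: d["usage_mwh"] * 1000, site="B")
                .drop(columns="usage_mwh"))

master = pd.concat([site_a, site_b], ignore_index=True)
master["month"] = pd.to_datetime(master["month"])
master = master.sort_values(["site", "month"]).reset_index(drop=True)

# RESOLVE: flag any month that jumps past 2x the site's prior average,
# producing a "call the plant manager" list instead of silently
# averaging the anomaly away.
master["prior_avg"] = (master.groupby("site")["kwh"]
                             .transform(lambda s: s.expanding().mean().shift(1)))
master["needs_investigation"] = master["kwh"] > 2 * master["prior_avg"]

master.to_csv("master_energy_file.csv", index=False)
print(master[master["needs_investigation"]])   # flags Site B, July 2024
```

The code quality is beside the point. What matters is that each script makes one normalization rule or one anomaly check explicit, and that pile of explicit rules is exactly what the production engineering team will later be asked to automate.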
With this single, low-cost MVPr, we have achieved something profound. We have proven that our new process (our solution mechanic) can deliver a 10x better outcome than the client’s current solution. We have done it with zero production code and minimal investment.
More importantly, our tiger team has, in effect, created the specification for the real product. Their folder of one-off Python scripts, their checklist for data validation, and their log of common errors is the detailed blueprint. We have de-risked the solution before investing in scalability.
The Option to Execute is therefore not a single, massive bet. It is its own small, staged investment. The “buy-in” is the cost of this 3-month concierge test. The “payoff” is an irrefutable, real-world case study and a validated process playbook.
This playbook—not the quantitative survey data—is what we will hand to our engineering team. We are not asking them to guess what to build. We are asking them to automate a process that we have already proven, manually, gets the job done.
We have one final step. We have a validated problem and a proven solution mechanic. Now, how do we design the business around it?