How to decide how much to invest in prototyping and testing
The reverse pyramids of data accuracy
Most PMs learning how to manage products tend to read a lot about “PMing” and little about the inner workings of the roles orbiting the product manager (research, design, product marketing, etc.). This ends up impacting many of your decisions: at an early stage, because you need to do those roles yourself; at a later stage, because you influence how much the team invests its time in them.
One of the areas I struggled with early on was prototyping + testing. Specifically: deciding how much to invest before committing to a solution. I would either spend too much time (too many prototypes, too much fidelity, too many variations of solutions), or too little (straight to high fidelity, sometimes not even testing). Finding the sweet spot taught me the “𝐫𝐞𝐯𝐞𝐫𝐬𝐞 𝐩𝐲𝐫𝐚𝐦𝐢𝐝 𝐨𝐟 𝐝𝐚𝐭𝐚 𝐚𝐜𝐜𝐮𝐫𝐚𝐜𝐲”. It goes like this:
- When I tested 𝐡𝐢𝐠𝐡 𝐟𝐢𝐝𝐞𝐥𝐢𝐭𝐲 𝐩𝐫𝐨𝐭𝐨𝐭𝐲𝐩𝐞𝐬, 𝐚𝐜𝐜𝐮𝐫𝐚𝐜𝐲 𝐨𝐟 𝐝𝐚𝐭𝐚 𝐰𝐚𝐬 𝐪𝐮𝐢𝐭𝐞 𝐡𝐢𝐠𝐡. By accuracy of data, I mean how closely the research data points matched the production data points (post-shipping).
- When I tested 𝐥𝐨𝐰 𝐟𝐢𝐝𝐞𝐥𝐢𝐭𝐲 𝐩𝐫𝐨𝐭𝐨𝐭𝐲𝐩𝐞𝐬, 𝐚𝐜𝐜𝐮𝐫𝐚𝐜𝐲 𝐨𝐟 𝐝𝐚𝐭𝐚 𝐰𝐚𝐬 𝐥𝐨𝐰𝐞𝐫 (i.e. reality was often different enough to make me question the test’s effectiveness).
- But when I tested 𝐥𝐨𝐰 𝐟𝐢𝐝𝐞𝐥𝐢𝐭𝐲, I realized we got many more ideas and solutions, 𝐦𝐨𝐫𝐞 𝐝𝐚𝐭𝐚 𝐢𝐧 𝐭𝐡𝐞 𝐬𝐚𝐦𝐞 𝐚𝐦𝐨𝐮𝐧𝐭 𝐨𝐟 𝐭𝐢𝐦𝐞. More options.
- With 𝐡𝐢𝐠𝐡-𝐟𝐢𝐝𝐞𝐥𝐢𝐭𝐲, we could only produce 𝐚 𝐟𝐞𝐰 𝐞𝐱𝐩𝐞𝐫𝐢𝐦𝐞𝐧𝐭𝐬 𝐢𝐧 𝐭𝐡𝐞 𝐬𝐚𝐦𝐞 𝐚𝐦𝐨𝐮𝐧𝐭 𝐨𝐟 𝐭𝐢𝐦𝐞, so I’d better choose wisely. When I didn’t, it ended up being a waste of time and money (see the toy sketch right after this list).
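To make the tradeoff concrete, here is a toy back-of-the-envelope model. Every number in it (the research window, per-prototype cost, accuracy levels) is a made-up assumption for illustration, not a measurement:

```python
# Toy model of the reverse pyramid: in a fixed research window,
# low fidelity buys breadth, high fidelity buys confidence.
# All numbers are illustrative assumptions.
RESEARCH_DAYS = 10

COST_DAYS = {"low fidelity": 1, "high fidelity": 5}     # days to build + test one prototype
ACCURACY = {"low fidelity": 0.6, "high fidelity": 0.9}  # how closely test data tracked production data

for tier, cost in COST_DAYS.items():
    experiments = RESEARCH_DAYS // cost
    print(f"{tier}: {experiments} experiments, ~{ACCURACY[tier]:.0%} accuracy each")

# low fidelity: 10 experiments, ~60% accuracy each  -> many options, less certainty
# high fidelity: 2 experiments, ~90% accuracy each  -> few bets, so choose wisely
```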
By understanding ☝️ I started to realize that how much the team should invest in prototyping/testing is directly proportional to the risk of the decision (a minimal sketch of this rule follows the list):
1. 𝐇𝐢𝐠𝐡 𝐫𝐢𝐬𝐤 𝐨𝐟 𝐚 𝐧𝐞𝐠𝐚𝐭𝐢𝐯𝐞 𝐨𝐮𝐭𝐜𝐨𝐦𝐞 𝐟𝐫𝐨𝐦 𝐚 𝐰𝐫𝐨𝐧𝐠 𝐝𝐞𝐜𝐢𝐬𝐢𝐨𝐧 = 𝐡𝐢𝐠𝐡𝐞𝐫 𝐟𝐢𝐝𝐞𝐥𝐢𝐭𝐲
2. 𝐋𝐨𝐰 𝐫𝐢𝐬𝐤 𝐨𝐟 𝐚 𝐧𝐞𝐠𝐚𝐭𝐢𝐯𝐞 𝐨𝐮𝐭𝐜𝐨𝐦𝐞 𝐟𝐫𝐨𝐦 𝐚 𝐰𝐫𝐨𝐧𝐠 𝐝𝐞𝐜𝐢𝐬𝐢𝐨𝐧 = 𝐥𝐨𝐰𝐞𝐫 𝐟𝐢𝐝𝐞𝐥𝐢𝐭𝐲
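If I had to compress that rule into code, it would look something like this minimal sketch (the 0-to-1 risk scale and the 0.7 threshold are made-up placeholders; in practice this is a judgment call, not a formula):

```python
def pick_fidelity(risk_of_wrong_decision: float) -> str:
    """Map the risk of a wrong decision (0.0-1.0, a made-up scale) to a testing strategy."""
    if risk_of_wrong_decision >= 0.7:  # e.g. changing a core user experience flow
        return "high fidelity: fewer experiments, more accurate data"
    return "low fidelity: more experiments, more options, less accuracy"

print(pick_fidelity(0.9))  # core-flow change -> high fidelity
print(pick_fidelity(0.2))  # isolated new revenue stream -> low fidelity
```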
Since higher fidelity yields more accurate data, I could de-risk according to the scenario: if something carried low risk, “saving” a bit (and even running more experiments) was the way to go. Here are two examples at the extremes:
- Changing a core user experience flow? High risk. Go for more accuracy, since a mistake can break your business.
- Testing a new revenue stream? If the core business is unaffected, go for low accuracy. If it fails, it’s as if you never built it.
The sweet spot depends deeply on the risk of your decision. Ask yourself: “if this goes wrong, how bad can it be?”