Here is a 15-year-old (but useful) test about awareness.
I don’t want to spoil it for you, so please check the video first and then read the rest of the article.
The reason I find this video so helpful is that analytical thinking and decision making are riddled with biases. Even after producing a detailed analysis or reaching a sophisticated decision, we can still be blind to the obvious.
I've shown this test to many people, yet only a few managed to spot the bear. Almost everyone managed to count the passes correctly, though.
Many times I see analyses that seem right but still fail to answer the important questions.
Similarly, we might check a ton of a project's parameters yet still miss some of the most important ones.
Why does that happen? Here are a few reasons:
One of the most common complex problems in the business world is measuring incrementality. Your feature might generate thousands of interactions and conversions, but are those conversions incremental? A simple question with a complex answer.
There are many ways to measure incrementality. A common one is controlled experiments with holdout groups, but they require setup before launch; if you skip that setup, you can't use them afterward.
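The holdout logic can be sketched in a few lines: the holdout group's conversion rate estimates the baseline, and anything the exposed group achieves beyond that baseline is the incremental effect. This is a minimal illustration with made-up numbers, not a full experiment framework (no significance testing, no sample-size checks).

```python
# Minimal sketch of measuring incrementality with a holdout group.
# All numbers below are illustrative, not real data.

def incremental_lift(treated_conversions, treated_size,
                     holdout_conversions, holdout_size):
    """Compare conversion rates between the exposed group and the holdout.

    The holdout rate estimates the baseline: conversions that would have
    happened anyway. Everything above it is the incremental effect.
    """
    treated_rate = treated_conversions / treated_size
    baseline_rate = holdout_conversions / holdout_size
    incremental_rate = treated_rate - baseline_rate
    # Conversions beyond what the baseline rate predicts for this group size
    incremental_conversions = treated_conversions - baseline_rate * treated_size
    return incremental_rate, incremental_conversions

rate, conversions = incremental_lift(
    treated_conversions=1200, treated_size=50_000,
    holdout_conversions=200, holdout_size=10_000,
)
print(f"Incremental conversion rate: {rate:.2%}")
print(f"Incremental conversions: {conversions:.0f}")
```

Note that in this toy scenario the feature shows 1,200 conversions, but only 200 of them are incremental: the rest would have happened anyway, which is exactly the kind of nuance a raw conversion count hides.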
If you missed that setup, causal impact analysis, a statistical method developed by Google, can still be useful for measuring incrementality, for anything from product features to marketing campaigns and sales efforts.
Imagine that you spent a significant amount of time building a feature, or you used a large portion of your marketing budget on a campaign. Your investment isn't only resource-based; it's emotional too.
Unconsciously, you'll try to find ways and metrics to prove to yourself and others that this initiative worked. In cases of great success or failure, things are straightforward, but most initiatives are not outliers.
Confirmation bias is common to everyone. As humans, we tend to seek, interpret, and remember information in a way that confirms our pre-existing beliefs or hypotheses. Even if you're data-driven, the vast array of available metrics can be misleading. You might cherry-pick the ones that confirm your bias and make a decision that feels data-driven, even if it's not. There's a relevant quote from British economist Ronald Coase: "If you torture the data long enough, it will confess."
Your expertise can sometimes make you blind: you might focus only on what you know and inadvertently overlook crucial aspects that fall outside your scope. A prime example is evaluating the success of a feature while completely neglecting concurrent user-incentive campaigns. This oversight can happen simply because you aren't aware of the active campaigns, especially when campaigns are handled by the Marketing team while product features are developed by the Product team.
Lastly, organizational hierarchy plays a role. When a CEO or another C-level executive makes a request, it carries more weight than a regular stakeholder's request. Our attention shifts toward fast delivery (e.g., "as soon as possible" or "needed yesterday") rather than a comprehensive review and analysis of the situation.