I have been using the OKR method for planning for the last 9 years. So far, I have used it in Marketing, Product, and Ops teams.
We have tried several approaches, and what we have observed is the following:
There are Objectives that we worked on heavily without moving the needle.
There are Key Results that were achieved without delivering a single Product commitment.
This happens simply because, in the online marketplace industry, our Product is our business, which makes matching Objectives and Key Results very challenging. Here are some real-life examples:
- Business seasonality: There are quarters in which our performance will drop due to seasonality. We can’t plan an objective (e.g. optimize search) to increase the conversion rate of the search feature while seasonality is pushing it down. Of course, you can check past performance and calculate a target of a “smaller decline” than last year, but still, this feels odd.
- Other department efforts: At efood, the Marketing department’s efforts can heavily influence a couple of Product metrics. For example, when it comes to conversion rate, the Marketing performance team that handles the SEM, SEO & Paid Social (Facebook ads) channels can influence a significant percentage of sessions, which is one of the two metrics used to calculate session conversion rate (the other being conversions). That simply means that if the Marketing budget or strategy changes drastically within a quarter, the conversion rate will move as well: an aggressive push on advertising will drive the metric down, whereas a more conservative approach will keep product metrics in their regular state.
- Timing: Imagine that you plan to develop feature X within the quarter. You forecast Y new transactions from this feature, so you add the forecast to your OKR plan. You start developing the feature, but things go wrong in the implementation phase and you only manage to deploy feature X to production 2 days before the quarter ends. The feature is live, but the key results are not in place.
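To make the seasonality point above concrete, here is a minimal sketch of setting a “smaller decline than last year” target. The numbers and the function are hypothetical, purely for illustration, not efood’s actual planning model:

```python
def seasonal_target(pre_season_rate, last_year_decline_pct, improvement_pct):
    """Target conversion rate for a seasonal dip.

    pre_season_rate: conversion rate (%) before the seasonal quarter.
    last_year_decline_pct: how much (%) conversion dropped in the same quarter last year.
    improvement_pct: how much smaller (%) we want this year's decline to be.
    """
    expected_decline = last_year_decline_pct * (1 - improvement_pct / 100)
    return pre_season_rate * (1 - expected_decline / 100)

# Last year conversion fell 10% in the seasonal quarter. If we target a 20%
# smaller decline (i.e. an 8% drop) from a 5.0% pre-season conversion rate:
print(round(seasonal_target(5.0, 10, 20), 2))  # 4.6
```

Note how the key result ends up being “decline by no more than 8%”, a target that still reads like a failure, which is exactly why this feels weird.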
The solution we currently apply to this paradox is that, in some OKRs, we do not use performance data but “deliverable” data. Let me clarify:
Let’s say that we want to develop a new feature that requires 10 development changes to the platform in order to go live. In our OKR process, we describe what we are trying to build, why we are trying to build it -without committing to performance numbers- and the number of deliverables that must be implemented to go live. With this approach, we evaluate our OKRs based on the deliverables, not on performance.
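A minimal sketch of this deliverable-based scoring, assuming a simple shipped-over-planned grade (the function name and cap are illustrative, not our internal tooling):

```python
def deliverable_score(shipped, planned):
    """Grade a deliverable-based key result as shipped / planned, capped at 1.0."""
    if planned <= 0:
        raise ValueError("planned must be a positive number of deliverables")
    return min(shipped / planned, 1.0)

# 7 of the 10 platform changes shipped within the quarter.
print(deliverable_score(7, 10))  # 0.7
```

The grade measures execution of the plan, not business impact, which is exactly the trade-off described above.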
One note here: it’s very tempting to switch all Product OKRs to this approach. After all, we worked very hard to develop the feature; isn’t that enough?
No. It’s not.
Remember, delivering projects and features is only half the job of a PM. A competent Product team should deliver business value and an optimal user experience, in a timely manner, using the right technology approach.
OKRs are about WHY, WHAT & WHEN; don’t skip any part of it.
Why is this important?
Most professionals out there use the OKR framework, and I’m sure many of them face the same problems. The takeaways:
- Add deliverables as key results where performance metrics are outside your control.
- Do not overdo the first point, otherwise OKRs will lose their purpose.