Every quarter or two, we explore new metrics that don't appear in our standard product management reporting or official business reports. These metrics are specific to our business and help us spot patterns in the hypotheses we form over time.
In this post, I'll discuss a metric we've developed and use, called "flawless conversion."
Definition:
"Flawless conversion is a session where the user converted without experiencing any flaws."
Methodology:
- Define the metric.
- Identify the events considered as flawed.
- Calculate the current percentage of converting sessions that included at least one flawed event.
- Develop a report and alert system to monitor flawed orders.
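The calculation step above can be sketched in a few lines. This is a minimal illustration, not our production pipeline: the session shape (a `converted` flag plus a list of event names) and the sample data are assumptions for the example.

```python
# Sketch of the flawed-conversion calculation over a list of sessions.
# Session shape and event names are illustrative, not the real schema.

FLAWED_EVENTS = {"login.failed", "payment.failed", "order.cancelled"}

def flawed_conversion_rate(sessions):
    """% of converting sessions that contain at least one flawed event."""
    converted = [s for s in sessions if s["converted"]]
    if not converted:
        return 0.0
    flawed = sum(1 for s in converted if FLAWED_EVENTS & set(s["events"]))
    return 100.0 * flawed / len(converted)

sessions = [
    {"converted": True, "events": ["login.success", "payment.success"]},
    {"converted": True, "events": ["login.failed", "login.success", "payment.success"]},
    {"converted": False, "events": ["login.failed"]},
]
print(flawed_conversion_rate(sessions))  # → 50.0
```

The complement of this number is the flawless conversion share; a report or alert can simply watch for this percentage crossing a threshold.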
One crucial step is defining which events count as flawed. We mapped the entire user flow and divided the flawed events into two categories:
- Hard events
- Soft events
Hard events either block progression through the conversion funnel or create a severely negative user experience (UX), while soft events hurt the experience but don't block the funnel. We began with hard events, as they have the most significant negative impact on user flow.
Hard event examples:
login.failed: User failed to log in (segmented per login type).
payment.failed: User failed to pay (segmented per payment type).
order.cancelled: Order cancelled (segmented per cancellation type).
Another vital element is why we developed this metric: to quantitatively assess whether our product's user experience is satisfactory. Although we already had metrics for events that negatively affect users, we had no way to connect successful conversions to the negative events that occurred along the way. This metric closes that gap.
The goal is to drive this metric as close to zero as possible: a flawed-event share of zero reflects a perfect conversion UX, free of any adverse events.
This metric does have limitations. It only examines sessions that end in a conversion, not all sessions, and due to technical constraints we can't yet measure every hard event we'd like (e.g., "app.crashed," fired when the app crashes).
Do you find this metric helpful?
How would you expand it?
Are you aware of how many of your transactional sessions include negative UX events?