Thinking About Your Negative Metrics
“Negative” metrics — you might prefer the term “De-optimization Metrics” — can be just as important to your continuous optimization efforts as your positive ones. The purpose of a negative metric is to isolate the damage you may inadvertently be doing elsewhere while you drive your positive metrics up.
A negative metric is not necessarily something you want to get less of. For example, one might mistake “reduce bounce rate” for a negative metric — but really, it’s just a more convenient way to refer to the positive metric of “increase the un-bounce rate.” Rather, a negative metric is something you look at to ensure that, when you have success with your positive metrics, you aren’t penalizing yourself elsewhere. I covered this earlier when I discussed the value of nothing.
If conversions go up (a positive metric) but qualified leads go down, you want to know this. If the drop in qualified leads costs the company less than the rise in conversion rate is worth, then the company might decide that’s okay. However, if lead quality drops so far that you’re losing more revenue than the increased conversion rate brings in — well, then the company might decide that the increased conversions weren’t worth it.
Or perhaps your company is trying to decrease bounce rate (a positive metric, as described above) with an eye toward boosting ad views on your pages. You might achieve that, but inadvertently cause other problems that detract from your ultimate goal. You can’t know these things if you’re not measuring them, right?
Real-Life Lessons In Identifying Negative Metrics
This post focuses on thinking about negative metrics in real-life situations: what to look for, what direction they should move in if you’re doing well or doing poorly, etc.
A great way to practice thinking about negative metrics is when you yourself have a negative customer experience. After all, you might as well turn it into something positive, even if you’re not the one in charge of measuring. I’ll use the anecdotal story that inspired this article. This one is easy since it’s about the cable company, and cable companies are always great places to learn what not to do.
As we go along, think about your own customers and their experiences with your website, your marketing outreach, your call center and your retail presence (if any). Can you identify negative metrics you should be tracking?
I’m betting most readers will recognize the pain point in my anecdote: the other day, the cable company managed to call me — get this — seven times. Seven. Each call was about getting me into a different package of services. Negative metric #0 (so obvious it isn’t even worth counting as #1): “How many times per day/week/month do we contact the customer without his contacting us first?”
Negative Metric #1
The first time, I was happy, because I watch very little broadcast TV and wouldn’t mind hearing about a way to save money. Who wouldn’t? After confirming that I am who I am — which raises the question, “Why are you calling me if you don’t know who I am?” — the agent asked if she could put me on hold “for a few seconds” so she could look up my account and find what offers I’d be eligible for.
Now, my first reaction was, “Why are you calling me and then looking up offers for me? Why not have the offers ready to go as soon as you reach me?” But a few seconds seemed trivial enough, so I said yes. After 60 of those “few” seconds, I hung up. Negative metric #1: “Of the total time spent with a customer on the phone, what percentage of that represents hold time?” That’d be an interesting metric, wouldn’t it?
Here’s a more subtle one: “What percentage of hold time comes from getting a response to a customer question, versus what percentage comes from us (the company) asking to put the customer on hold?”
You don’t want to penalize your call center staff for answering customer questions, but you do want to penalize the call center (systemically; not the agents) if the system itself causes the agents to put a customer on hold for information that should be readily available. So you’re looking at these negative metrics in light of the positive metric: “customers who took advantage of an offer we contacted them about.”
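If you have call logs that record each hold segment and who initiated it, both metrics fall out of a few sums. Here’s a minimal sketch; the record layout (a `total_sec` field plus a list of `(initiator, seconds)` hold segments) is a hypothetical schema, not any particular call-center system’s format:

```python
# Hypothetical call records: total call duration plus hold segments,
# each tagged with who caused the hold ("agent" = agent asked to put
# the customer on hold; "customer_question" = answering the customer).
calls = [
    {"total_sec": 300, "holds": [("agent", 78), ("customer_question", 30)]},
    {"total_sec": 120, "holds": [("agent", 60)]},
]

total_time = sum(c["total_sec"] for c in calls)
hold_time = sum(sec for c in calls for _, sec in c["holds"])
agent_hold = sum(sec for c in calls for who, sec in c["holds"] if who == "agent")

pct_hold = 100 * hold_time / total_time             # negative metric #1
pct_agent_initiated = 100 * agent_hold / hold_time  # the subtler split

print(f"{pct_hold:.1f}% of call time spent on hold; "
      f"{pct_agent_initiated:.1f}% of hold time was agent-initiated")
```

With these toy numbers, 40% of total call time is hold time, and about 82% of that hold time was the company’s doing rather than the customer’s — exactly the split that tells you whether to fix the system or the script.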
Negative Metric #2
An hour later, another agent called me and started the same spiel. This time I interrupted and asked to be put on the no-call list. No problem, apparently — except that the astute reader will note, “Wait, you said they called you seven times!” You needn’t be a genius to realize that being added to the no-call list didn’t take. So, negative metric #2: “How long does it take you to stop contacting customers after they tell you to leave them alone?” If your answer is “up to 30 days to take effect” then, congratulations, your company is a corporate stalker.
If I can ask them to shut off my cable and they can do that in an hour or less, surely they can flag my account for no calls in approximately the same period of time. What’s the negative metric here? “After a customer requests us not to call them, how many times do we do so anyway?” And, “Of that group who received calls after asking to be put on the no-call list, what’s our churn rate for those who got one extra unwanted call? Two? Three?”
Measuring this teaches you something: “What’s the correlation between stalking our customers and their complaining/stopping/reducing their avg $ value to us per month?” Everyone reading this knows the individual pieces making up these metrics exist — good companies put it together so they understand their negative metrics and work to reduce them; bad companies never even put the puzzle together.
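Putting the puzzle together can be as simple as joining two records you almost certainly already have: how many times each customer was called after opting out, and whether they churned. A minimal sketch with made-up data (the field names are illustrative, not from any real CRM):

```python
from collections import defaultdict

# Hypothetical per-customer records: calls received AFTER the customer
# asked to be put on the no-call list, and whether they churned.
customers = [
    {"unwanted_calls": 0, "churned": False},
    {"unwanted_calls": 0, "churned": False},
    {"unwanted_calls": 1, "churned": False},
    {"unwanted_calls": 2, "churned": True},
    {"unwanted_calls": 3, "churned": True},
    {"unwanted_calls": 3, "churned": True},
]

# Bucket customers by unwanted-call count: calls -> [churned, total]
buckets = defaultdict(lambda: [0, 0])
for c in customers:
    buckets[c["unwanted_calls"]][1] += 1
    buckets[c["unwanted_calls"]][0] += c["churned"]

for n in sorted(buckets):
    churned, total = buckets[n]
    print(f"{n} unwanted call(s): churn rate {churned / total:.0%}")
```

If churn climbs with each extra unwanted call, you’ve quantified exactly what the stalking costs you — and you have a number to put in front of whoever owns the outbound-call system.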
Negative Metric #3
Third call, same thing. Except this time the agent was someone who spoke two different languages but was fluent in neither. Negative metric #3: “What ratio of ‘hang-ups’ or ‘customer-didn’t-do-what-we-wanted’ occurs for native-speaker agents calling our customers versus our ‘multi-lingual’ agents?” If you don’t have a problem, then this metric should be approximately equal across both groups.
Fourth call. Now I’m getting into the rhythm and really starting to think up ways to torture this process. I wait through the “several seconds” (apparently in 2013, “several” and “78” are equivalent) it takes for the fellow to put me on hold and return with offers, and he tries to get me into a Triple Play (this is a package in which they provide cable + Internet + digital phone service). As a bundle, they can offer this at a lesser price. I don’t need a home phone (mobile only, here!) so I’m not really interested in this package, even though it’s maybe $10 cheaper per month than what I’m currently paying.
The agent seems surprised I’m not interested in saving $10 and rattles off the “value” of the services which, if purchased separately, would cost far more than what I’m currently paying. Nice try, but all you’re doing is convincing me that your prices are already inflated, since you can discount them so dramatically. Kind of like the Persian rug business down the street that has been going out of business for 12 or more months: jack up the price and then discount it heavily so people think they’re getting a bargain. It also inspires a test: “If we offer customers less of a discount, do fewer of them convert?” You’d think so, but if the discount is so large that even Grandma realizes she was over-paying to begin with, this test might turn out to be a surprise.
Negative Metric #4
“What about packages just for Internet + cable?” I ask. Oh no, no specials available for me since those are the services I already get. Negative metric #4: “What churn rate do we have for customers who find out that they have been paying far more for staying with the company than they would as a new customer?” This sort of bad impact on brand is poisonous and hard to measure.
I don’t know how you’d keep me from finding out, but surely you don’t need to throw the negative consequences of my loyalty in my face. Perhaps you could set up a test on a (very) small percentage of the customers where you explicitly say this to them and then look at the churn rate of that group compared to the control — where you don’t mention on a sales call with an existing customer that new sign-ups always get a better deal.
Wouldn’t it be an important thing to know? What if it turned out that churn among existing customers was 2x, 3x, or 5x higher when they were told it was cheaper for them to discontinue service and then call back in a month and get a sweetheart deal? Wouldn’t understanding this impact how you’d go about future pricing, or at least modifying the sales script of call center agents?
Calls 5+6+7 were close repeats of the above, and I did get more negative metrics identified — which would be great if I worked at the cable company and gave a crap. I will say the final call convinced me to drop cable TV altogether, so now I just have Internet. I’m pretty sure that wasn’t the positive success metric they were going for when they first started calling me. Their negative metrics completely obscured their positive metrics. But, are they aware of this? Are you aware when this is happening at your company?
Some opinions expressed in this article may be those of a guest author and not necessarily Marketing Land. Staff authors are listed here.