Have you ever heard of the Milgram experiment? This behavioral experiment tested whether one participant would deliver electric shocks to another participant at a fatal level even as they listened to the second participant’s screams of pain. Shockingly (ahem), they did. Luckily, no one died. The other participant was an actor, and the shrieks the first participant heard were recorded.
This disturbing experiment got me thinking: how do you design a behavioral experiment, and how would that work for studying customer behavior in your experience? Luckily for me, my podcast partner, Professor Ryan Hamilton from Emory University, is an expert in this area, so he shared his five rules for creating behavioral experiments in a recent episode, and I thought I would share them here as well.
Before I share the rules, let's review why you should care about setting one of these up properly. The most important reason is that experiments are the gold standard for determining causality. When set up correctly, we know that the actions taken in the experiment caused the outcome. However, with most other data sets, from sales numbers to customer surveys to web traffic reports, we have to infer causality from the data.
I have run into this problem in the past. I have told organizations that if they make my suggested changes to their experience, it will bring in $X of revenue. They then ask, "How do you know that the revenue increase is because of this change you suggest and not something else we have done?" My answer is very unsatisfying: "I don't." We can never know the causality of behavior with 100 percent certainty.
However, we should still try.
Per Professor Hamilton, there are three things needed to establish causality:
- Have a temporal precedent, meaning the cause comes before the effect.
- Establish correlation, which refers to combined movements of the cause and effect, meaning if I increase the cause, I should also see the effect increase (or decrease).
- Allow for no better alternative explanation, which is the tricky one.
The more data we have, the more we can confidently rule out alternative explanations. However, it’s always contingent. A new reason might explain the information better tomorrow, replacing the one we have as the causal explanation.
That said, the results of a good experiment will always be the best explanation we have and the one that gets us closest to confidence. Sound experiment design will help us build a solid case for causality. So, here are Professor Hamilton's five rules:
The 5 Rules for Running Behavioral Business Experiments
- Define your metrics and what you’re going to measure.
- Establish comparison conditions.
- Randomize or get as close to randomization as you possibly can.
- Define the theory…or not.
- Foster a culture of experimentation.
Let’s take a closer look at each of these.
Rule #1: Define your metrics and what you’re going to measure.
The first thing you need to do if you want to run an experiment with your business is to figure out your dependent variables. Knowing what you want to measure before you design everything else is essential. For business, the ultimate goals are often distant, things like profitability or happier customers (that will lead to more profitability). However, for the experiment, profitability may be an unreasonable metric to use.
We want a dependent measure that happens close in time to when we run the experiment. It should also be simple to measure, related to what we care about, and sensitive enough to detect any changes.
Most organizations might choose a different measure than profitability. It could be revenue, Net Promoter Score®, customer satisfaction, etc. Whatever it is, your measure should move along a continuum. Which one you choose depends on many factors, like how easy it is to measure or how close it is to what you want (profitability).
Rule #2: Establish comparison conditions.
There are a few things you need to have an experiment. One of the most significant is establishing different conditions to compare. So, first, you need to divide up people or things and then treat each group differently in ways that make sense for your question.
Many of you might think of a control group as the classic example. However, there does not always need to be a control. You could instead have multiple treatment groups, each receiving a different version of the experiment. What is essential is establishing distinct conditions so you can compare the outcomes.
Rule #3: Randomize or get as close to randomization as possible.
When you’re running experiments, randomization is like magic.
It solves so many problems.
For example, if you have two conditions, treatment and control, and groups from two geographic regions, the worst thing you can do is divide the conditions by region. People from one area might be fundamentally different from people in the other in some way, which leaves the door wide open to a better alternative explanation for the causality in your experiment. Randomizing people individually into the different conditions takes care of that.
Randomization used to be challenging. Now, with technology, it is accessible in many cases. For example, if you want to test different versions of a website or an email campaign, you can randomize which recipients get each link or email. However, it is more challenging to randomize two versions of a TV campaign because you have to work with the specific markets the ad reaches. The same goes for retail locations. If you have brick-and-mortar locations in different geographic areas, avoid assigning the same condition to all similar places. Instead, mix it up by assigning treatment conditions on something arbitrary, like even vs. odd store numbers, to get as close to randomization as you can.
The same goes for your groupings. If you are testing something on multiple customer segments, you don't want to give one segment all of one condition and another segment all of the other. Instead, you want to assign both conditions to randomly selected individuals within each segment. Then, you have data you can use.
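As a sketch of what this looks like in practice, here is a minimal Python example. The customer list, segment names, and condition labels are all hypothetical; the point is simply that each segment ends up containing both conditions, assigned at random to individuals:

```python
import random
from collections import defaultdict

# Hypothetical customer list: (customer_id, segment)
customers = [
    (1, "retail"), (2, "retail"), (3, "retail"), (4, "retail"),
    (5, "online"), (6, "online"), (7, "online"), (8, "online"),
]

def stratified_assignment(customers, conditions=("treatment", "control"), seed=42):
    """Shuffle customers within each segment, then deal them round-robin
    across conditions so every segment contains every condition."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    by_segment = defaultdict(list)
    for cust_id, segment in customers:
        by_segment[segment].append(cust_id)

    assignment = {}
    for segment, ids in by_segment.items():
        rng.shuffle(ids)
        for i, cust_id in enumerate(ids):
            assignment[cust_id] = conditions[i % len(conditions)]
    return assignment

assignment = stratified_assignment(customers)
```

Dealing round-robin after the shuffle also keeps the group sizes balanced, which a simple coin flip per customer would not guarantee.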
Rule #4: Define the theory…or not.
It's become trendy, especially online, to do A/B testing. The idea with A/B testing is to see what works. For example, you change the background color on the website and see how customers respond. That is an experiment, and it produces valuable information.
However, without a theory about why it works, you don’t know why customers responded the way they did, which makes further insights around this difficult.
If we start with some hypotheses, we can build toward a theory. Then, we can build towards a more significant understanding of what’s going on, which can be more helpful.
For instance, we've done A/B testing on different titles for emails.
The ones that present the urgency of the email as a loss perform better than the general titles. Without understanding our natural tendency to weigh losses more heavily than equivalent gains, which is the core of Loss Aversion, you might not realize why. Instead, you might only notice that these titles seem to work better than the general ones.
When you have a hypothesis, you are testing something. So, if you hypothesize that Loss Aversion applies to this setting and find that it does, you have greater confidence for the next time you test. Or, if you don’t have time to test, you know that email titles positioned as potential losses do better than other titles. Then, you can write one that leverages our aversion to losing things and feel confident it will boost your results.
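As a sketch of how you might check whether the loss-framed titles really did outperform the general ones, here is a simple two-proportion z-test using only the Python standard library. The open counts below are made up for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: returns the z statistic and two-sided p-value."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical campaign results: loss-framed title vs. generic title
z, p = two_proportion_z(success_a=120, n_a=1000,  # loss-framed: 120 of 1000 opened
                        success_b=90, n_b=1000)   # generic: 90 of 1000 opened
# A small p-value suggests the difference is unlikely to be chance alone.
```

A small p-value doesn't prove the Loss Aversion hypothesis on its own, but combined with randomized assignment it makes chance a much weaker alternative explanation.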
That doesn't mean it's universally going to work. That's why we test these well-worn theories: they don't apply everywhere. In this setting, however, we know the principles behind Loss Aversion work in the current circumstances.
Having a theory is a head start. A theory gives you a more advanced place to begin further improvements than you would have without one.
Rule #5: Foster a culture of experimentation.
So, this may seem like a strange rule, but it is crucial. The sad truth is that most experiments fail, meaning they fail to produce significantly different results.
However, these failures are crucial. If we didn’t run the experiment, there’s a good chance we would never know if our idea worked or not. But if we do run it and it fails, well, now we know. We know that it didn’t work in this setting, so we need to try something else.
The goal of experimentation is not to celebrate when experiments fail. On the contrary, we would rather they worked. But, if they do fail, the experimenter mustn’t be punished. After all, we now know something because we tried.
This one is vital in business today. Many organizations do not have a culture of experimentation. Furthermore, people don't like failure, so a culture that accepts failure is a hard sell.
However, knowing what doesn't work is also valuable information as we advance. If we don't allow experiments to fail and recognize when something isn't working, we develop a culture where people hide things. If what we try doesn't work, and we know failure of this sort brings punishment, our incentive is to obscure the truth. We'll pretend it does work, which would be terrible because we cannot change and improve under those conditions. A culture where we try things, fail, and learn from the failures is better for progress.
Hopefully, these five rules take your business experiments to another level. I encourage you to implement them in your experience improvement programs and customer strategy. Moreover, I hope you celebrate your losses and failures. Maybe you didn't find what works, but at least now you know what doesn't.
If you have a business problem that you would like some help with, contact me on LinkedIn or submit your pickle here. We would be glad to hear from you and help you with your challenges.