On this episode of #Growth, host Matt Bilotti walks us through how to pick, plan, and execute a growth experiment. Most crucial to getting started? Make sure the experiment you choose can produce a statistically significant dataset to run against and, ultimately, to base your conclusions on.
This experiment framework can apply to all parts of your business. Ready to get started with your own growth experiment? Tune into #Growth now.
Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️⭐️ review and share the pod with your friends! You can connect with Matt Bilotti on Twitter @MattBilotti.
Subscribe & Tune In
In This Episode
0:16 – Overview of the episode: how to run a growth experiment
0:40 – Example using activation
2:05 – Resources for statistical significance
2:17 – When running an experiment, what should you know upfront?
3:30 – Be in the shoes of the lever you’re trying to move
4:04 – Matt’s strategy of tearing down a product
4:30 – Brainstorm ideas based on the teardown
4:44 – Make big ideas for big changes
5:25 – Outline next steps for making the changes happen
5:45 – Use scientific method for your experiments
7:20 – Find a way to create a control group
8:21 – Points to remember: have a good hypothesis, have an internal process for running experiments, and don’t get wrapped up in the numbers.
Matt Bilotti: What’s up, what’s up? Welcome to #Growth. It’s your boy Matty B, AKA the guy with the beard from the videos. Today, we’re going to be talking about how to pick, plan, and execute a growth experiment. All the things that you need to know, a step by step in terms of how to run a growth experiment for all of you out there that are getting started with this, or saying, “Maybe growth is something that I should be thinking about.” Let’s talk about what it means to make that happen.
I want to start off with a bit of an example around activation. Let’s say that we wanted to do an experiment to get people activated. People sign up on your website for your product, and then they take some kind of action to be activated. An example that I’ve used in the past is, if it were Dropbox, someone would be activated when they upload their first file. Or, if it is Gmail, they would be activated once they send their first email.
If you’re picking out an experiment to do, you have your lever. It’s activation, in this case. The first thing to know, and this is super critical, I learned this the very hard way when I started doing growth experiments: statistical significance matters. Scary, big, mathematical-sounding thing. To me, it was like, “Whoa, what does that exactly mean, and how do I measure that?” To really simplify it: if you’re running an experiment and you only have a few data points, say you can only run it on 40 accounts, you’re not going to get anything from it. Any results you get are meaningless, because there isn’t enough data to say the outcome was actually caused by the changes you made or the experiment you ran.
To get a sense of statistical significance, there are a few really great resources out there. ABTestCalc.com is a good one. Basically, the way to think about this is: if you’re going to run an experiment, how long does it take you to get to a point where you can reasonably say this thing worked? I made a lot of mistakes when we started doing experiments, saying, “Yeah, it’ll be fine. We’ll have enough data. We’ll be able to pull some results from it.” Only to be two and a half weeks into an experiment, having spent a lot of time and energy, and then look around and say, “Oh wow, we’re going to have to run this thing for two more weeks to get data out of it.” That is a bad place to be. Just have a sense up front of whether this thing is going to give you enough data.
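To make that up-front sense check concrete, here is a rough sketch of the standard normal-approximation sample-size formula for comparing two conversion rates. This is my own illustration, not a tool Matt mentions; the function name and example numbers are hypothetical, and calculators like ABTestCalc.com do essentially this math for you:

```python
import math

def sample_size_per_variant(baseline_rate, min_lift, ):
    """Rough per-variant sample size needed to detect an absolute lift
    in a conversion rate, using the normal approximation."""
    z_alpha = 1.96  # two-sided test at the 95% confidence level
    z_beta = 0.84   # 80% power
    p1 = baseline_rate
    p2 = baseline_rate + min_lift
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (min_lift ** 2)
    return math.ceil(n)

# e.g. a 20% activation rate, hoping to detect a 5-point lift:
# you need roughly a thousand accounts per variant, so an
# experiment you can only run on 40 accounts is a non-starter.
print(sample_size_per_variant(0.20, 0.05))
```

If your product only signs up a few hundred accounts a month, a number like that immediately tells you the experiment would take weeks longer than you planned.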
Now that you have a sense that you’re going to have enough data around something, go ahead and do that thing, right? If you’re trying to get people to be activated, go sign up for your product and see what the experience is like for them when they’re going through that same scenario, right? Maybe you’re not sending any emails. It’s surprising how many times you might realize that you have people signing up for your product and you’re not sending them emails. Go click around in the product, see if there’s anything that points you in the right direction, right? Go be in the shoes of whatever lever you’re trying to move. If it’s revenue, for example, people trip a limit in your product and they get to a point where they should pay. Go do that. Sign up for the product, put a bunch of data in, trip the limit somehow, and then see what happens.
Does anything prompt you to go upgrade? Is the sales person reaching out? Is there a state in the product that actually locks something down until you go upgrade? Go be in the shoes, and once you’re in those shoes, the way that we like to do it here is we do tear downs of our own product. We go through, we take screenshots the whole way through, focused on whatever experiment we’re thinking of running. We take screenshots and say, “This thing was confusing. I didn’t understand what this thing was supposed to be. Nothing explained that I was supposed to do this thing next.” Right? Put that all down.
Then, spend 10 to 15 minutes just thinking about ideas. Write them all down based on the tear downs that you’ve done, the ideas that you might have, of stuff that you saw in other products. Go ahead, write ’em all down, and then look at it again and tell yourself that you’re probably not thinking big enough. It might be really tempting to make a small tweak, like changing where this button is on the dashboard. But, ultimately those are not going to be the kinds of experiments that are going to get you the real big changes, and data that you need to know if this thing worked or not.
Think bigger, right? Go back through that list and say, “How can we 10X this? How can we 10X this idea?” Right? Maybe instead of moving a button in the dashboard, you just remove all the content in the dashboard, and put a button, right? That button is the only thing that people see, and it’s the only thing that you’re driving them through. Next, you need to outline what you need to do to make this happen. Think about this as a grammar school science project, right? You’re going to use the scientific method. At Drift, for each experiment, we write a growth one pager, which basically outlines the scientific method for that experiment.
The sections of it are: observations; hypothesis; experiment, so what is the experiment; background and context; general requirements; concepts and references; experiment size and control; and the metrics you want to track that will tell you success at the end of the day. Those are the things: observations, hypothesis, experiment, background and context, requirements, concepts and references, experiment size and control, and success metrics.
Now, you might be saying, “Whoa, that’s a lot of things. How much time should I be spending on this?” Generally, putting together a one-pager should take no more than an hour. One really important note: the hypothesis needs a number in it. It’s really easy to just say, “Well, if we move this button on the dashboard, more people will click it.” Right? That’s not something you’re really going to be able to prove or disprove the way that you need to. Put a number on it: if we do this, then we will get five percent more people clicking on that button.
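Once the experiment has run, a numeric hypothesis like that can actually be checked. As a sketch (this is not Drift’s tooling, and the numbers below are hypothetical), a two-proportion z-test compares the click rate in the control group against the variant:

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: did variant B's click rate beat control A's?
    Returns the z statistic; |z| > 1.96 is significant at the 95% level."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 200/1000 control clicks vs. 260/1000 variant clicks,
# a 6-point lift. z comes out well above 1.96, so the lift is significant.
print(round(two_proportion_z(200, 1000, 260, 1000), 2))
```

With a numberless hypothesis like “more people will click it,” there is nothing to plug into a test like this, which is exactly why the one-pager forces a number.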
From here, you have your one-pager. It’s built, looks great. You didn’t spend too much time on it, but you got some stuff together. Send it over to the rest of the team that would be implementing it and working on it, so the designers and the engineers. Then, go ahead and work to build it out. This one is really, really tricky, and we’re going to talk about it in the next episode: finding a way to create a control group. A control group is the group of people that you are not introducing the change to, so that you can measure the success of the change or experiment that you ran. Super, super important. I’ve been burned on that a lot. We’ll talk about it another time.
Then, once you build the experiment, turn it on, and don’t touch it. It’s really tempting, especially if you’re coming from any kind of super iterative background. Maybe that’s in sales where you change stuff on every single call, or a product where you’re iterating on the product with an experiment. Set it, and let it go. If you start touching it, and changing it, and saying, “Oh, well maybe we can change the button color instead of just moving it.” Then you’re introducing new variables, and it’s going to make it really, really hard to know if the experiment actually worked or not.
To bring this all home, there are a couple of really important points here. One is that you have to have a really great hypothesis. Put yourself back in your 10-year-old shoes. Remember what it was like to prove that black cloth makes something hotter, or that feeding a plant lemonade makes it more likely to grow, whatever it was you did for your science fair. I think those are some of the ones that I did. Put yourself in those shoes. Have a really, really solid hypothesis.
Have an internal process for this, right? With us, it’s these one pagers, the documentation around the experiment. It’s a really, really important thing to have that reference point to come back to. Don’t get too wrapped up in making sure that it’s perfect, but have something down there to say, “Here’s the thing that we’re testing, here’s roughly what it’s going to look like, here are the numbers that we’re going to want to move, and here are the metrics that we’re going to measure as a result.”
At the end of the day, the most important point is, you’re testing that thing, you’re going to run an experiment. Don’t get so wrapped up in the numbers. It’s okay to be imprecise with this stuff, especially when you’re just getting started. Over time you’ll learn more of a process, and more of a down pat way to build out and run experiments. All right, thanks so much for listening. Really appreciate it. Six stars only, seven stars … DC might be saying, “Eight stars only,” these days, who knows. Thanks again, catch you next time.