The 3-Step Process to Increasing Your Website Conversion Rate

By Drift


Editor’s Note: The following is a guest post. Interested in contributing content to the Drift blog? Email Molly Sloan at msloan@drift.com. 

Pssst. What’s the conversion rate optimization (CRO) secret?

The secret is that there is no secret.

There’s no tried and true A/B test you can run that is guaranteed to improve your conversion rate. There’s no CRO expert in the world who can look at your site and know with absolute certainty how to improve your conversion rates. Copying your competitors won’t help you, either.

Best practices are made to be broken, my friend.

There’s no magic to CRO.

The truth is a lot less exciting: it’s all about process. Conversion rate optimization is a scalable, repeatable process that works as a positive feedback loop. The best thing you can do for yourself (and your bottom line) is learn to perfect the step-by-step process.

Wait, what is a good conversion rate?

While it may be personally interesting to see how your conversion rates stack up against the rest of your industry, that information is useless.

Conversion rate optimization is incredibly contextual and nuanced. You have different pages, different value propositions, different customer segments, different calls to action, different traffic sources… the list is endless. It’s easy to see how comparing conversion rates among even your own assets can be problematic, let alone against aggregate industry averages.

Instead, focus on a question that you can actually take action on: is my conversion rate for this asset higher than it was 60 days ago? A good conversion rate is an improved conversion rate.
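
To make that concrete, here’s the arithmetic in a few lines of Python (the numbers are invented for illustration):

    # Compare this asset's conversion rate to 60 days ago (illustrative numbers).
    prev_sessions, prev_conversions = 18_400, 512
    curr_sessions, curr_conversions = 19_100, 601

    prev_rate = prev_conversions / prev_sessions   # ~2.78%
    curr_rate = curr_conversions / curr_sessions   # ~3.15%

    relative_lift = (curr_rate - prev_rate) / prev_rate
    print(f"{prev_rate:.2%} -> {curr_rate:.2%} ({relative_lift:+.1%} relative)")

If that lift is consistently positive period over period, your process is working.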

Step 1: Conversion Research

What will increase your conversion rate? That’s the big question. If you can’t rely on secrets and magic, what can you rely on? Conversion research that’s specific to your site. Conversion research that’s focused on your unique problems and your unique audience.

There are two types of conversion research: quantitative research and qualitative research.

Quantitative Research

This is the objective, numbers-driven research. Quantitative research gives you the “what”.

Analytics Analysis

Nothing beats rolling up your sleeves and diving into cold, hard analytics. The phrase “In God we trust; all others bring data” comes to mind.

Several articles could be written on analytics analysis alone. Here’s what’s super important, though: it requires actual, well, analysis. That might sound silly, but it’s worth noting because many marketers expect insights to jump off the screen and into their laps. It’s very easy to fall into the trap of reporting instead of analyzing.

For best results, go into this process with a list of questions you want answered. Stick to actionable questions. What will you do with the answer? If you’re not sure, refine the question until you are. You will come up with new questions along the way, of course, but this approach will help guide your exploration and prevent analysis paralysis.

Here are some other tips to keep in mind:

  • Data Integrity: Before doing any sort of analysis, confirm the integrity of your data. Are you collecting all of the data you need? Is the data you’re collecting complete and accurate?
  • Quality Assurance: Does your site work properly on every browser and on every device? Someone somewhere is still trying to access your site on a 2006 Motorola Razr, I promise. You can use your analytics tool of choice to figure out which browser versions and devices your visitors use most often, and identify potential issues.

For example, you might notice visitors using Internet Explorer 7 convert significantly worse than visitors using Internet Explorer 8, 9, 10 and 11. Write that down as a potential problem area to explore.

Note that you should always compare within browser and device families. For example, you shouldn’t compare Firefox to Internet Explorer or iPhone to Android.

Making sure your site works the way you intend it to and loads quickly (ideally in 3 seconds or less) should be a top priority.

  • High Volume, Low Value Pages: A blog post that ranks well organically, for example. Identifying potential issues on these pages will have a high return on investment.
  • Low Volume, High Value Pages: A contact form or checkout page, for example. Identifying potential issues on these pages will also have a high return on investment.

Finally, be sure to segment your data to uncover even more insights. Looking at data in aggregate is useful, but there’s so much hidden below the surface. You can segment your data to look at people who converted and people who didn’t. You can segment your data by day of the week or even the time of day. You can segment based on number of visits, geographic location, traffic source… you name it.

Slice your data, look at it from every angle! You’ll be surprised by what you find.
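
If your analytics tool can export session-level data, this kind of slicing takes just a few lines of pandas. Here’s a minimal sketch; the column names and values are invented for illustration, not taken from any particular tool:

    import pandas as pd

    # Hypothetical export: one row per session.
    sessions = pd.DataFrame({
        "browser":        ["IE 7", "IE 11", "IE 11", "Firefox 60", "IE 7", "Firefox 60"],
        "traffic_source": ["organic", "paid", "organic", "organic", "paid", "email"],
        "converted":      [0, 1, 0, 1, 0, 1],
    })

    # Conversion rate by browser version (remember: compare within the same family).
    print(sessions.groupby("browser")["converted"].mean())

    # The same one-liner slices by traffic source, day of week, device... you name it.
    print(sessions.groupby("traffic_source")["converted"].mean())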

Heatmaps

There are two main types of heatmaps: scrollmaps and clickmaps. Scrollmaps show you how far down a page your visitors scroll. Clickmaps show you where your visitors click on your page.

Typically, heatmap tools use warm colors (red, yellow) to demonstrate high activity and cool colors (green, blue) to demonstrate low activity. The brighter the color, regardless of whether it’s warm or cool, the more activity.

Here’s the thing: heatmaps look more useful than they really are. They look nice in reports and during presentations. But they do have a few important uses:

  1. Scrollmaps can help you define your messaging hierarchy and identify the need for visual cues. What’s your most important message? Is it in an area of the page that gets attention? If you notice a drastic change at any point on the page, that may indicate the need for a visual cue (e.g. an arrow) to keep visitors scrolling. Alternatively, it may indicate disinterest in the messaging at that point. (A short sketch of how a scrollmap is built from raw data follows this list.)
  2. Clickmaps can help you figure out what visitors want to click on, but can’t. Most of what you can learn from a clickmap is available to you via an analytics tool. But it’s interesting, in some cases, to see what people believe to be (or hope to be) links. For example, you might notice people trying to click the phrase “conversational marketing”. That could indicate they’re unfamiliar with the term and would like more context, or maybe that they would like to see an example.
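
To make the scrollmap idea concrete, here’s roughly how a tool builds one, assuming you’ve logged each visitor’s maximum scroll depth as a percentage of page height (the data below is invented):

    # Hypothetical raw data: each visitor's maximum scroll depth (% of page height).
    max_depths = [100, 45, 80, 30, 100, 55, 45, 90, 25, 60]

    # For each 10% band of the page, what share of visitors scrolled at least that far?
    for band in range(10, 101, 10):
        reached = sum(1 for d in max_depths if d >= band) / len(max_depths)
        print(f"reached {band:>3}% of page depth: {reached:.0%} of visitors")

A sharp drop between two bands is exactly the “drastic change” mentioned above: the point where a visual cue (or better messaging) might keep visitors scrolling.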

Qualitative Research

This is the subjective, people-driven research. Qualitative research gives you the “why”.

Heuristic Analysis

Heuristic analysis is the process of evaluating a site for usability issues and optimization opportunities. The process relies on principles, best practices and expertise, which means it is not an exact science.

The more experience you have with conversion rate optimization, the more accurate your heuristic analysis is likely to be. However, even the most experienced optimizer doesn’t have a crystal ball, and best practices are proven wrong time and time again.

When conducting heuristic analysis, it’s important to remain as objective as possible. The entire process is inherently subjective, so to avoid calling out personal opinions, it’s best to evaluate the site based on predetermined factors. For example:

  • Clarity: Is the value proposition clear? Does it motivate the visitor to take action? Is the action you want the visitor to take clear?
  • Relevancy: Is anything distracting visitors from the action you want them to take? Is anything detracting from the value proposition? Does the design and copy meet visitor expectations?
  • Friction: Is anything causing doubt or uncertainty? Is the action you want the visitor to take difficult to take? Is there any unnecessary complexity?

If possible, conduct heuristic analysis in a group setting to further reduce personal bias.

Surveys

There are two main types of surveys: on-site surveys and customer surveys. An on-site survey appears to site visitors either after a set period of time or when exit intent is shown. A customer survey is sent out, typically via email, to paying customers.

Since on-site surveys appear to a lower-intent audience than customer surveys, they’re typically short. You might ask one open-ended question, or you might start with a yes/no question and then ask for an open-ended elaboration. Either way, you have to phrase your questions strategically.

When running an on-site survey, you want to focus on the friction. You might ask:

  • What’s the purpose of your visit today?
  • Is there anything holding you back from completing your task today?
  • Do you have any unanswered questions about this product?

Customer surveys can be a bit longer, which makes them a rich source of voice of customer copy (that’s when you take actual quotes from customers and use them to inform your site copy).

While it may occasionally be useful to survey repeat or long-time customers, conversion rate optimizers are typically more interested in hearing from recent first-time buyers. They’re more familiar with your customer journey, and they can more accurately recall their thoughts and feelings throughout the buying experience.

Where on-site surveys focus on the friction on your site, customer surveys should focus on the perceived value your product delivers. You might ask:

  • What problem does this product solve for you?
  • Why did you choose this specific product over similar products?
  • What was your biggest concern about purchasing this product?

Aim to collect a few hundred survey responses. That’s a somewhat arbitrary number, but your goal is to be able to identify patterns without sinking days and days into reviewing responses. Because you should be focused on open-ended questions, reviewing can be very time-consuming. It’s a delicate balance and 250-300 responses tends to be the point of diminishing returns, in my experience.
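
One way to keep that review manageable is to tally recurring words before reading every response in depth, so patterns surface faster. A rough sketch (the responses are invented, and a real analysis would also want stopword removal and phrase-level grouping):

    from collections import Counter

    # Hypothetical open-ended answers to "What was your biggest concern?"
    responses = [
        "I wasn't sure about the price compared to alternatives",
        "Price was my main worry, plus the setup time",
        "Worried the setup would take too long",
    ]

    # Tally word frequency across responses to surface candidate themes.
    words = Counter(
        word.strip(".,").lower()
        for response in responses
        for word in response.split()
    )
    print(words.most_common(10))

Frequent words like “price” or “setup” point you to the themes worth a closer read.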

Interviews

When you conduct interviews, you’ll want to talk to both your employees and your customers.

For best results, start with employee interviews because they can actually help guide your customer interviews. You’ll want to start by talking to employees in the proverbial trenches: support and sales. These employees speak to your customers day in and day out—they’re a wealth of knowledge.

Focus on identifying common customer questions, frustrations, pain points, benefits and objections. How do your employees address those questions, frustrations, pain points and objections when they arise? What words do they use, specifically, when selling the benefits?

To complement the internal employee interviews, consider mining support logs and sales logs from the last 6 months as well.

Customer interviews are valuable for voice of customer copy, but they can also help you decide what products to build and which experiments to run. However, you get out what you put in. If you want valuable insights, you have to go into each interview with a strategic plan:

  1. Decide if you’re trying to define the problem or the solution. Interviews are valuable when you don’t have the answer to a pressing question or when you think you have the solution to an important problem. The former is generative (you’re trying to generate answers or solutions) and the latter is evaluative (you’re trying to validate your solution).
  2. Generate a list of questions based on your goal. The questions you ask will depend on your goal, your industry, even your geographic location. While it may sound basic, it’s common to use the five W’s and an H (who, what, when, where, why and how) to guide your question design. Just be sure to keep an open mind and make the candidate feel like the expert. You can also extract value from having candidates perform tasks or role-playing exercises.
  3. Segment and screen interview candidates so you get the right people in the room. The most carefully crafted interview questions won’t be valuable unless you’re talking to the right people. Generally speaking, you’ll want to speak to former, current or potential customers. But within each of those categories are many subcategories: for example, current customers who purchase more than once a month. Think critically about who you invite for an interview.
  4. Develop a report and extract the insights. Record the interview and have a second interviewer take live notes. We always overestimate the power of our own memory, so capture everything you can. Have the interview transcribed so you can get exact voice of customer copy. As you dive into the interviews, let go of any assumptions you have. If you don’t, you will simply end up confirming them. Start by looking at each interview individually, then zoom out and identify the recurring themes among all of the interviews.

When all is said and done, you’ll end up with a new set of questions for your next round of interviews and a list of opportunities uncovered.

User Testing

User testing allows you to watch as someone interacts with your site, often narrating their thoughts as they go. The longer you work on a site or at a company, the easier it is to overlook minor issues. You have the curse of knowledge. That’s why having someone unfamiliar with your site and your company come in with “fresh eyes” is valuable.

But you don’t want to collect opinions. You want to collect useful data.

That’s why you’ll have your participants complete a series of predetermined tasks. For example, you might ask that they find pricing details for your enterprise plan or purchase a product from your ecommerce store.

For best results, have every participant complete the same set of tasks. Where do they routinely stumble? What seems unclear to them? What user experience (UX) issues come up?

Whether you’re using a dedicated user testing tool or hopping on a video call, be sure your task list is perfectly clear. Run it by a friend or colleague before the user testing sessions begin. Keep it simple, keep it specific.

If you do opt to use a video conferencing tool vs. a dedicated user testing tool, stay as quiet as possible. Get comfortable with silences, and avoid answering questions or making comments. Your job is to simply watch, listen and record.

Session Replays

Session replays allow you to watch as real potential customers with real intent to spend real money interact with your site. The downside (compared to user testing) is that you don’t get to hear that precious, precious narration. The upside is that you are peeking over the shoulders of actual potential customers as they evaluate your value proposition and UX.

You’re looking for points of friction. Where do they get stuck? When do they leave a page only to return again? Where do they pause to read and where do they simply scan? How long does it take them to find what they’re looking for or complete a task?

Take notes as you watch the recordings and you’ll slowly start to notice patterns develop.

Step 2: Extract & Prioritize Insights

Congrats! You’ve just invested hours and hours into collecting quantitative and qualitative data. Now what?

As with analytics analysis, be careful not to fall into the trap of reporting instead of analyzing.

Take customer interviews, for example. What did you learn from each individual interview? What patterns exist among all of the interviews? How does that compare to the patterns that exist among internal employee interviews? How did the responses from support employees and sales employees differ? How do all of these patterns fit with the patterns identified during session replays? User testing? You get the idea.

Collecting the data in the first place is no small feat, but that’s truly only the beginning. The real work is in looking at all of the data you’ve accumulated and slicing it in a way that reveals insights. Then repeat, repeat, repeat.

You’ll end up with a big list of quick fixes (e.g. fix the UX in Safari 11), test ideas (e.g. which of these three value propositions would perform best?) and things to explore further (e.g. what’s causing the slow load time on the contact page?).

Tackling the list will seem overwhelming, which is why you’ll want to prioritize. There are a couple different prioritization frameworks you can borrow from:

  • ICE: This is an acronym for impact, confidence and ease. You estimate the impact of a change, how confident you are in that estimate, and how easy the change will be to implement, each rated on a scale from 1 to 10 (1 being the lowest). On one hand, this framework is very straightforward and accessible. On the other, it’s incredibly subjective, especially if you have multiple people working on conversion rate optimization. (A small scoring sketch follows this list.)
  • PXL: PXL is another popular prioritization framework. It’s significantly less subjective because it’s more binary. Is it above the fold? Is the change noticeable in 5 seconds? Is it running on a high traffic page? These are the types of questions you’ll ask yourself. What’s interesting about PXL is that it’s customizable. So, if something is particularly important to you and your business (e.g. is this change on a paid traffic landing page?), you can easily include it as an evaluation factor.
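
To see how a framework like ICE turns a messy list into a ranked backlog, here’s a minimal sketch with invented ideas and scores (some teams multiply the three numbers instead of averaging them):

    # Hypothetical backlog items scored 1-10 on impact, confidence and ease.
    backlog = [
        {"idea": "Rewrite hero value proposition", "impact": 8, "confidence": 5, "ease": 7},
        {"idea": "Fix Safari 11 checkout bug",     "impact": 6, "confidence": 9, "ease": 9},
        {"idea": "Redesign pricing page",          "impact": 9, "confidence": 4, "ease": 2},
    ]

    def ice_score(item):
        # One common convention: the average of the three scores.
        return (item["impact"] + item["confidence"] + item["ease"]) / 3

    for item in sorted(backlog, key=ice_score, reverse=True):
        print(f"{ice_score(item):.1f}  {item['idea']}")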

Regardless of which prioritization framework you use, you will have a clear understanding of where to start implementing everything you’ve learned through conversion research.

Step 3: Implement Insights

It’s the moment you’ve been waiting for: it’s test time, baby.

But wait… should you really be testing? It depends.

There are many companies pouring time and resources into testing when they would get a better return on investment from doubling down on conversion research. Why? For starters, there’s more to testing than statistical significance. You need to reach a predetermined sample size.

For example, let’s say your current conversion rate on a landing page is 3% and you want to be able to detect a 5% relative effect (a lift from 3% to 3.15%). Plug those numbers into a simple sample size calculator and you’ll get 204,493 per variation. In other words, you need 204,493 sessions for each variant. That means you need over 400,000 sessions total.


That’s a lot of traffic for a landing page. You could decrease the required sample size if you’re willing to increase your minimum detectable effect (MDE), of course. For example, instead of being able to detect an effect of 5% or more, you would only be able to detect an effect of 30% or more. But the likelihood of seeing large lifts from a single A/B test is small.
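
If you’d rather compute these numbers yourself than trust a black-box calculator, the math is a few lines with statsmodels. A sketch (different calculators use slightly different formulas, so expect numbers in the same ballpark rather than an exact match):

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline = 0.03                    # current conversion rate: 3%
    mde = 0.05                         # minimum detectable effect: 5% relative
    variant = baseline * (1 + mde)     # 3.15%

    effect = proportion_effectsize(variant, baseline)  # Cohen's h
    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"~{n:,.0f} sessions per variation")  # ~208,000, same ballpark as above

    # Raising the MDE to 30% shrinks the required sample dramatically (to ~6,400).
    effect_30 = proportion_effectsize(baseline * 1.30, baseline)
    print(f"~{NormalIndPower().solve_power(effect_size=effect_30, alpha=0.05, power=0.8):,.0f}")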

Now maybe you’re thinking you can leave the test running and eventually you’ll reach the required sample size. Makes sense, but then you start dealing with sample pollution.

The longer a test runs, the more likely it is that people will delete their cookies, meaning they could reenter your test and receive a different variant. That would skew the results.

If someone switches from a mobile device to a laptop, they could reenter your test and receive a different variant. That would skew the results.

If the paid team turns on a big ad campaign halfway through your test and traffic spikes for the landing page, that would skew the results.

For these reasons (and many others), you should aim to run tests for only two full business cycles, whatever that may be for your business. So, the question is: can you get the required amount of traffic per variation in approximately two to four weeks?

For many companies, the answer is no (and that’s ok). Just because you can’t test, doesn’t mean you can’t optimize. You still have a huge, prioritized list of optimizations to make.

If you can test, here are a few things to keep in mind:

  • Run tests for full week increments.
  • Don’t peek at the results until the test is done.
  • Get a basic understanding of statistics (i.e. know what mean, variance, significance, regression to the mean, power, etc. all mean); a small example follows this list.
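
For example, the final read-out of a finished test is often a two-proportion z-test comparing control and variant. A sketch with invented counts:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical finished test: conversions and sessions for control vs. variant.
    conversions = [620, 698]
    sessions = [20_000, 20_000]

    z_stat, p_value = proportions_ztest(conversions, sessions)
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # p below 0.05 suggests a real difference

Run this only once the test has hit its predetermined sample size; computing it mid-test is exactly the “peeking” the list above warns against.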

Conclusion

After implementation comes, you guessed it, more analysis. If you can test, know that test analysis goes so much deeper than win or lose. If you can’t test, do your best to quantify the impact of your optimizations based on historical data, even if it’s an imperfect solution.

After analysis comes archival. You want to save your results and learnings in a central repository. It is shockingly easy to run the same test twice, even at a small company. You’ll thank yourself as your team grows, too.

After archival comes iteration. Conversion rate optimization is a positive feedback loop. How can the test you just ran or the change you just implemented inform your next test idea or piece of research?

There’s no secret, there’s no magic, there’s no one behind the curtain. There’s just a rigorous, disciplined process that scales and repeats. Conversion rate optimization isn’t sexy. It’s not quick. It’s not easy. But it is effective… every time.

Get a free, custom website conversion assessment with one of our Conversational Marketing experts today. Sign up here.