How to Do A/B Testing: 15 Steps for the Perfect Split Test
Planning to run an A/B test? Bookmark this checklist for what to do before, during, and after to get the best results.
When marketers like us create landing pages, write email copy, or design call-to-action buttons, it can be tempting to use our intuition to predict what will make people click and connect.
However, you’re much better off conducting A/B testing than basing marketing decisions on a “feeling,” which can be detrimental to your results.
Keep reading to learn how to conduct the entire A/B testing process before, during, and after data collection so you can make the best decisions from your results.
What is A/B testing?
How does A/B testing work?
A/B Testing in Marketing
A/B Testing Goals
How to Design an A/B Test
How to Conduct A/B Testing
How to Read A/B Testing Results
A/B Testing Examples
What is A/B testing?
A/B testing, also known as split testing, is a marketing experiment wherein you split your audience to test variations on a campaign and determine which performs better. In other words, you can show version A of a piece of marketing content to one half of your audience and version B to another.
A/B testing can be valuable because different audiences behave, well, differently. Something that works for one company may not necessarily work for another.
In fact, conversion rate optimization (CRO) experts hate the term "best practices" because it may not actually be the best practice for you. However, this kind of testing can be complex if you’re not careful.
Let’s review how A/B testing works to ensure you don’t make incorrect assumptions about what your audience likes.
How does A/B testing work?
To run an A/B test, you need to create two different versions of one piece of content, with changes to a single variable.
Then, you'll show these two versions to two similarly sized audiences and analyze which one performed better over a specific period (long enough to make accurate conclusions about your results).
A/B testing helps marketers observe how one version of a piece of marketing content performs alongside another. Here are two types of A/B tests you might conduct to increase your website's conversion rate.
Example 1: User Experience Test
Perhaps you want to see if moving a certain call-to-action (CTA) button to the top of your homepage instead of keeping it in the sidebar will improve its click-through rate.
To A/B test this theory, you'd create another, alternative web page that uses the new CTA placement.
The existing design with the sidebar CTA — or the "control" — is version A. Version B with the CTA at the top is the "challenger." Then, you'd test these two versions by showing each to a predetermined percentage of site visitors.
Ideally, the percentage of visitors seeing either version is the same.
Learn how to easily A/B test a component of your website with HubSpot's Marketing Hub.
Example 2: Design Test
Perhaps you want to find out if changing the color of your CTA button can increase its click-through rate.
To A/B test this theory, you'd design an alternative CTA button with a different button color that leads to the same landing page as the control.
If you usually use a red CTA button in your marketing content, and the green variation receives more clicks after your A/B test, this could merit changing the default color of your CTA buttons to green from now on.
To learn more about A/B testing, download our free introductory guide here.
A/B Testing in Marketing
A/B testing offers many benefits to a marketing team, depending on what you decide to test. There is a nearly limitless list of items you can test to determine the overall impact on your bottom line.
Here are some elements you might decide to test in your campaigns:
- Subject lines.
- CTAs.
- Headers.
- Titles.
- Fonts and colors.
- Product images.
- Blog graphics.
- Body copy.
- Navigation.
- Opt-in forms.
Of course, this list is not exhaustive. Your options are countless. Above all, though, these tests are valuable to a business because they're low in cost but high in reward.
Let's say you employ a content creator with a $50,000/year salary. This content creator publishes five articles weekly for the company blog, totaling 260 articles per year.
If the average post on the company's blog generates 10 leads, you could say it costs just over $192 to generate 10 leads for the business ($50,000 salary ÷ 260 articles = $192 per article). That's a solid chunk of change.
Now, if you ask this content creator to spend two days developing an A/B test on one article, instead of writing two posts in that time, you might burn $192, as you're publishing fewer articles.
But if that A/B test finds you can increase conversion rates from 10 to 20 leads, you just spent $192 to potentially double the number of customers your business gets from your blog.
If the test fails, of course, you lost $192 — but now you can make your next A/B test even more educated. If that second test succeeds, you ultimately spent $384 to double your company's revenue.
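To make that math concrete, here’s a quick sketch that reruns the numbers from this example. The salary, publishing pace, and leads-per-post figures are the same illustrative assumptions used above, not benchmarks.

```python
# Back-of-the-envelope math for the blog example above (illustrative figures).
salary = 50_000            # content creator's annual salary
posts_per_year = 5 * 52    # five posts per week -> 260 posts per year
leads_per_post = 10        # average leads generated by one post

cost_per_post = salary / posts_per_year
print(f"Cost per post (and per 10 leads): ${cost_per_post:.2f}")   # ~$192.31

# Spending the week's writing time on an A/B test means one fewer post,
# so the experiment "costs" roughly one post's worth of salary.
test_cost = cost_per_post
doubled_leads = leads_per_post * 2   # if the winning variation lifts 10 leads to 20
print(f"Experiment cost: ${test_cost:.2f} for a shot at {doubled_leads} leads per post")
```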
You can run many types of split tests to make the experiment worth it in the end.
A/B Testing Goals
A/B testing can tell you a lot about how your intended audience behaves and interacts with your marketing campaign.
Not only does A/B testing help determine your audience’s behavior, but the results of the tests can help determine your next marketing goals.
Here are some common goals marketers have for their business when A/B testing.
Increased Website Traffic
You’ll want to use A/B testing to help you find the right wording for your website titles so you can catch your audience’s attention.
Testing different blog or web page titles can change the number of people who click on that hyperlinked title to get to your website. This can increase website traffic.
An increase in web traffic is a good thing! More traffic usually means more sales.
Higher Conversion Rate
Not only does A/B testing help drive traffic to your website, it can also help boost conversion rates.
Testing different locations, colors, or even anchor text on your CTAs can change the number of people who click these CTAs to get to a landing page.
This can increase the number of people who fill out forms on your website, submit their contact info to you, and "convert" into a lead.
Lower Bounce Rate
A/B testing can help determine what's driving traffic away from your website. Maybe the feel of your website doesn’t vibe with your audience. Or perhaps the colors clash, leaving a bad taste in your target audience’s mouth.
If your website visitors leave (or "bounce") quickly after visiting your website, testing different blog post introductions, fonts, or featured images can retain visitors.
Perfect Product Images
You know you have the perfect product or service to offer your audience. But, how do you know you've picked the right product image to convey what you have to offer?
Use A/B testing to determine which product image best catches the attention of your intended audience. Compare the images against each other and pick the one with the highest sales rate.
Lower Cart Abandonment
Ecommerce businesses see an average of 70% of customers leave their website with items in their shopping cart. This is known as "shopping cart abandonment" and is, of course, detrimental to any online store.
Testing different product photos, check-out page designs, and even where shipping costs are displayed can lower this abandonment rate.
Now, let's examine a checklist for setting up, running, and measuring an A/B test.
How to Design an A/B Test
Designing an A/B test can seem like a complicated task at first. But, trust us — it’s simple.
The key to designing a successful A/B test is determining which elements of your blog, website, or ad campaign can be compared and contrasted against a new or different version.
Before you jump into testing all the elements of your marketing campaign, check out these A/B testing best practices.
Test appropriate items.
List elements that could influence how your target audience interacts with your ads or website. Specifically, consider which elements of your website or ad campaign influence a sale or conversion.
Be sure the elements you choose are appropriate and can be modified for testing purposes.
For example, you might test which fonts or images best grab your audience's attention in a Facebook ad campaign. Or, you might pilot two pages to determine which keeps visitors on your website longer.
Pro tip: Choose appropriate test items by listing elements that affect your overall sales or lead conversion, and then prioritize them.
Determine the correct sample size.
The sample size of your A/B test can have a large impact on its results — and sometimes, that’s not a good thing. A sample size that is too small will skew the results.
Make sure your sample size is large enough to yield accurate results. Use tools like a sample size calculator to help you figure out the correct number of interactions or visitors you need to your website or campaign to obtain the best result.
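If you want to sanity-check a calculator’s output, the sketch below uses a common two-proportion approximation to estimate the visitors needed per variation. The baseline conversion rate, the lift you want to detect, and the 95% confidence / 80% power settings are assumptions; swap in your own numbers.

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline, expected, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation to detect a change from
    `baseline` to `expected` conversion rate with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = baseline * (1 - baseline) + expected * (1 - expected)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (baseline - expected) ** 2)

# Example: a 10% baseline conversion rate, hoping to detect a lift to 12%.
print(sample_size_per_variation(0.10, 0.12))   # roughly 3,800+ visitors per variation
```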
Check your data.
A sound split test will yield statistically significant and reliable results. In other words, the results of your A/B test are not influenced by randomness or chance. But, how can you be sure your results are statistically significant and reliable?
Just like determining sample size, tools are available to help verify your data.
Tools such as Convertize’s AB Test Significance Calculator let you plug in the traffic and conversion rates for each variation and select your desired level of confidence.
The higher the statistical significance achieved, the less likely it is that your results occurred by chance.
Pro tip: Ensure your data is statistically significant and reliable by using tools like A/B test significance calculators.
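If you’d like to verify significance yourself rather than rely on a calculator, the sketch below runs a standard two-proportion z-test using only Python’s standard library. The visitor and conversion counts are made-up placeholders; a dedicated calculator or statistics library will give you the same answer.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_a - rate_b) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Placeholder counts: 5,000 visitors per variation, 500 vs. 575 conversions.
p_value = ab_test_p_value(500, 5_000, 575, 5_000)
print(f"p-value: {p_value:.4f} -> significant at 95% confidence? {p_value < 0.05}")
```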
Schedule your tests.
When comparing variables, keeping the rest of your controls the same is important — including when you schedule to run your tests.
If you’re in the ecommerce space, you’ll need to take holiday sales into consideration.
For example, if you run the control during a peak sales period, the traffic to your website and your sales may be higher than they were for the variation you tested during an “off week.”
To ensure the accuracy of your split tests, pick a comparable timeframe for both tested elements. Be sure to run your campaigns for the same length of time, too, to get the best, most accurate results.
Pro tip: Choose a timeframe when you can expect similar traffic to both portions of your split test.
Test only one element.
Each variable of your website or ad campaign can significantly impact your intended audience’s behavior. That’s why looking at just one element at a time is important when conducting A/B tests.
Attempting to test multiple elements in the same A/B test will yield unreliable results. With unreliable results, you won't know which element had the biggest impact on consumer behavior.
Be sure to design your split test for just one element of your ad campaign or website.
Pro tip: Don’t try to test multiple elements at once. A good A/B test will be designed to test only one element at a time.
Analyze the data.
As a marketer, you might have an idea of how your target audience behaves with your campaign and web pages. A/B testing can give you a better indication of how consumers are really interacting with your sites.
After testing is complete, take some time to thoroughly analyze the data. You might be surprised to find that what you thought was working for your campaigns is less effective than you initially believed.
Before the A/B Test
Let’s cover the steps to take before you start your A/B test.
1. Pick one variable to test.
As you optimize your web pages and emails, you’ll find there are many variables you want to test. But to evaluate effectiveness, you'll want to isolate one independent variable and measure its performance.
Otherwise, you can't be sure which variable was responsible for changes in performance.
You can test more than one variable for a single web page or email — just be sure you're testing them one at a time.
To determine your variable, look at the elements in your marketing resources and their possible alternatives for design, wording, and layout. You may also test email subject lines, sender names, and different ways to personalize your emails.
Keep in mind that even simple changes, like changing the image in your email or the words on your call-to-action button, can drive big improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.
Note: Sometimes, testing multiple variables rather than a single variable makes more sense. This is called multivariate testing.
If you're wondering whether you should run an A/B test versus a multivariate test, here's a helpful article from Optimizely that compares the processes.
2. Identify your goal.
Although you'll measure several metrics during any one test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation.
This is your dependent variable, which changes based on how you manipulate the independent variable.
Think about where you want this dependent variable to be at the end of the split test. You might even state an official hypothesis and examine your results based on this prediction.
If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you're proposing might affect user behavior, then you may not set up the test in the most effective way.
3. Create a 'control' and a 'challenger.'
You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you're testing as your control scenario.
If you're testing a web page, this is the unaltered page as it exists already. If you're testing a landing page, this would be the landing page design and copy you would normally use.
From there, build a challenger — the altered website, landing page, or email that you’ll test against your control.
For example, if you're wondering whether adding a testimonial to a landing page would make a difference in conversions, set up your control page with no testimonials. Then, create your challenger with a testimonial.
4. Split your sample groups equally and randomly.
For tests where you have more control over the audience — like with emails — you need to test with two or more equal audiences to have conclusive results.
How you do this will vary depending on the A/B testing tool you use. Suppose you're a HubSpot Enterprise customer conducting an A/B test on an email, for example.
HubSpot will automatically split traffic to your variations so that each variation gets a random sampling of visitors.
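If your tool doesn’t split the audience for you, a simple random shuffle does the job. Here’s a minimal sketch; the recipient list is a stand-in for your own contact list.

```python
import random

# Placeholder list standing in for your real contact list.
recipients = [f"subscriber{i}@example.com" for i in range(1, 2001)]

random.shuffle(recipients)                # randomize the order
midpoint = len(recipients) // 2
group_a = recipients[:midpoint]           # receives the control
group_b = recipients[midpoint:]           # receives the challenger

print(len(group_a), len(group_b))         # two equal, randomly drawn groups
```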
5. Determine your sample size (if applicable).
How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you're running.
If you're A/B testing an email, you'll probably want to send an A/B test to a subset of your list large enough to achieve statistically significant results.
Eventually, you'll pick a winner to send to the rest of the list. (See "The Science of Split Testing" ebook at the end of this article for more.)
If you're a HubSpot Enterprise customer, you'll have some help determining the size of your sample group using a slider.
It'll let you do a 50/50 A/B test of any sample size — although all other sample splits require a list of at least 1,000 recipients.
If you're testing something that doesn't have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size.
You'll need to let your test run long enough to obtain a substantial number of views. Otherwise, it will be hard to tell whether there was a statistically significant difference between variations.
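As a rough planning aid, you can estimate how long the test needs to run by dividing the sample size you need by the traffic you actually get. The numbers below are placeholders; swap in your own required sample size and daily traffic.

```python
import math

visitors_needed_per_variation = 3_839   # e.g., from a sample size calculator
variations = 2                          # control + challenger
daily_visitors_to_page = 600            # placeholder: your page's typical daily traffic

days_to_run = math.ceil(visitors_needed_per_variation * variations / daily_visitors_to_page)
print(f"Plan to run the test for at least {days_to_run} days")   # ~13 days in this example
```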
6. Decide how significant your results need to be.
Once you've picked your goal metric, think about how significant your results need to be to justify choosing one variation over another.
Statistical significance is a super important part of the A/B testing process that's often misunderstood. If you need a refresher, I recommend reading this blog post on statistical significance from a marketing standpoint.
The higher the percentage of your confidence level, the more sure you can be about your results. In most cases, you'll want a confidence level of 95% minimum, especially if the experiment was time-intensive.
However, sometimes it makes sense to use a lower confidence rate if you don't need the test to be as stringent.
Matt Rheault, a senior software engineer at HubSpot, thinks of statistical significance like placing a bet.
What odds are you comfortable placing a bet on? Saying, "I'm 80% sure this is the right design, and I'm willing to bet everything on it" is similar to running an A/B test to 80% significance and then declaring a winner.
Rheault also says you’ll likely want a higher confidence threshold when testing for something that only slightly improves conversion rate. Why? Because random variance is more likely to play a bigger role.
"An example where we could feel safer lowering our confidence threshold is an experiment that will likely improve conversion rate by 10% or more, such as a redesigned hero section," he explained.
"The takeaway here is that the more radical the change, the less scientific we need to be process-wise. The more specific the change (button color, microcopy, etc.), the more scientific we should be because the change is less likely to have a large and noticeable impact on conversion rate."
7. Make sure you're only running one test at a time on any campaign.
Testing more than one thing for a single campaign can complicate results.
For example, if you A/B test an email campaign that directs to a landing page while you’re A/B testing that landing page, how can you know which change caused the increase in leads?
During the A/B Test
Let's cover the steps to take during your A/B test.
8. Use an A/B testing tool.
To do an A/B test on your website or in an email, you'll need to use an A/B testing tool.
If you're a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails (learn how here), CTAs (learn how here), and landing pages (learn how here).
For non-HubSpot Enterprise customers, other options include Google Analytics, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.
9. Test both variations simultaneously.
Timing plays a significant role in your marketing campaign’s results, whether it's the time of day, day of the week, or month of the year.
If you were to run version A during one month and version B a month later, how would you know whether the performance change was caused by the different design or the different month?
When running A/B tests, you must run the two variations simultaneously. Otherwise, you may be left second-guessing your results.
The only exception is if you're testing timing, like finding the optimal times for sending emails.
Depending on what your business offers and who your subscribers are, the optimal time for subscriber engagement can vary significantly by industry and target market.
10. Give the A/B test enough time to produce useful data.
Again, you'll want to make sure that you let your test run long enough to obtain a substantial sample size. Otherwise, it'll be hard to tell whether the two variations had a statistically significant difference.
How long is long enough? Depending on your company and how you execute the A/B test, getting statistically significant results could happen in hours ... or days ... or weeks.
A big part of how long it takes to get statistically significant results is how much traffic you get — so if your business doesn't get a lot of traffic to your website, it'll take much longer to run an A/B test.
Read this blog post to learn more about sample size and timing.
11. Ask for feedback from real users.
A/B testing has a lot to do with quantitative data ... but that won't necessarily help you understand why people take certain actions over others. While you're running your A/B test, why not collect qualitative feedback from real users?
A survey or poll is one of the best ways to ask people for their opinions.
You might add an exit survey on your site that asks visitors why they didn't click on a certain CTA or one on your thank-you pages that asks visitors why they clicked a button or filled out a form.
For example, you might find that many people clicked on a CTA leading them to an ebook, but once they saw the price, they didn't convert.
That kind of information will give you a lot of insight into why your users behave in certain ways.
After the A/B Test
Finally, let's cover the steps to take after your A/B test.
12. Focus on your goal metric.
Again, although you'll be measuring multiple metrics, focus on that primary goal metric when you do your analysis.
For example, if you tested two variations of an email and chose leads as your primary metric, don’t get caught up on click-through rates.
You might see a high click-through rate and poor conversions, in which case you might choose the variation that had a lower click-through rate in the end.
13. Measure the significance of your results using our A/B testing calculator.
Now that you've determined which variation performs the best, it's time to determine whether your results are statistically significant. In other words, are they enough to justify a change?
To find out, you'll need to conduct a test of statistical significance. You could do that manually... or you could just plug in the results from your experiment to our free A/B testing calculator.
For each variation you tested, you'll be prompted to input the total number of tries, like emails sent or impressions seen. Then, enter the number of goals it completed — generally, you'll look at clicks, but this could also be other types of conversions.
The calculator will spit out your data's confidence level for the winning variation. Then, measure that number against the confidence threshold you chose earlier to determine statistical significance.
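If you’re curious what such a calculator is doing behind the scenes, the sketch below turns tries and goals into a confidence figure using a generic two-proportion z-test. This isn’t HubSpot’s exact formula, and the email counts are placeholders; it’s just meant to show how raw tries and conversions become a confidence percentage.

```python
from math import sqrt
from statistics import NormalDist

def confidence_level(goals_a, tries_a, goals_b, tries_b):
    """Approximate confidence (as a percentage) that the two variations truly
    differ, based on a two-sided two-proportion z-test."""
    rate_a, rate_b = goals_a / tries_a, goals_b / tries_b
    pooled = (goals_a + goals_b) / (tries_a + tries_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / tries_a + 1 / tries_b))
    z = abs(rate_a - rate_b) / std_err
    return (2 * NormalDist().cdf(z) - 1) * 100

# Placeholder inputs: 10,000 emails sent per variation, 820 vs. 940 clicks.
print(f"{confidence_level(820, 10_000, 940, 10_000):.1f}% confidence")
```

If the figure clears the threshold you set in step 6 (say, 95%), you can treat the winning variation as statistically significant.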
14. Take action based on your results.
If one variation is statistically better than the other, you have a winner. Complete your test by disabling the losing variation in your A/B testing tool.
If neither variation is statistically significant, the variable you tested didn't impact results, and you'll have to mark the test as inconclusive. In this case, stick with the original variation or run another test. You can use the data from the failed test to design a more informed iteration of your next test.
While A/B tests help you impact results on a case-by-case basis, you can also apply the lessons you learn from each test to future efforts.
For example, suppose you've conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates. In that case, consider using that tactic in more of your emails.
15. Plan your next A/B test.
The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — but don't stop there. There’s always room for more optimization.
You can even try conducting an A/B test on another feature of the same web page or email you just did a test on.
For example, if you just tested a headline on a landing page, why not do a new test on body copy? Or a color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.
You can use HubSpot’s A/B Test Tracking Kit to plan and organize your experiments.
How to Read A/B Testing Results
As a marketer, you know the value of automation. Given this, you likely use software that handles the A/B test calculations for you — a huge help. But, after the calculations are done, you need to know how to read your results. Let’s go over how.
1. Check your goal metric.
The first step in reading your A/B test results is looking at your goal metric, which is usually conversion rate.
After you’ve plugged your results into your A/B testing calculator, you’ll get a conversion rate for each version you’re testing, along with an indication of whether the difference between your variations is statistically significant.
2. Compare your conversion rates.
By looking at your results, you’ll likely be able to tell if one of your variations performed better than the other. However, the true test of success is whether your results are statistically significant.
For example, say variation A had a 16.04% conversion rate, variation B had a 16.02% conversion rate, and your chosen confidence level is 95%. Variation A has a higher conversion rate, but the difference is not statistically significant, meaning that choosing variation A won’t meaningfully improve your overall conversion rate.
3. Segment your audiences for further insights.
Regardless of significance, it's valuable to break down your results by audience segment to understand how each key area responded to your variations. Common variables for segmenting audiences are:
- Visitor type, or which version performed best for new visitors versus repeat visitors.
- Device type, or which version performed best on mobile versus desktop.
- Traffic source, or which version performed best based on where traffic to your two variations originated.
A/B Testing Examples
We’ve discussed how A/B tests are used in marketing and how to conduct one — but how do they actually look in practice?
As you might guess, we run many A/B tests to increase engagement and drive conversions across our platform. Here are five examples of A/B tests to inspire your own experiments.
The Ultimate Guide to Landing Page A/B Testing
In this post, we’ll cover the ABCs of A/B testing your landing pages. We’ll look at the benefits of testing, what to expect from it, what you should consider when deciding what to test, and how to put it all into practice right inside the Leadpages Drag & Drop Builder. By the end of the post, you should have a good understanding of the basic mechanics of A/B testing and feel prepared to start running your own tests with confidence.
So first up, a quick refresher on A/B testing.
What is landing page A/B testing and how does it work?
A/B testing (sometimes called “split testing”) is a marketing strategy that pits two slightly different variations of the same page against one another to see which one yields the highest conversion rate.
For example, while you’re building your landing page you might wonder how big your call-to-action (CTA) button should be. Or if a different headline would convert better. Or would an on-page countdown timer help or hinder your sign-up rate? For all these questions and others like them, following your intuition or web design best practices will only get you so far. To discover the truth of what really works, you need to approach it like a scientist: you need to test.
The way it works is pretty simple:
- Create a modified version of your landing page containing a single change—the “variable”—that you think could positively impact your conversion rate (a bigger CTA button, different headline, countdown timer, etc.).
- Randomly assign half your landing page visitors to the original version and the other half to the modified version (see the sketch after this list for one common way to implement the split).
- Let the test run until you have an adequate sample size and compare the data from both versions of the page to see if your changed landing page had a positive or negative impact on your conversion rate.
- If the change is positive, you adopt the modified version as your new, go-to landing page. If the change had a negative impact, continue using the original.
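To make the “randomly assign” step less abstract, here’s a minimal Python sketch of how many testing tools implement it in practice: each visitor ID is hashed into a bucket, which keeps the split roughly 50/50 while ensuring a returning visitor always sees the same version. The visitor IDs below are made-up placeholders.

```python
import hashlib

def assign_variation(visitor_id: str) -> str:
    """Deterministically bucket a visitor into 'A' (original) or 'B' (modified).
    Hashing keeps the split roughly 50/50 while showing a returning visitor
    the same version every time."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

for visitor in ("anon-1001", "anon-1002", "anon-1003"):   # placeholder IDs
    print(visitor, "->", assign_variation(visitor))
```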
It’s important to appreciate that A/B testing isn’t a one-and-done type of activity. It’s an ongoing process that involves making incremental changes to your page in a bid to fine-tune your campaign for maximum conversions. Every test you run builds upon the test that came before it. Even a negative result is useful in that it gives you a better idea of what won’t work when it comes to designing your next test.
You’ll often hear A/B testing being mentioned in the same breath as multivariate testing. The two methods are quite similar but have some important differences. While A/B testing compares two different versions (A and B) of a landing page by changing one variable at a time, multivariate testing instead compares multiple variables at a time.
Why do we do A/B testing?
There’s an old military saying that goes something like this: “No plan survives first contact with the enemy”. The idea is that no amount of planning can fully predict how things will pan out when your plan is eventually put into action.
The same basic logic applies to your landing pages (although we don’t advocate thinking of your visitors as enemies!). Even your best guess of what will make your visitors most likely to convert is, in the end, just a guess. There are likely factors you haven’t considered and what you thought would work might pan out differently in practice. That’s why roughly 60% of companies perform A/B tests on their landing pages.
The beauty of A/B testing is that it ties the success of your sales and marketing campaigns to hard data instead of mere guesswork. By continually testing your assumptions, you end up discovering exactly what drives your audience to take action so you can keep improving the conversion rate of your landing page.
With that in mind, here are some major benefits of A/B testing:
Enhanced user experience
When someone visits your landing page, chances are they have a specific goal in mind. Maybe they just came to learn more about your product or service, maybe they’ve already decided they want to buy from you, or maybe they’re curious about who you are and what you offer.
Whatever their goal, it’s important to make their experience on your page as frictionless as possible. Confusing copy, jarring color schemes, and hard-to-find sign-up buttons are all potential stumbling blocks that could detract from the user experience and, in turn, damage your conversion rate.
A/B testing helps you identify these kinds of problem areas so you can adapt and create a more free-flowing user experience.
Low-risk, high reward
Since A/B testing is an incremental process, with only single changes being made at each step, the risk of your conversion rate falling off a cliff during any given test is relatively minimal. And even if you do find that changing a certain variable causes a significant dip in your conversions, you can simply change it back and use that knowledge moving forward. And of course, if the change improves your conversion rate, then you’ve found something worth keeping!
Understand your audience better
The discoveries you make by running A/B tests will improve your understanding of what drives your target audience’s behavior. And the better you understand your target audience, the better your future guesses will be when it comes to improving your user experience and conversion rates even further.
You can also use your findings in other areas of your business to further tailor your content, products, and services to your audience’s preferences.
Lower bounce rates
Your bounce rate is the rate at which visitors land on your page and leave without exploring any other pages on your website. It goes without saying that the more pages they visit, the better chance there is they’ll convert. As you’re performing your landing page A/B test, see how the different variables affect your bounce rate and take that into consideration when choosing a winner.
Increase conversion rates
Even small changes can have a massive impact on your conversion rates—sometimes by as much as 300%. That’s why A/B testing is so important. By testing every element of your landing page and finding what drives the best results, you’ll eventually be left with a fully optimized landing page that converts at a crazy high rate.
More Revenue
A lot of businesses create new offers, products, and services in order to increase revenue. But since it takes a fair amount of time and money to generate new, high-quality traffic, it’s much more effective to get the most out of the traffic you already have. By using A/B testing to optimize your sales pages, you’ll increase your conversions which in turn will increase your revenue—without having to come up with any new offers.
Examples of landing page A/B testing success