
What is A/B testing in digital marketing?
A/B testing (also known as split testing) is an important part of product experimentation, as it helps you make informed decisions so you can optimize your publication and content. In this guide to getting started with A/B tests, you’ll learn the basics of this essential marketing experiment.

A/B tests can be used for a variety of product experiments. They’re sometimes called “split tests” because you split your traffic 50/50 to test two versions of one piece of content with changes to a single variable. By testing these two versions against each other, you can compare their performance, optimize your content and increase your ROI.
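Curious what that 50/50 split looks like in practice? Here’s a minimal sketch in Python (not tied to any particular ESP or testing tool) of how a tool might bucket each reader into version A or B. The assign_variant function and the experiment name are purely illustrative.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-color-test") -> str:
    """Deterministically assign a reader to variant A or B (a 50/50 split)."""
    # Hashing the reader ID together with the experiment name means the same
    # reader always sees the same version, while different experiments split
    # traffic independently of one another.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# A handful of visitors land roughly evenly in each bucket.
for uid in ["reader-101", "reader-102", "reader-103", "reader-104"]:
    print(uid, assign_variant(uid))
```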
A/B testing fundamentals: Understanding the basics
What elements can you A/B test?
A/B testing is very flexible. Here are some key elements you should consider testing:
- Images and design: How do your readers respond to different versions of an image or design element? Test fonts and colors, images, graphics and icons. (👉Read more: Newsletter images: Best practices to follow)
- Calls to action: How do your readers respond to different calls to action? Use data and analytics to see which CTA generates more engagement.
- Content: What content is your audience most receptive to? Test subject lines, headers and subheaders, titles, copy and more.
- Placement: How do your readers engage with the content depending on where it’s placed? For example, do featured stories get read more if they’re linked at the bottom of an article, or to the side? Do different ad zones perform better than others?
When shouldn’t you run an A/B test?
Because A/B testing is so flexible, sometimes you end up running tests when you shouldn’t. Here are some situations where A/B testing is not the best option:
- If your audience or average number of monthly website users is less than 1,000, you should consider waiting to test. A sample size that is too small can skew your results, rendering them inconclusive or even invalid. Why is this? A small sample isn’t necessarily representative of your overall audience.
- You’ll be wasting your time if you test a change that is a no-brainer; this includes industry best practices or clear standards. (👉Read more: Indiegraf’s Guide to Newsletter Best Practices)
- If it’s broken, just fix it. You don’t need to A/B test something that isn’t working as intended. If your website has broken links or processes that dead-end or frustrate readers, don’t bother with A/B testing.
- If you’re adding something your readers have asked for, you likely don’t need to run an A/B test.
Steps in building an effective A/B split test

Step 1: Define the problem and determine a hypothesis
Before you set up and run your first A/B test, you will want to define a problem and then develop a hypothesis. Make sure your hypothesis is aligned with your publication’s business and editorial goals. A strong hypothesis should include three main parts: the variable, the desired result, and the rationale behind it.
For example, if you’re designing house ads you might want to test the color of the CTA button. Your hypothesis could be, “If we change the button color on our house ad, then more people will click on it, because the yellow button has higher contrast than the green one.”
Step 2: Define sample size and test duration
Now that you know what you want to test, you need to determine how long to run the test and how many readers (aka your sample size) you need to reach for your results to be statistically significant. Calculating your test duration and sample size can be tricky; that’s a lot of math! But don’t worry — there are plenty of free calculators available that you can use (like this one here), and there’s a quick sketch of the underlying calculation at the end of this step.
What is statistical significance in A/B testing?
Investopedia defines statistical significance as a determination that a relationship between two or more variables is caused by something other than chance. It provides evidence about the plausibility of the null hypothesis, which assumes that nothing more than random chance is at work in the data.
As we mentioned earlier, our recommendation is to wait until you have an audience size of at least 1,000 readers or monthly website users before you start to conduct A/B tests. While this isn’t a firm rule, if your audience is smaller than this, you may have a difficult time getting enough responses to reach statistical significance.
There is no set duration for how long an A/B test has to run, but you should expect to commit a minimum of two weeks to each test. This can vary depending on how big the change is: the smaller and less obvious the change, the longer you may need to run the test.
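To make that math a little more concrete, here’s a rough sketch of the kind of calculation those free calculators perform, using the standard sample-size formula for comparing two proportions. The click rates, confidence level, statistical power and daily traffic figure below are hypothetical examples, not benchmarks from this guide.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, expected_rate, alpha=0.10, power=0.80):
    """Rough number of readers per variant needed to detect a lift from
    baseline_rate to expected_rate in a two-sided two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 90% confidence level
    z_power = NormalDist().inv_cdf(power)          # 80% statistical power
    p_bar = (baseline_rate + expected_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(baseline_rate * (1 - baseline_rate)
                                  + expected_rate * (1 - expected_rate))) ** 2
    return ceil(numerator / (baseline_rate - expected_rate) ** 2)

# Example: the current CTA click rate is 3% and we hope the new button reaches 4%.
n = sample_size_per_variant(0.03, 0.04)
daily_readers_per_variant = 250  # hypothetical traffic figure
print(f"~{n} readers per variant, roughly {ceil(n / daily_readers_per_variant)} days")
```

With these example numbers, the sketch suggests roughly 4,200 readers per variant, which at 250 readers a day works out to a little over two weeks, in line with the minimum test duration above.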
Step 3: Analyze A/B test results
Once you have finished running your A/B test, refer back to your hypothesis. Based on the results, did you prove or disprove it? If you ran your split test through an ESP or through A/B testing software, check whether a winner has been declared; many platforms do a basic analysis for you (and a quick sketch of that check follows the list below). Typically, a winner will be declared if these two conditions are met:
- The A/B test has reached a significance level (or confidence level) of 90 percent or higher, and
- The minimum test duration has passed
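If your platform doesn’t declare a winner for you, or you simply want to sanity-check its call, here’s a minimal sketch of the kind of check most tools run behind the scenes: a two-sided, two-proportion z-test. The click counts borrow the 28-versus-42 example from the uplift metric below, and the 2,000 impressions per variant are a hypothetical figure.

```python
from math import sqrt
from statistics import NormalDist

def confidence_level(clicks_a, shown_a, clicks_b, shown_b):
    """Two-sided two-proportion z-test: returns the confidence (in percent) that
    the difference between the two variants is not just random chance."""
    rate_a, rate_b = clicks_a / shown_a, clicks_b / shown_b
    pooled = (clicks_a + clicks_b) / (shown_a + shown_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
    z = (rate_b - rate_a) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return (1 - p_value) * 100

confidence = confidence_level(clicks_a=28, shown_a=2000, clicks_b=42, shown_b=2000)
verdict = "winner declared" if confidence >= 90 else "keep testing"
print(f"Confidence: {confidence:.1f}% -> {verdict}")  # about 91%, so B wins here
```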

You should also review the following metrics (if applicable):
- Sample size: How many users were included in your A/B test overall, and how large was each segment?
- Impressions: How many users saw your A/B test?
- Clicks and click-through rate: Of the users shown each variant, how many clicked on it? Are readers more likely to click on one variant over the other?
- Conversions and conversion rate: Of the users shown each variant, how many went on to convert into paying customers?
- Bounce rate: Of the users who land on the page being tested, how many leave your website without visiting another page?
- Uplift: The relative difference in performance between the control and the challenger variation. For example, if the control received 28 clicks and the challenger received 42, the uplift is 50 percent. (The sketch below shows how these metrics are calculated.)
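For reference, here’s a small sketch showing how the rate metrics and uplift above are calculated from raw counts. The 2,000 impressions and five conversions per variant are hypothetical; the click counts come from the uplift example above.

```python
def click_through_rate(clicks, impressions):
    """Share of users shown a variant who clicked on it."""
    return clicks / impressions

def conversion_rate(conversions, impressions):
    """Share of users shown a variant who went on to convert."""
    return conversions / impressions

def uplift(control_value, challenger_value):
    """Relative improvement of the challenger over the control."""
    return (challenger_value - control_value) / control_value

print(f"Uplift: {uplift(28, 42):.0%}")                        # 28 vs. 42 clicks -> 50%
print(f"Control CTR: {click_through_rate(28, 2000):.2%}")     # -> 1.40%
print(f"Challenger CTR: {click_through_rate(42, 2000):.2%}")  # -> 2.10%
print(f"Challenger conversion rate: {conversion_rate(5, 2000):.2%}")  # -> 0.25%
```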
Remember, there’s no guarantee that your hypothesis will result in a winning test, no matter how well you research it.
Dealing with inconclusive A/B test results
Sometimes A/B test results will come back as inconclusive — and that’s okay! An inconclusive result happens when the results of your test are too close to determine a clear winner. Don’t get discouraged: revise your hypothesis or your variant and try again, making changes based on the data from each experiment and continuing to test until you find your ideal outcome.

Key takeaways
- Pick a single variable to test. Don’t change more than one variable between your two versions.
- Align your hypothesis with your business goals. For example, if you’re trying to increase reader revenue, then A/B testing different calls to action might be a great option.
- Always run your variants simultaneously and ensure weighting, prioritization, etc. are all identical.
- Record your results using a spreadsheet or an online tool. This will allow you to analyze your data and determine the success of each A/B test (a minimal logging example follows this list).
- Don’t get discouraged if results are inconclusive at first. Revise your hypothesis and try again.
- Make changes to your A/B test based on data from each experiment, and continue to test different variations to find your ideal results.
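On the record-keeping point above, here’s a minimal sketch of logging results to a CSV file that any spreadsheet tool can open. The file name, column names and figures are all hypothetical.

```python
import csv
import os
from datetime import date

def log_result(path, experiment, variant, impressions, clicks, conversions):
    """Append one variant's results to a CSV log, adding a header row if the file is new."""
    fieldnames = ["date", "experiment", "variant", "impressions", "clicks", "conversions"]
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "experiment": experiment,
            "variant": variant,
            "impressions": impressions,
            "clicks": clicks,
            "conversions": conversions,
        })

# Hypothetical results for the button-color test discussed earlier in this guide.
log_result("ab_tests.csv", "cta-color-test", "A", 2000, 28, 3)
log_result("ab_tests.csv", "cta-color-test", "B", 2000, 42, 5)
```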
Indiegraf has all the solutions to help meet your outlet’s digital advertising needs. We offer Indie Ads Manager, an integrated platform designed to streamline ad fulfillment, as well as strategic advertising and sponsorship planning from our Indiegraf Experts team. If you’re interested in taking your publication’s advertising program to the next level, let’s chat! We are happy to help.


