
A/B Testing - Data-Driven Design and Development

Updated: May 30, 2023

This probably isn't the first time you've read about A/B testing; in fact, it may be something you're already doing with your emails and social media updates.

But what exactly is A/B testing?



A/B testing, also known as split testing or bucket testing, is a statistical technique for comparing two versions of a website or app to see which one yields better results. It works by randomly dividing the study population into two or more groups and giving each group an alternative version of the same "treatment." Pretty easy concept to grasp, right?
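To make the mechanics concrete, here is a minimal sketch in Python of how a user might be assigned to a bucket. The function and experiment names are hypothetical; real testing tools handle assignment for you, but the core idea is the same: hash a stable identifier so each user always sees the same version.

import hashlib

def assign_bucket(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant bucket.

    Hashing the user ID together with the experiment name gives every
    user a stable, roughly uniform assignment across variants.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: the same user always lands in the same bucket.
print(assign_bucket("user-123", "homepage-cta"))  # prints "A" or "B"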


Despite the widespread use of A/B testing across many fields, many professionals still misunderstand the concept. The result? Decisions with far-reaching consequences get made on flawed data. Fortunately, we have Robert "Bob" Sarnack, a renowned Agile Coach and QA/QC Analyst, to walk us through the proper usage and advantages of this approach, and how it can steer the development process toward the best outcomes for our key performance metrics.


Why should we A/B Test?


There are several benefits to A/B testing. Running tests that directly compare a variant against the existing experience lets us ask focused questions about upcoming changes to our website or app and then collect data on the results. By eliminating assumptions and facilitating data-informed decisions, testing helps businesses move the discourse away from "what we think" and toward "what we know." Every adjustment we make can be monitored for its influence on key performance indicators (KPIs), ensuring that our effort is well spent.
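As a toy illustration of "collecting data on the results," here is a minimal Python sketch that turns raw per-user event logs into a per-variant conversion rate, the kind of KPI a test would monitor. The record fields and values are made up for illustration.

from collections import defaultdict

# Hypothetical event log: one record per user in the experiment.
events = [
    {"user": "u1", "bucket": "A", "converted": True},
    {"user": "u2", "bucket": "B", "converted": False},
    {"user": "u3", "bucket": "B", "converted": True},
]

totals = defaultdict(lambda: {"users": 0, "conversions": 0})
for e in events:
    totals[e["bucket"]]["users"] += 1
    totals[e["bucket"]]["conversions"] += int(e["converted"])

for bucket, t in sorted(totals.items()):
    rate = t["conversions"] / t["users"]
    print(f"Variant {bucket}: {rate:.1%} conversion ({t['users']} users)")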


It also allows us to improve user engagement: we gather information about the outcomes and form hypotheses about how and why particular features or experiences affect users' actions.


Three Common Mistakes to Avoid

  1. Basing your test on an invalid A/B testing hypothesis - the hypothesis is formulated before the experiment is run, and it drives everything that comes after: what needs to be changed, why it needs to change, and the expected outcome. If the initial hypothesis is incorrect, the likelihood of a successful test decreases.

  2. Implementing the same alteration from somebody else's A/B testing results - to get the most out of your business, don't try to mimic the success of others. Apply what you've learned from case studies to develop an effective A/B testing plan for YOUR company. No two websites are the same: they won't have the same traffic, intended audience, or preferred method of optimization, so what works for them may not work for you. Don't use the successes of others as a standard for your own.

  3. Ignoring statistical significance - A/B testing is doomed to fail if we ignore statistical significance and rely instead on intuition or bias when forming hypotheses or setting interim benchmarks. If you go with your gut instead of doing the calculations, you will likely cut the test short before it has produced statistically significant data, and the results will be unreliable. Whether a test looks like it is succeeding or failing, we must let it run its full cycle to reach statistical significance; a minimal sketch of a significance test follows this list.
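To show what "reaching statistical significance" means in practice, here is a minimal sketch of a two-proportion z-test using only Python's standard library. The conversion counts are made up for illustration; in a real test you would also fix your significance threshold and sample size before starting.

import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    Returns the z statistic and p-value; a p-value below the chosen
    threshold (commonly 0.05) is what "statistically significant" means.
    """
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical results: 120/2400 conversions (A) vs. 156/2400 (B).
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is about 0.026, significant at 0.05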

Missing something?


So, did you miss something in this article? Check out the recorded session to learn more.


Key topics:

  • What is A/B Testing?

  • How does it work?

  • Why should we A/B Test?

  • What can we A/B Test?

  • Types of A/B Tests

  • A/B Testing Process

  • A/B Testing Calendar

  • Points to Scaling A/B Testing

  • Mistakes to Avoid

  • Challenges of A/B Testing

  • A/B Testing Iterative Cycle

  • A/B Testing and SEO
