Statistics Lesson 43 – A/B Testing | Dataplexa

A/B Testing with Proportions

In real-world decision making, organizations constantly test ideas:

  • Which website design converts better?
  • Which email subject line gets more clicks?
  • Which product version performs better?

A/B testing is a statistical framework used to answer these questions objectively.


What Is A/B Testing?

A/B testing compares two versions of something:

  • Version A (control)
  • Version B (variant)

Users are randomly assigned to one of the two versions, and outcomes are measured.

The goal is to determine whether the observed difference is statistically significant or just due to chance.
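In practice, random assignment is often implemented by hashing a stable user identifier into a bucket, so each user always sees the same version across visits. A minimal sketch (the function name and the 50/50 split are illustrative assumptions, not a specific platform's API):

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to version A or B (50/50 split)."""
    # Hashing the id makes the assignment stable across sessions
    # and spreads users evenly between the two groups.
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always lands in the same group.
print(assign_variant("user-1001"), assign_variant("user-1001"))
```

Because the assignment depends only on the id, re-running the experiment code never shuffles users between groups mid-test.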


Why Use Proportions?

In many A/B tests, outcomes are binary:

  • Click / No Click
  • Buy / Not Buy
  • Sign Up / Not Sign Up

Such outcomes are naturally analyzed using proportions.


Real-World Scenario

An e-commerce company tests two checkout button designs.

  • Version A: Old design
  • Version B: New design

They measure the proportion of users who complete a purchase.


Collected Data

Group   Visitors   Purchases   Conversion Rate
A       1,000      120         0.12
B       1,000      150         0.15

Defining the Hypotheses

Hypothesis   Meaning
H₀           Conversion rates are equal (p_A = p_B)
H₁           Conversion rates are different (p_A ≠ p_B)

This is a two-proportion hypothesis test.


Key Idea Behind the Test

We compare the difference between sample proportions to what we would expect if there were no real difference.

If the observed difference is too large to be explained by chance, we reject the null hypothesis.
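This comparison is usually formalized with a pooled z-statistic: pool the successes from both groups to estimate the common proportion under H₀, then measure the observed gap in standard-error units.

```latex
\hat{p} = \frac{x_A + x_B}{n_A + n_B},
\qquad
z = \frac{\hat{p}_B - \hat{p}_A}
         {\sqrt{\hat{p}\,(1-\hat{p})\left(\dfrac{1}{n_A} + \dfrac{1}{n_B}\right)}}
```

Here x and n are the success counts and sample sizes, and large |z| means the gap is unlikely under chance alone.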


Statistical Test Used

The standard test for A/B testing with proportions is:

Two-Proportion Z-Test

This test assumes:

  • Large sample sizes (as a rule of thumb, at least about 10 successes and 10 failures in each group)
  • Independent observations
  • Binary outcomes
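Applied to the checkout data above, the pooled two-proportion z-test can be sketched with only the Python standard library (the variable names are my own, not from any particular package):

```python
from math import sqrt, erf

# Observed data from the table above.
x_a, n_a = 120, 1000   # purchases, visitors for Version A
x_b, n_b = 150, 1000   # purchases, visitors for Version B

p_a, p_b = x_a / n_a, x_b / n_b
p_pool = (x_a + x_b) / (n_a + n_b)          # pooled proportion under H0

# Pooled standard error of the difference.
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
p_value = 2 * (1 - phi(abs(z)))

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
```

Note that on this particular data the p-value lands almost exactly on the conventional 0.05 boundary, a reminder that borderline results deserve extra caution.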

Interpreting the Result

Suppose the test returns:

  • p-value = 0.03
  • α = 0.05

Since p-value < α:

We reject the null hypothesis.

There is evidence of a real difference between the versions; since Version B's observed conversion rate is higher, the data favor Version B.


Business Interpretation

Statistical significance answers:

“Is the difference real?”

Business teams must also ask:

  • Is the improvement practically meaningful?
  • Does it justify implementation cost?

Statistics informs decisions — it does not replace judgment.
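One way to connect the statistics to the business question is a confidence interval for the difference in conversion rates: its width shows how large the lift plausibly is. A sketch using the table's numbers and the standard unpooled formula (used for intervals, as opposed to the pooled formula used for the test):

```python
from math import sqrt

x_a, n_a = 120, 1000
x_b, n_b = 150, 1000
p_a, p_b = x_a / n_a, x_b / n_b

# Unpooled standard error for the difference in proportions.
se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
z_crit = 1.96  # ~95% confidence
diff = p_b - p_a
low, high = diff - z_crit * se, diff + z_crit * se

print(f"95% CI for the lift: ({low:.4f}, {high:.4f})")
```

The interval barely excludes zero: the true lift could be a tiny fraction of a percentage point or nearly six points, and that range, not the p-value alone, is what should be weighed against implementation cost.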


Common Pitfalls in A/B Testing

  • Stopping the test too early
  • Running multiple tests without correction
  • Ignoring random assignment
  • Confusing correlation with causation

Quick Check

Why are proportions used in many A/B tests?


Practice Quiz

Question 1:
What statistical test is commonly used for A/B testing with proportions?


Question 2:
Does statistical significance guarantee business success?


Question 3:
Why is random assignment important?


Mini Practice

A marketing team tests two email subject lines.

  • Email A open rate: 18%
  • Email B open rate: 22%

What statistical question should be answered before choosing Email B?


What’s Next

In the next lesson, we will work on a second mini project: Customer Analytics with Regression, bringing together modeling and interpretation.