Conversion rate optimization is a challenge most companies face nowadays: pretty much every company has some kind of conversion it hopes to increase. One of the most widely used methods to achieve this is A/B testing – or simply AB testing, as most of us call it.
AB testing means comparing multiple versions of something to figure out which performs better. One of the most challenging aspects of AB testing is interpreting the results to make decisions. That's where statistics engines come in.
These engines use different approaches to interpret the results and yield meaningful answers to questions such as "which variant is best?" and "how much uplift does it deliver?".
There are two popular approaches to interpreting AB testing results: frequentist and Bayesian inference. This series of posts presents the differences between them and gives a broad overview of the topic.
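To make the contrast concrete before diving in, here is a minimal sketch of how each approach might summarize the same experiment. The visitor and conversion counts are made-up numbers for illustration, and the specific methods shown (a two-proportion z-test for the frequentist side, Beta posteriors with uniform priors for the Bayesian side) are common textbook choices, not necessarily what any particular engine implements:

```python
import math
import random

# Hypothetical experiment data (made-up numbers for illustration):
# variant A: 1000 visitors, 120 conversions; variant B: 1000 visitors, 140 conversions.
visitors_a, conversions_a = 1000, 120
visitors_b, conversions_b = 1000, 140

# --- Frequentist view: two-proportion z-test ---
p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
# two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# --- Bayesian view: Beta posteriors with uniform Beta(1, 1) priors ---
# Monte Carlo estimate of the probability that B converts better than A.
random.seed(42)
samples = 100_000
wins = sum(
    random.betavariate(1 + conversions_b, 1 + visitors_b - conversions_b)
    > random.betavariate(1 + conversions_a, 1 + visitors_a - conversions_a)
    for _ in range(samples)
)
prob_b_beats_a = wins / samples

print(f"Frequentist p-value (two-sided): {p_value:.3f}")
print(f"Bayesian P(B > A): {prob_b_beats_a:.3f}")
```

Note how the two outputs answer different questions: the p-value says how surprising the data would be if the variants were identical, while the Bayesian figure directly estimates the chance that B is the better variant – a distinction the first post explores in depth.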
The first post highlights the differences between the Bayesian and frequentist approaches for AB testing analysis. The second post gives an in-depth overview of the Bayesian approach, our choice for AB testing analysis. Last but not least, the third post explains how Croct's experiment engine works.
- Bayesian or frequentist: What approach is better for AB testing?
- A deep overview of the Bayesian approach for AB testing
- How Croct's AB testing engine works
Have a nice read!