Source: netflixtechblog.com

Building confidence in a decision

Category: Business, Data

Martin Tingley (https://www.linkedin.com/in/martintingley/) with Wenjing Zheng (https://www.linkedin.com/in/wenjing-zheng/), Simon Ejdemyr (https://www.linkedin.com/in/simon-ejdemyr-22b920123/), Stephanie Lane (https://www.linkedin.com/in/stephanielane1/), Michael Lindon (https://www.linkedin.com/in/michaelslindon/), and Colin McFarland (https://www.linkedin.com/in/mcfrl/)

This is the fifth post in a multi-part series on how Netflix uses A/B tests to inform decisions and continuously innovate on our products.

The unpleasant reality about A/B testing is that no test result is a certain reflection of the underlying truth.

As discussed in https://netflixtechblog.com/interpreting-a-b-test-results-false-positives-and-statistical-significance-c1522d0db27a (False positives and statistical significance), by convention we run experiments at a 5% false positive rate.
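To make that 5% convention concrete, here is a minimal simulation sketch (not from the original post; the sample sizes, seed, and use of a two-sample t-test are illustrative assumptions). It runs many "A/A tests", where control and treatment come from the same distribution, and shows that roughly 5% of them are flagged as significant at alpha = 0.05 even though there is no true effect.

```python
# Illustrative A/A simulation: with no true effect, a test at alpha = 0.05
# declares a "significant" difference in roughly 5% of experiments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05            # conventional false positive rate
n_experiments = 10_000  # number of simulated A/A tests
n_per_group = 1_000     # units per group (illustrative)

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the same distribution, so any detected
    # "effect" is, by construction, a false positive.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(control, treatment)
    if p_value < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_experiments:.3f}")  # ~0.05
```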

Another challenge in interpreting A/B test results is reasoning through the movements of multiple metrics: the primary decision metric as well as secondary metrics; a rough sense of the scale of this problem is sketched below.
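As a rough illustration of why this is hard (an assumption for exposition, not an analysis from the post): if each metric is tested at a 5% false positive rate and the metrics were independent, the chance that at least one of them moves "significantly" by chance alone grows quickly with the number of metrics examined.

```python
# Illustrative calculation: probability of at least one false positive when
# several independent metrics are each tested at alpha = 0.05.
alpha = 0.05
for n_metrics in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** n_metrics
    print(f"{n_metrics:>2} metrics -> P(at least one false positive) = {p_at_least_one:.2f}")
```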
