From 49e6323a1a231bbeaa58ad341ca52f33b8d204b7 Mon Sep 17 00:00:00 2001
From: Thomas Heartman
Date: Fri, 10 Dec 2021 13:53:49 +0100
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: sighphyre
---
 website/docs/concepts/a-b-testing.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/docs/concepts/a-b-testing.md b/website/docs/concepts/a-b-testing.md
index 25e02b8831..1b893d3863 100644
--- a/website/docs/concepts/a-b-testing.md
+++ b/website/docs/concepts/a-b-testing.md
@@ -33,7 +33,7 @@ The simplest A/B experiments use a control group and a single treatment group, b
 
 ### Potential pitfalls
 
-A thing to keep in mind when running experiments like this or in other cases where you're optimizing for a single metric is whether this is damaging to certain other metrics. Does more sign-ups also lead to more people (relatively) cancelling their membership? Does it decrease engagement with other parts of your product?
+A thing to keep in mind when running experiments like this or in other cases where you're optimizing for a single metric is whether this is damaging to certain other metrics. Do more sign-ups also lead to more people (relatively) cancelling their membership? Does it decrease engagement with other parts of your product?
 
 Don't do yourself a disservice by chasing one metric above all else. Keep an eye on other metrics at the same time and see if they are affected — always maintain a holistic view of things.