Support Forums

Split testing Under the Hood?

I was wondering how Netlify’s split testing works under the hood?
It’s using Git version control with different branches, but how is it running the branches on the server?

Is it using load balancing or something else?

Hi @JoshuaWalker we don’t have a dedicated area for split testing, as it doesn’t come up that often, but I am moving your question to #deploying-building for now :slight_smile:

At a high level, we have X copies of your site (1 per branch in the split test) served from our CDN. The choice of which version to serve, as well as the mapping of “been here before, should serve the same version”, happens at the CDN edge, at browse time, without any “round trip” to the backing store to decide which version to serve.

How we choose which version to serve is:

  • for new visitors (which means “no nf_ab cookie is set for this site” - so for instance curl is always a new visitor unless you set the cookie to a valid value explicitly), we use the percentages you’ve configured to offer a statistically random version of the site. As we serve them a version, we set that nf_ab cookie to a real number between 0 and 1 to indicate what the random selection was, and it maps to those percentages in the test (so a value of .1 would map to the 0-10% portion of the test)
  • for repeat visitors, we see the nf_ab cookie in the HTTP request, and serve them the appropriate version of the site (which will be the one they saw before, unless you’ve changed your settings in the meantime).
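The selection logic above can be sketched roughly as follows. This is a hypothetical illustration of the bucketing scheme, not Netlify’s actual edge code, and the branch names and percentages are made up:

```python
import random

def pick_branch(split, cookie=None):
    """Pick a branch for an incoming request.

    split: ordered list of (branch, share) pairs whose shares sum to 1.0.
    cookie: the numeric nf_ab value from a repeat visitor, or None for a
    new visitor. Returns (branch, cookie_value); cookie_value is what
    would be written into nf_ab so the visitor keeps seeing that branch.
    """
    # New visitors get a fresh random number; repeat visitors reuse theirs.
    value = random.random() if cookie is None else cookie
    cumulative = 0.0
    for branch, share in split:
        cumulative += share
        if value < cumulative:
            return branch, value
    return split[-1][0], value  # guard against float rounding at 1.0

split = [("main", 0.5), ("feature-a", 0.5)]
branch, cookie = pick_branch(split, cookie=0.1)  # 0.1 lands in main's 0-50% slice
```

So an nf_ab of .1 maps to the 0–10% portion of the test exactly as described above, and the same cookie value always resolves to the same branch as long as the percentages haven’t changed.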

There are additionally some major caveats I wanted to call out:

  • if you proxy to us, this will typically break branch affinity and lead to trouble. More details in this article
  • if you make API calls via methods like XMLHttpRequest, or from 3rd party services, make sure you forward and respect the nf_ab cookie when you do this. If a request arrives here without it set, we’ll pick a branch as described above, even if it was an API call for a specific branch test the user is already viewing; we can’t tell that unless the cookie is set at REQUEST time for EVERY request.
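For instance, a script or server-side service making a request on a visitor’s behalf would need to copy the visitor’s nf_ab cookie into its own request. A minimal sketch, assuming a placeholder URL and a plain cookie dict:

```python
from urllib.request import Request

def with_branch_affinity(url, incoming_cookies):
    """Build a request that preserves the visitor's split-test assignment.

    incoming_cookies: dict of cookies from the visitor's original request.
    Forwarding nf_ab means the CDN serves the same branch the visitor is
    already on, instead of re-assigning one at random.
    """
    req = Request(url)
    if "nf_ab" in incoming_cookies:
        req.add_header("Cookie", f"nf_ab={incoming_cookies['nf_ab']}")
    return req

req = with_branch_affinity("https://example.netlify.app/api/data",
                           {"nf_ab": "0.42"})
```

Without that forwarded header, the request counts as a new visitor and may hit a different branch than the page that issued it.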

Not sure if that’s what you were looking for or not, but happy to go deeper into whatever follow-up questions you might have.


That’s really helpful, thank you.

Currently you can only run 4 different branches? That would allow you to run two different tests at the same time, since 2^2 = 4.

There is only ever one test on a site at a time, using the number of branches you’ve specified. The test goes across every branch in the percentages specified. Our UI limits you to 4 because things do get a bit confusing to display and slice/dice the percentages with the sliders beyond that, but if you needed e.g. 6 for some reason, you could ping us here and we could guide you through using our API to set more branches up.

Another cool trick, which I didn’t mention before, is that you can set up a 0/100% split test. Why would that make any sense? You’d have the 0% branch be a beta, and you can explicitly set an nf_ab cookie to 0 to allow people to use your beta without “accidentally” assigning it to anyone who wasn’t an intended tester.

I understand what you mean.

Sorry, I should have explained.
To test 2 changes on the website, it would be ideal to run both changes simultaneously, to check for conflicts between the tests and to speed up the A/B testing process, provided you get enough visitors to reach statistical significance.

I can achieve this by creating another git branch and merging the other tests in, to create each variant.

Branch one - No Tests
Branch two - Test A
Branch three - Test B
Branch four - Test A & B

I would be really interested in the API, because to run 3 experiments you would need 2^3 = 8 variants/branches.
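A quick sketch of why the branch count grows as 2^n: each combination of enabled tests needs its own Git branch, so enumerating them looks like this (branch names here are illustrative, not a Netlify convention):

```python
from itertools import product

def variant_branches(tests):
    """List every variant branch needed to run the given tests together.

    tests: names of independent on/off changes, e.g. ["a", "b", "c"].
    Each combination of enabled tests gets its own branch.
    """
    branches = []
    for flags in product([False, True], repeat=len(tests)):
        enabled = [t for t, on in zip(tests, flags) if on]
        branches.append("test-" + "-".join(enabled) if enabled else "baseline")
    return branches

print(len(variant_branches(["a", "b", "c"])))  # 8 branches for 3 tests
```

Two tests give exactly the four branches listed above (baseline, A, B, A & B); three tests give eight.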

Sure, that workflow makes sense. I don’t have specific API instructions, though I could dig some up if you can’t figure it out using the workflow described here:

If I recall from the last time I worked on this about a year ago, it’s something like:

(first, deploy all branches you’ll use, then):

  • make a GET call to fetch the id of the split test for the site
  • use that id to adjust the settings in a separate call

Hi y’all, I haven’t tried split testing myself so far, so I don’t know for sure, but this post’s video suggests that the nf_ab cookie contains the branch name as its value instead of a number?! The post is pretty outdated though… :thinking:

Maybe this could be added to the docs as it’s a pretty cool feature :star_struck:

If you do set the nf_ab cookie to a branch name that is part of your split test - even if that branch is set to 0% in the split test - folks will reliably be served that branch. It’s not a typical use case, but it can help “hardwire” a request to a branch for testing (we use it for opt-in betas):

  1. set up a 0/100% test (beta/production)
  2. have visitors click a button on a page on that site to set the cookie to a specific beta branch, one that is only served to people with the opt-in cookie set
  3. optionally have that page show a button to opt back out by unsetting the nf_ab cookie.
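Putting the whole thread together, the edge’s cookie handling could be sketched like this: a numeric nf_ab value maps into the percentage buckets, while a branch-name value pins the visitor to that branch even at 0%. All names here are illustrative, not Netlify’s real code:

```python
def serve(cookie, split):
    """Decide which branch to serve for a given nf_ab cookie value.

    split: ordered (branch, share) pairs, e.g. a 0/100 beta/production test.
    A branch name pins the request to that branch, even at 0%; a numeric
    string maps into the percentage buckets as usual.
    """
    names = [branch for branch, _ in split]
    if cookie in names:
        return cookie  # opt-in: "hardwired" to the named branch
    value = float(cookie)
    cumulative = 0.0
    for branch, share in split:
        cumulative += share
        if value < cumulative:
            return branch
    return names[-1]  # guard against float rounding at 1.0

beta_test = [("beta", 0.0), ("production", 1.0)]
serve("beta", beta_test)  # opted-in visitor gets the 0% beta branch
serve("0.3", beta_test)   # everyone else lands in production
```

This is why the 0/100% trick works: no random draw can ever land in a 0% bucket, so the beta branch is reachable only by the explicit cookie.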