Split Testing Under the Hood?

I was wondering how Netlify’s split testing works under the hood?
It’s using Git version control with different branches, but how is it running the branches on the server?

Is it using load balancing or something else?

Hi @JoshuaWalker, we don’t have a dedicated area for split testing, as it doesn’t come up that often, but I am moving your question to #deploying-building for now :slight_smile:

At a high level, we have X copies of your site (1 per branch in the split test) served from our CDN. The choice of which version to serve, as well as the “been here before, should serve the same version” mapping, happens at the CDN edge at browse time, without any “round trip” to the backing store to decide which version to serve.

How we choose which version to serve is:

  • for new visitors (which means “no nf_ab cookie is set for this site” - so for instance curl is always a new visitor unless you set the cookie to a valid value explicitly), we use the percentages you’ve configured to serve a randomly selected version of the site. As we serve them a version, we set that nf_ab cookie to a real number between 0 and 1 to record what the random selection was; it maps onto those percentages in the test (so a value of .1 would fall in the 0-10% portion of the test). There’s a small sketch of this mapping just after this list.
  • for repeat visitors, we see the nf_ab cookie in the HTTP request, and serve them the appropriate version of the site (which will be the one they saw before, unless you’ve changed your settings in the meantime).
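
To make that mapping a bit more concrete, here’s a rough sketch in TypeScript. It’s my own illustration for this thread, not our actual edge code, and the branch names and percentages are made up:

type BranchSplit = { branch: string; percent: number };

// Map an nf_ab value in [0, 1) onto the configured percentages.
// The ordering of branches within the range is an internal detail.
function pickBranch(cookieValue: number, splits: BranchSplit[]): string {
  let upperBound = 0;
  for (const { branch, percent } of splits) {
    upperBound += percent / 100;
    if (cookieValue < upperBound) return branch;
  }
  // Guard against floating-point rounding at the very top of the range.
  return splits[splits.length - 1].branch;
}

// New visitor: no nf_ab cookie yet, so pick a random value and set the cookie.
// Repeat visitor: reuse the value sent back in the nf_ab cookie instead.
const nfAb = Math.random();
const branchToServe = pickBranch(nfAb, [
  { branch: 'main', percent: 90 },
  { branch: 'feature-branch', percent: 10 },
]);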

There are additionally some major caveats I wanted to call out:

  • if you proxy to us, this will typically break the branch affinity and will lead to trouble. More details in this article
  • if you make API calls via methods like XMLHttpRequest, or from 3rd-party services, make sure you forward and respect the nf_ab cookie when you do so - if a request arrives here without it set, we’ll pick a branch as described above. Even if it was an API call for a specific branch test that the user is already viewing, we can’t tell that unless the cookie is set at REQUEST time for EVERY request (see the sketch just below this list).
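
To illustrate that second caveat, here’s a hedged sketch of what “forwarding the cookie” can look like from a server-side or third-party caller (the URL and function name are placeholders, and it assumes a runtime with the web fetch API). Same-origin requests from the browser itself will usually include the cookie for you automatically:

// Illustrative sketch only: a server-side call back to the split-tested site
// that forwards the visitor's nf_ab cookie, so the same branch answers this
// request as the page the visitor is already looking at.
async function fetchFromSameBranch(visitorCookieHeader: string): Promise<unknown> {
  const nfAb = visitorCookieHeader
    .split(';')
    .map((part) => part.trim())
    .find((part) => part.startsWith('nf_ab='));

  const response = await fetch('https://example.netlify.app/api/data', {
    // Forward the cookie so the edge can keep branch affinity for this call.
    headers: nfAb ? { cookie: nfAb } : {},
  });
  return response.json();
}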

Not sure if that’s what you were looking for or not, but happy to go deeper into whatever follow-up questions you might have.

That’s really helpful, thank you.

Currently you can only run 4 different branches? That would allow you to run two different tests at the same time (2^2 = 4).

There is only ever one test on a site at a time, using the number of branches you’ve specified. The test goes across every branch in the percentages specified. Our UI limits you to 4 because things do get a bit confusing to display and slice/dice the percentages with the sliders beyond that, but if you needed e.g. 6 for some reason, you could ping us here and we could guide you through using our API to set more branches up.

Another cool trick, which I didn’t mention before, is that you can set up a 0/100% split test. Why would that make any sense? You’d have the 0% branch be a beta, and you can explicitly set an nf_ab cookie to 0 to allow people to use your beta without “accidentally” assigning it to anyone who wasn’t an intended tester.
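
For instance, an intended tester could opt themselves in right from the browser console. A minimal sketch of the step described above:

// Explicitly opt this browser into the 0% branch of a 0/100% split test by
// setting the nf_ab cookie to 0, then reloading so the edge serves that branch.
document.cookie = 'nf_ab=0; path=/';
window.location.reload();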

I understand what you mean.

Sorry, I should have explained.
To test 2 changes on the website, it would be ideal to run both changes simultaneously to check for conflicts between the tests and to speed up the A/B testing process, provided you get enough visitors to reach statistical significance.

I can achieve this by creating another Git branch and merging the other test branches in to create each variant.

E.g.:
Branch one - No Tests
Branch two - Test A
Branch three - Test B
Branch four - Test A & B

I would be really interested in the API, because to run 3 experiments you would need to run 8 variants/branches (2^3 = 8).

Sure, that workflow makes sense. I don’t have specific API instructions, though I could dig some up if you can’t figure it out using the workflow described here:

If I recall from the last time I worked on this about a year ago, it’s something like:

(first, deploy all branches you’ll use, then):

  • make a GET call to get the ID of the split test for the site.
  • use that ID to adjust its settings in a separate call (rough sketch below).
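
If it helps, a rough sketch of those two calls is below. The traffic_splits endpoint paths and the branch_tests payload shape are my assumptions based on the public API spec, so please double-check them against the current API docs before relying on this (it also assumes Node 18+ for built-in fetch, run as an ES module, with a personal access token in the environment):

// Rough sketch of the two-step workflow above - endpoint paths and payload
// shape are assumptions; check the current Netlify API docs first.
const API = 'https://api.netlify.com/api/v1';
const siteId = 'YOUR_SITE_ID'; // placeholder
const headers = { Authorization: `Bearer ${process.env.NETLIFY_AUTH_TOKEN}` };

// 1. Look up the split test(s) for the site to find the id.
const splitTests = await fetch(`${API}/sites/${siteId}/traffic_splits`, { headers })
  .then((res) => res.json());
const splitTestId = splitTests[0].id;

// 2. Use that id to set more branches/percentages than the UI sliders allow.
await fetch(`${API}/sites/${siteId}/traffic_splits/${splitTestId}`, {
  method: 'PUT',
  headers: { ...headers, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    branch_tests: { main: 50, 'test-a': 10, 'test-b': 10, 'test-c': 10, 'test-d': 10, 'test-e': 10 },
  }),
});

The branch names in that payload would need to match branches you’ve already deployed, per the step above.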

Hi y’all, I haven’t tried split testing myself so far, so I don’t know exactly, but this post’s video suggests that the nf_ab cookie contains the branch name as its value instead of a number?! The post is pretty outdated though… :thinking:

Maybe this could be added to the docs as it’s a pretty cool feature :star_struck:

If you do set the nf_ab cookie to a branch name that is part of your split test - even if that branch is set to 0% in the split test - folks will reliably be served that branch. It’s not a typical use case but it can help “hardwire” a request to a branch for testing. We use it for opt-in betas:

  1. set up a 0/100% test between beta and production
  2. have visitors click a button on a page on that site to set the cookie to the specific beta branch, i.e. the branch that is only served to people with the opt-in cookie set
  3. optionally have that page show a button to opt back out by unsetting the nf_ab cookie (both buttons are sketched below).
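
In code, those two buttons can be as small as this sketch, where 'beta' stands in for whatever your 0% branch is actually called:

// Opt-in / opt-out buttons for the recipe above.
function optIntoBeta(): void {
  document.cookie = 'nf_ab=beta; path=/';
  window.location.reload(); // reload so the CDN serves the beta branch
}

function optOutOfBeta(): void {
  // Expire the cookie so the next request is treated as a new visitor again.
  document.cookie = 'nf_ab=; path=/; expires=Thu, 01 Jan 1970 00:00:00 GMT';
  window.location.reload();
}

Reloading after changing the cookie matters, since the branch decision happens per request at the edge.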

Hey Chris, after I stop my split test, will all the users have the nf_ab cookie removed?

Hi @FleetOps_Engineer,

The cookie would not be removed automatically; as far as I know, we don’t send anything to expire it when a test stops. But once the cookie reaches its expiration, it won’t be sent with requests anymore.

I asked about removing the cookies because I want to make sure that users who were in a previously stopped split test start fresh in a new split test with a new nf_ab cookie value. What do you guys recommend?

I have a couple of ideas in mind like:

  • set a specific end date as the cookie’s expiration date to force it to be invalidated when we expect the experiment to be shut down. I tried this out, but Netlify always changes my expiration date back to the one that was previously set by Netlify.
  • set the nf_ab cookie to nf_ab=;expires=Thu, 01 Jan 1970 00:00:01 GMT;. In this case I noticed that when I have the experiment running and do this, sometimes my app crashes after reload. Any thoughts on that? A mixed content issue?

Thanks

@fool can you help me out with the issue above?

How are you doing it? Using JavaScript?

This is not recommended. With Split Testing active, Netlify expects the cookie to have a value, and since the cookie is still sent, Netlify tries to read that value (which is an empty string in this case), and that’s what’s causing issues.

I think removing the cookie with JavaScript should do the trick - doesn’t it work?

Hi @hrishikesh

  1. I’m setting the expiry date like this:
// Re-set the nf_ab cookie with an explicit expiration date (the planned end
// of the experiment). The date is parsed with the -05:00 offset used below.
function setCookieExpiration(
  cookieName: string,
  cookieValue: string,
  endDate: string,
  path: string = '/',
): void {
  const utcTime = new Date(`${endDate}T00:01:00-05:00`).toUTCString();
  const expires = `expires=${utcTime}`;
  document.cookie = `${cookieName}=${cookieValue};${expires};path=${path}`;
}
const val = getCookieValue('nf_ab'); // this gets the cookie value like 0.123435
setCookieExpiration('nf_ab', val, '2022-01-15');
  2. About removing the cookies by setting the expiration to 01 Jan 1970 00:00:01 GMT: I’m using JavaScript and the recommended way of opting out that you guys suggest over here: Netlify pro tip: Using Split Testing to power private beta releases.

Hi, @FleetOps_Engineer. I would like to revisit an assumption, namely this:

  • The nf_ab cookie from a previous split test will invalidate a new split test.

In other words: “Is there any problem with reusing the nf_ab cookie from a previous split test?”

The answer to that question is simply: “No.”

The explanation for this is that the nf_ab cookie has only two important properties:

  • It must remain consistent for a single browser session.
  • It must be a random value between 0 and 1.

Because we randomly generate the value and browsers automatically resend the cookie, both conditions are met.

The purpose of the random values is to make sure all site visitors are distributed across the split test with the right percentages. The cookie doesn’t send you to a specific test; it just records which percentage was assigned to you. If the split test percentages change, you will automatically be directed to the correct new branch. Deleting the cookie and recreating it is not required.

For example, let’s say we send the following cookie values, each one to a different person:

nf_ab=0.992722
nf_ab=0.160720
nf_ab=0.057449

Now, when you make the split test, let’s say it is 90% to main, 8% to branch_A, and 2% to branch_B.

The first two cookies above would go to main and the last cookie would go to branch_A. (The cookie value would need to be less than 0.02 to go to branch_B and none of the cookies above do.)

If you stopped the split test, the cookie would just be ignored. However, if you make a new split test with 75% to main and 25% to branch_C, then the first cookie above would go to main and the second two cookies to branch_C.

The point is, the cookie doesn’t directly map to a particular branch. It records which “percentage” has been chosen for this browser session; it does not point at an individual branch. The cookie “keeps the percentages correct” and that is all it does.

Because the randomness of the chosen value is unchanged, the distribution will be unchanged.

Note, this even distribution isn’t obvious with just three cookies; it would be much clearer with an example of one hundred cookies. The nf_ab values are evenly distributed between 0 and 1 (between 0% and 100%). This means that even when you change the test, the traffic will be balanced correctly.

If I had one hundred cookies 0.00, 0.01, 0.02 … 0.99, for a 50%/50% split test the percentage of site visitors to each branch would be 50% and 50%. If the split test changes to 25%/75%, the same cookies still give 25% and 75%. The cookies keep the percentages correct - they don’t route to an individual branch.
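
If it helps to see that arithmetic in code, here is a small illustration using the example cookie values above. The ordering of the buckets (smaller branches at the bottom of the 0 to 1 range) just mirrors the 90/8/2 example and is an illustration, not a documented guarantee:

// Re-bucket the same cookie values when the percentages change.
function bucketFor(cookie: number, splits: Array<[string, number]>): string {
  let upperBound = 0;
  for (const [branch, percent] of splits) {
    upperBound += percent / 100;
    if (cookie < upperBound) return branch;
  }
  return splits[splits.length - 1][0];
}

const cookies = [0.992722, 0.160720, 0.057449];

// 2% branch_B, 8% branch_A, 90% main -> [ 'main', 'main', 'branch_A' ]
console.log(cookies.map((c) => bucketFor(c, [['branch_B', 2], ['branch_A', 8], ['main', 90]])));

// 25% branch_C, 75% main -> [ 'main', 'branch_C', 'branch_C' ]
console.log(cookies.map((c) => bucketFor(c, [['branch_C', 25], ['main', 75]])));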

To summarize, you don’t need to remove the cookies, and I don’t recommend doing so as there is no benefit.

However, if there are other questions about this, please let us know.

I’m having the same issue despite having no serverless functions. There’s no active test right now as it was an urgent issue, but I will be starting one next week to monitor. Ticket number 163145

Hey @WilNichols,

I’ve responded to the ticket in the helpdesk.