[Support Guide] Testing your Netlify builds

Running tests as part of a continuous deployment process is a fantastic way to ensure that your code is indeed production ready and not prone to errors you can anticipate. Fortunately, Netlify’s build system and Deploy Previews can help make testing your application easier!

I think most testing fits into three categories:

  • unit testing - most important during a cycle of active development, but it can also be run at build time. This is more often than not a run-your-code-and-check-the-results test rather than a browser test. Did you know that you can run shell scripts and other multi-process pipelines in our build environment? The pattern for aborting the build when a test fails is to make sure your build command exits with a non-zero shell status if any important test fails. We’d be happy to advise you on this in case you’re not a Unix whiz - it’s not super intuitive.

  • integration testing - making sure that all the pieces work well together. This could be a pass/fail test of some functionality via internal methods, or a browser-based test using a headless browser. In both cases, you still wouldn’t want to publish the build if the tests fail, so the same pattern as above - ensuring that your test process returns the correct exit status to the parent shell - is crucial for us NOT to publish a build your own code could identify as broken!

  • acceptance or end-to-end testing - making sure the site works once published. This is impossible to test during build - you can’t point a browser at a real webpage served on our CDN before it is built and deployed. Here’s where you should be leveraging Deploy Previews! When you think your feature branch is ready to ship, you create a Pull Request to your production branch with the code change. That deploy might surface problems that your unit or integration testing didn’t see, so you might want to point a tool like BrowserStack or some other browser-based testing service at your site to “really” test it. This will let you test redirects, proxying, SSL, or even visual indicators that would be hard to check in headless Chrome. This is also a good time to get your stakeholders to review the work - user acceptance testing - BEFORE merging the PR.
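The abort-on-failure pattern the first two bullets describe comes down to shell exit statuses. Here’s a minimal sketch with stand-in functions (fake_test and fake_build are hypothetical placeholders for your real test and build commands):

```shell
# Stand-ins for a real test runner and build step:
fake_test()  { echo "tests failed"; return 1; }
fake_build() { echo "building site"; }

# Because fake_test exits non-zero, fake_build never runs, and the
# overall exit status is non-zero - which is what aborts the build.
fake_test && fake_build
status=$?
echo "exit status: $status"
```

Run it and you’ll see only “tests failed” followed by “exit status: 1” - the build step is skipped entirely.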

So, at a high level, you have a plan - but how will you implement it? Here are some specific example implementations:

Unit testing: you could use an npm package like the ones described here, and chain your tests with your build using a build command like:

npm run test && npm run build

When you use this pattern, the build step only runs if the tests succeed: as long as the test process returns a non-zero (unsuccessful) exit code to the parent shell, we’ll mark the build as failed before we even try to build. You’ll want to make sure your test tool logs some details when a test fails, so you can examine the build logs and understand which test(s) failed!
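To make the “log details and return the right status” advice concrete, here’s a sketch of a small harness your build command could call. The suite commands are placeholders ('true' and 'false' stand in for passing and failing tests) - adapt it to however your project invokes its tests:

```shell
#!/usr/bin/env bash
# Hypothetical harness: run every suite, log any failure so it shows up
# in the deploy log, and return non-zero so the build is marked failed.
run_suites() {
  local status=0
  for suite in "$@"; do
    if ! $suite; then
      echo "FAILED: $suite"
      status=1
    fi
  done
  return $status
}

# 'true' and 'false' stand in for real test commands here:
run_suites true true  && echo "all passing -> exit 0"
run_suites true false || echo "one failing -> exit 1"
```

In a real build you’d call run_suites with your actual test scripts and end with exit $?, so the failing status propagates to the parent shell and aborts the build.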

Integration testing:

If you’re using a headless browser, try this approach:

  • Build your code normally, running unit tests first. If the unit tests pass & the build process is successful…
  • Then fire up a server process in our build environment IN THE BACKGROUND so you can run other processes while it is running. This is the ONLY time we’ll advise you to run a server during build!
  • Note that you’ll have to run your server on a non-privileged port (numbered above 1024), since you do not have root access in the container to use a privileged port below 1024 (such as 80 or 443).
  • Make sure your build uses very little memory. This will probably be the hardest part: Chrome may very well not run within the build container, depending on what else is happening on that host. (For reference, your builds happen in a Docker container with at minimum 1 CPU and 1.5GB of memory.) A good way to keep memory down is to run your tests OUTSIDE of yarn/gatsby (so those aren’t still “in memory” while you try to load Chrome) - with a build command like this, perhaps: gatsby build && ./run_my_test
  • You’ll need to clean up the server process after testing - if you leave it running, your build will stall and fail.
  • We recommend you configure and test this using these instructions in our docs to simulate our build environment, for a quicker check/fix cycle (you can even run tests repeatedly within the same container, without needing to trigger a new build, as you debug).
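Putting those steps together, the run_my_test script from the build command above might look something like this sketch. The real commands would be a static file server (on a non-privileged port) and a headless-browser test runner; here 'sleep 60' and fake_browser_test are runnable stand-ins, with realistic equivalents noted in the comments:

```shell
#!/usr/bin/env bash
# Sketch of the background-server pattern: serve the already-built site
# in the background, test against it, then clean up.

sleep 60 &                      # stand-in for e.g.: npx serve -l 8000 public &
server_pid=$!                   # remember the background server's PID

fake_browser_test() { echo "checking http://localhost:8000"; return 0; }
fake_browser_test               # stand-in for your real headless-browser test
test_status=$?

kill "$server_pid"              # clean up, or the build will stall and fail
echo "test exit status: $test_status"   # a real script would: exit $test_status
```

The key points are the trailing & to background the server, capturing $! so you can kill it afterward, and exiting with the test’s status so a failure fails the whole build.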

Acceptance testing:

  1. build but do not publish your deploy. You can use our Locked Deploys feature to accomplish this, and of course Deploy Previews are NEVER automatically published at your production URL
  2. use that deploy preview’s specific URL to do your testing from an external service. In case you weren’t aware, EVERY deploy has a permalink in the form https://HASH--sitename.netlify.com that you’ll see in your deploy notifications or you can get from the API or your deploys listing by clicking the timestamp, as shown in the below screenshot
  3. then you could use the API to publish a deploy whose tests have passed. This will remove most/all of the potential problems I’m going to mention below
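As a sketch of that flow, a job outside Netlify could test the deploy permalink and then publish that deploy through the API. Everything here is an assumption to adapt - SITE_ID, DEPLOY_ID, and fake_acceptance_tests are placeholders, and you should verify the publish endpoint against the current Netlify API docs:

```shell
#!/usr/bin/env bash
# Hypothetical publish-on-green flow. SITE_ID and DEPLOY_ID are
# placeholders; fake_acceptance_tests stands in for a real
# browser-based test run against the deploy permalink.
SITE_ID="your-site-id"
DEPLOY_ID="abc123def456"
DEPLOY_URL="https://${DEPLOY_ID}--sitename.netlify.com"

fake_acceptance_tests() { echo "testing $1"; return 0; }

if fake_acceptance_tests "$DEPLOY_URL"; then
  # Publish the tested deploy via the API - check the current Netlify
  # API docs for the exact endpoint; our understanding is something like:
  #   curl -X POST -H "Authorization: Bearer $NETLIFY_AUTH_TOKEN" \
  #     "https://api.netlify.com/api/v1/sites/$SITE_ID/deploys/$DEPLOY_ID/restore"
  echo "tests passed - would publish deploy $DEPLOY_ID"
else
  echo "tests failed - leaving the previous deploy published"
fi
```

Because the deploy is only published after the tests pass, your production URL never serves a build your acceptance tests flagged as broken.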

Pro Tips for unit and integration testing:

  1. If you have a public repository you can make your build logs visible to anyone even if they are not part of your Netlify team.
  2. You can have our notifications - using GitHub Checks, or similar functionality for GitLab - post links to the failed deploy logs right in your PR’s comments.
  3. You’ll have to install your tools - we don’t have rspec or middleman installed by default. See this article on configuring builds for more information on creating that configuration in a way that our CI uses it.

Finally, there is a new pattern in town that may be relevant to you AND save you some work! With the launch of Netlify’s Build Plugins, folks have automated some common tasks you might like to run with any deploy, such as verifying links or monitoring site performance.

Let us know how your testing goes, or whether you have other ideas or approaches!