[Support Guide] Making sure your builds use appropriate resources for Netlify's build system

Last reviewed by Netlify Support: April 2024

Hopefully, your builds at Netlify “just work” once you have things configured. That’s the intention and the common case, but for those times when your reality falls a bit short of our dream, this article talks in detail about some of the limitations customers run into while building, and how to work around them.

The operation of Netlify’s build system is pretty well documented in several places such as:

Today, we’ll talk about why this is a case where size does matter!

Memory Usage

Let’s start with the size (memory usage) of your build processes. The reason this matters is that our default build environment is not oversized: on the Starter plan, we guarantee you 8 GiB of memory and 4 CPUs. You can count on that for every build, and you should design your build process to treat those figures as a maximum. On Paid plans (Pro, Business, and Enterprise without High-Performance Builds) you get up to 11 GiB of memory and 6 CPUs. Finally, Enterprise customers using High-Performance Builds get up to 36 GiB of memory and up to 11 CPUs.

How can you tell how much memory is used? One way is to run a “local build” with the Netlify CLI and watch how much memory the CLI process and its subprocesses use with your local memory profiler of choice (mine is top(1)) - that will be fairly close to what we run on our hardware.
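
If you’d rather get a single number than watch a profiler, here is a minimal sketch. It assumes the Netlify CLI is installed and linked to your site, and that you’re on Linux, where GNU time supports the -v flag:

    # Run the same build the buildbot would run, wrapped in GNU time.
    /usr/bin/time -v netlify build
    # When it finishes, "Maximum resident set size (kbytes)" is a rough proxy
    # for the peak memory your build needed - compare it to your plan's ceiling.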

This article from Gatsby has some very practical advice on profiling your memory usage, and on reducing it (applicable to anyone using Node.js during their build - not just Gatsby users!).
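
For example, if your build is Node-based, you can watch V8’s garbage collector while the build runs locally. This is just a sketch - the gatsby path below is an assumption, so point the flag at whatever your build command actually runs:

    # Print a line for every garbage collection so you can watch heap growth.
    node --trace-gc node_modules/.bin/gatsby build
    # If the heap size keeps climbing toward the limit instead of levelling off,
    # that's where to focus your optimization (or memory-limit) efforts.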

If you try to use more memory than that, there are a few potential results, but it’s likely that your build process will abort mid-build, perhaps in the same place each time, or perhaps in slightly different places, depending on how you use memory:

  • This might look like a silent failure (logs cut off mid-build; build is marked as a failure)
  • Or perhaps we tell you it got killed: 4:15:11 PM: /usr/local/bin/build: line 34: 1208 Killed hugo --log --templateMetrics
  • But it most often looks like a failure in some component of the build, such as:
    1:00:42 AM: [1676:0x3b4d0a0]   122458 ms: Scavenge 1325.8 (1422.8) -> 1324.9 (1423.3) MB, 36.8 / 0.0 ms  (average mu = 0.183, current mu = 0.043) allocation failure
    1:00:42 AM: [1676:0x3b4d0a0]   122467 ms: Scavenge 1326.0 (1423.3) -> 1325.1 (1423.8) MB, 6.1 / 0.0 ms  (average mu = 0.183, current mu = 0.043) allocation failure
    1:00:42 AM: [1676:0x3b4d0a0]   122476 ms: Scavenge 1326.2 (1423.8) -> 1325.3 (1424.3) MB, 6.3 / 0.0 ms  (average mu = 0.183, current mu = 0.043) allocation failure
    1:00:42 AM: <--- JS stacktrace --->
    1:00:42 AM: ==== JS stack trace =========================================
    1:00:42 AM:     0: ExitFrame [pc: 0x13802305be1d]
    1:00:42 AM: Security context: 0x2df5db21e6e9 <JSObject>
    1:00:42 AM:     1: SourceMapConsumer_allGeneratedPositionsFor [0x385e65beda99] [/opt/build/repo/node_modules/source-map/lib/source-map-consumer.js:~178] [pc=0x138024877ed8](this=0x208e15402291 <BasicSourceMapConsumer map = 0x3578d190d781>,aArgs=0x24d0c7c0cf19 <Object map = 0x22b423e33b09>)
    1:00:42 AM:     2: /* anonymous */(aka /* anonymous */) [0x28e39f0f6b99] [/opt/build/repo/nod...
    1:00:42 AM: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

This article has a lot more details about how to try to work around this memory allocation issue. Hopefully it helps you, though it is not a magic bullet or a one-size-fits-all “solution”.
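
One mitigation that often comes up for Node-based builds is adjusting V8’s old-space heap ceiling with the NODE_OPTIONS environment variable (on Netlify, you would set it in your site’s build environment variables). This is a sketch with an assumed value - the point is to keep Node’s heap comfortably below your plan’s memory guarantee, since Node is not the only thing running during your build:

    # The value is in MiB; 4096 here is just an example, not a recommendation.
    NODE_OPTIONS="--max-old-space-size=4096" npm run build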

Lots of files?

There is another frequent failure mode around deploying a successful build. Your build script runs and completes, we begin to upload the results…and never finish:

4:50:11 PM: Build script success
4:50:11 PM: Starting to deploy site from 'public/'
4:50:17 PM: Creating deploy tree asynchronously
4:54:39 PM: 61049 new files to upload

What’s happening here? We have to upload your changed files via our API (please check out this article about what “changed” means, and how to reduce that number in subsequent builds), and if there are tens of thousands of them, it is quite likely that the upload won’t complete in the allocated time (15 to 30 minutes for the whole build + deploy process). Aside from slowing down or potentially blocking your deploys, it’s also a bad experience for your return visitors: they cannot reuse their local browser cache from their last visit for files whose checksum or name has changed in a more recent deploy.

I don’t think you really intended to change 61,000 files in that last deploy - I think you changed a single filename that is included in all of your other files (a fingerprinted asset name referenced from every page, for example), effectively changing them as well. So…don’t do that :).
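
If you want to check this for your own site, one rough way is to compare the output of two local builds that differ only by your small change (“public” is just an example output directory - use whatever your generator publishes to):

    # Build once and keep a copy of the output.
    netlify build && cp -r public public-previous
    # Make your one-line change, then build again and count the differing files.
    netlify build
    diff -rq public-previous public | wc -l
    # If that number is close to your total file count, a fingerprinted asset
    # name (hash in the filename) referenced from every page is a likely culprit.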

If you have a lot of changed files, but perhaps not quite that many, you may see an error like this within your normal build time limit:

failed during stage 'deploying site': Failed to execute deploy: Error: deploy timed out while waiting to enter states: prepared,ready

In that case, it’s worth posting a link to your deploy in this thread so we can see whether we can extend that time limit for you (by default you get only 5 minutes to “wrap things up” after the build completes), since a longer window may be all that’s needed for the files to finish uploading. Please note that this solution applies only to the exact error message quoted just above, not to builds in general!
