[Support Guide] Making sure your builds use appropriate resources for Netlify's build system

Updated November 2021
Hopefully, your builds “just work” once you have things configured. That’s the intention, but the reality can fall a bit short, so this article talks in detail about some of the limitations that customers run into while building.

The operation of Netlify’s build system is pretty well documented in several other places.

Today, we’ll talk about why this is a case where size does matter!

Memory Usage

Let’s start with the size (memory usage) of your build processes. This matters because our default build environment is small: we guarantee you only 3 GB of memory and 1 CPU. You can count on that for every build, and you should design your build process to treat those figures as a maximum.

How can you tell how much memory is used? One way is to use the “local build” process and watch how much memory the docker container uses via your local memory profiler of choice (mine is top(1)) - that container is literally the same thing we run on our hardware. You could also limit that container to 3 GB via docker settings, to simulate our environment more precisely as a better “test” and less of a monitor. We can’t provide support on how you use docker, but I’m confident you can figure it out :slight_smile:
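To sketch what that docker setup might look like: the command below caps a local build container at 3 GB to mirror Netlify’s default build environment. The image name (`netlify/build`), mount path, and build command are assumptions here - adjust them to match Netlify’s documented local-build instructions for your site.

```shell
# Cap the container at 3 GB RAM (and no extra swap) so an over-hungry
# build fails locally the same way it would on Netlify's hardware.
docker run --rm -it \
  --memory=3g --memory-swap=3g \
  -v "$(pwd)":/opt/build/repo \
  netlify/build \
  sh -c "cd /opt/build/repo && npm run build"   # substitute your build command
```

With `--memory-swap` set equal to `--memory`, the kernel’s OOM killer terminates the process instead of letting it spill into swap, which is a closer match to what happens in our build environment.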

If you try to use more memory, there are a few potential results:

  1. Your build process will likely abort.

    • This might look like a silent failure (logs cut off mid-build; the build is marked as failed)
    • Or perhaps we tell you it got killed: 4:15:11 PM: /usr/local/bin/build: line 34: 1208 Killed hugo --log --templateMetrics
    • But it most often looks like a failure in some component of the build, such as:

    1:00:42 AM: [1676:0x3b4d0a0] 122458 ms: Scavenge 1325.8 (1422.8) → 1324.9 (1423.3) MB, 36.8 / 0.0 ms (average mu = 0.183, current mu = 0.043) allocation failure
    1:00:42 AM: [1676:0x3b4d0a0] 122467 ms: Scavenge 1326.0 (1423.3) → 1325.1 (1423.8) MB, 6.1 / 0.0 ms (average mu = 0.183, current mu = 0.043) allocation failure
    1:00:42 AM: [1676:0x3b4d0a0] 122476 ms: Scavenge 1326.2 (1423.8) → 1325.3 (1424.3) MB, 6.3 / 0.0 ms (average mu = 0.183, current mu = 0.043) allocation failure
    1:00:42 AM: <— JS stacktrace —>
    1:00:42 AM: ==== JS stack trace =========================================
    1:00:42 AM: 0: ExitFrame [pc: 0x13802305be1d]
    1:00:42 AM: Security context: 0x2df5db21e6e9
    1:00:42 AM: 1: SourceMapConsumer_allGeneratedPositionsFor [0x385e65beda99] [/opt/build/repo/node_modules/source-map/lib/source-map-consumer.js:~178] [pc=0x138024877ed8](this=0x208e15402291 ,aArgs=0x24d0c7c0cf19 )
    1:00:42 AM: 2: /* anonymous */(aka /* anonymous */) [0x28e39f0f6b99] [/opt/build/repo/nod…
    1:00:42 AM: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

  2. Rarely, you might get lucky and it will Just Work. If it doesn’t abort - congrats! Sometimes good things happen, but sticking to our benchmarks is the best way to guarantee things always work as intended.
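That “JavaScript heap out of memory” error above is Node.js hitting its own heap ceiling, which by default is often well below the container’s 3 GB. You can raise it with Node’s real `--max-old-space-size` flag via `NODE_OPTIONS` - the 2560 MB value here is an assumption chosen to leave headroom under the 3 GB cap for the rest of the build’s processes:

```shell
# Give Node a larger heap while staying safely under Netlify's ~3 GB ceiling.
# 2560 MB is an example value, not a recommendation for every project.
export NODE_OPTIONS="--max-old-space-size=2560"
# Your usual build command then inherits the setting, e.g.:
#   npm run build
echo "NODE_OPTIONS=$NODE_OPTIONS"
```

Note that raising this value past the container limit won’t help - it just changes which component reports the failure.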

Lots of files?

There is another frequent failure mode around deploying a successful build: your script runs and completes successfully, we begin to upload… and never finish:

4:50:11 PM: Build script success
4:50:11 PM: Starting to deploy site from 'public/'
4:50:17 PM: Creating deploy tree asynchronously
4:54:39 PM: 61049 new files to upload

What’s happening here? We have to upload your changed files (please check out this article about what “changed” means, and how to reduce the number of subsequent builds) via our API, and if there are tens of thousands of them, it is quite likely that you won’t be able to complete that in the allocated time (which is 15 to 30 minutes for the whole build + deploy process). Aside from slowing down or potentially blocking your deploys, it’s also a bad experience for your return visitors - they cannot use their local cache from their last visit for changed files. I don’t think you really changed 61,000 files in that last deploy with intention - I think you changed a single filename that is referenced in all of your files, effectively changing them as well. So, don’t do that :slight_smile:
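As a rough pre-flight check, you can count the files in your publish directory locally before pushing - if the number is in the tens of thousands, expect slow deploys even when most files are unchanged. A minimal sketch, assuming your publish directory is `public/` (substitute your own):

```shell
# Count the files in the publish directory that a deploy would consider.
PUBLISH_DIR="${PUBLISH_DIR:-public}"
mkdir -p "$PUBLISH_DIR"   # no-op if it already exists
FILE_COUNT=$(find "$PUBLISH_DIR" -type f | wc -l)
echo "files in $PUBLISH_DIR: $FILE_COUNT"
```

This only tells you the total file count, not how many are “changed” in Netlify’s eyes - but a huge total is usually the first sign that content-hashed filenames or similar cache-busting are multiplying your upload work.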

If you have a lot of changed files, but perhaps not too many, you may see an error like this within your normal build time limit:

failed during stage 'deploying site': Failed to execute deploy: Error: deploy timed out while waiting to enter states: prepared,ready

…then it’s worth posting a link to your deploy in this thread, to see if we can help you extend that time limit (which is 5 minutes after the build) so the files can be sent within the default time window I mentioned earlier.
