Last updated by Netlify Support: October 2024
Are you struggling to get your build working? Most build errors are caused by problems in code or data, and for those errors, the best general starting place for debugging build troubles is this guide:
But sometimes your build has been working fine and suddenly ends with inscrutable errors that don’t even seem to be about your code or site. Perhaps you see an error message like this in your builds?
```
2:28:35 PM: <--- Last few GCs --->
2:28:35 PM: [1541:0x3bb00a0] 154094 ms: Mark-sweep 1351.5 (1425.5) -> 1350.8 (1428.0) MB, 3449.9 / 0.0 ms (average mu = 0.076, current mu = 0.007) allocation failure scavenge might not succeed
2:28:35 PM: <--- JS stacktrace --->
2:28:35 PM: ==== JS stack trace =========================================
2:28:35 PM: 0: ExitFrame [pc: 0x114d680dbe1d]
2:28:35 PM: Security context: 0x369735b1e6e9 <JSObject>
2:28:35 PM: 1: addMappingWithCode [0x2a57bba6f771] [/opt/build/repo/node_modules/webpack-sources/node_modules/source-map/lib/source-node.js:~150] [pc=0x114d692be575](this=0x078de248d481 <JSGlobal Object>,mapping=0x3605ff1e57b9 <Object map = 0x314af4c97809>,code=0x2a12adc6a431 <String[9]: 9633]}}},>)
2:28:35 PM: 2: /* anonymous */ [0x3d024199afc9] [/opt/build/repo/node_modul...
2:28:35 PM: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
2:28:35 PM: 1: 0x8f9d10 node::Abort() [node]
2:28:35 PM: 2: 0x8f9d5c [node]
2:28:35 PM: 3: 0xaffd0e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
2:28:35 PM: 4: 0xafff44 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
2:28:35 PM: 5: 0xef4152 [node]
2:28:35 PM: 6: 0xef4258 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [node]
2:28:35 PM: 7: 0xf00332 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]
2:28:35 PM: 8: 0xf00c64 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
2:28:35 PM: 9: 0xf038d1 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [node]
2:28:35 PM: 10: 0xeccd54 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [node]
2:28:35 PM: 11: 0x116cede v8::internal::Runtime_AllocateInNewSpace(int, v8::internal::Object**, v8::internal::Isolate*) [node]
2:28:35 PM: 12: 0x114d680dbe1d
2:28:42 PM: /usr/local/bin/build: line 34: 1541 Aborted
```
Or perhaps your build is chugging along and suddenly ends with a “Killed” log message:
```
10:14:05 AM: /opt/build-bin/build: line 77: 1351 Killed [... details about the build]
```
Both of these are signs that your build attempted to allocate more memory than is available in our build system. This article describes what is allocated to each build, for your reference. So what can you do if you’re seeing something like the above?
- Improve your build performance.
We find that Gatsby is the site generator that most often runs into this problem, though it is possible in any build pipeline. So if you use Gatsby and see this, please start by reading these two articles about improving Gatsby build performance:

Those articles have guidance on some pitfalls in commonly used Gatsby configs, along with suggestions for working around them.
- Evaluate whether you need to scale simultaneous builds.
Have a huge site and just need it to work? We do offer High Performance Builds, and we’ll be happy to connect you with our sales team to discuss pricing, which is custom for every use case and can scale to dozens of simultaneous builds. There’s a bit more detail on build environment size in the first paragraph of this section of this other guide:
- Adjust how Node.js runs.
You can try adjusting how Node.js runs, specifically by raising its memory (heap) limit. Customers have had success with various different sizes, like the ones described in this Stack Overflow post. This may mean changing a build command like `npm run build` so that its scripts, instead of running subprocesses like `gatsby build` directly, run something like `node --max-old-space-size=4096 gatsby build`. Alternatively (this should have the same result), you can set a `NODE_OPTIONS` environment variable to something like `--max-old-space-size=4096`.
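To make that concrete, here’s a minimal sketch of both approaches from a shell. The 4096 MB value is illustrative (your build may need more or less), and the `./node_modules/.bin/gatsby` path assumes a locally installed Gatsby; adjust for your own tool:

```bash
# Approach 1: set NODE_OPTIONS so every Node.js process the build spawns
# inherits the larger heap limit (you can also set this as a build
# environment variable in the Netlify UI instead of in a script):
export NODE_OPTIONS="--max-old-space-size=4096"
npm run build

# Approach 2: invoke the generator's CLI script through node directly,
# passing the flag to just that one process:
node --max-old-space-size=4096 ./node_modules/.bin/gatsby build
```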
You can also try profiling your build to see which part of it might be overusing memory. One very relevant way to do this is to follow the instructions for running our build image locally while constraining the memory available to Docker, so you can figure out how much memory is needed and try to reduce it. Within the Docker image, you can use the workflow described in this article to do the actual profiling.
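As a rough sketch of that workflow, assuming Docker is installed locally (the image name, tag, and mount path below are assumptions for illustration; check our build-image repository for the current details):

```bash
# Cap the container at 3 GB of memory (an illustrative value) with no
# extra swap, so the build fails the same way it would on a memory-limited
# host. Raise or lower --memory between runs to bracket what your build needs.
docker run --rm -it --memory=3g --memory-swap=3g \
  -v "$(pwd)":/opt/build/repo \
  netlify/build:focal bash

# Then, inside the container:
#   cd /opt/build/repo && npm run build
```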
If you run parallel workers to complete your build, consider not doing that, or constraining your CPU usage. In our default build environment there is a limited number of CPUs available (it varies by account; see this doc for more details), so running more parallel workers than that will only increase memory contention without improving performance. If you use Gatsby, you may find this guidance helpful: Resolving Out-of-Memory Issues | Gatsby
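For example, if you use Gatsby, here’s a sketch of capping the worker count via the `GATSBY_CPU_COUNT` environment variable that Gatsby reads; the value 2 is illustrative and should match the CPUs your plan actually provides:

```bash
# Limit Gatsby to 2 workers instead of one per reported CPU core,
# reducing memory contention in a constrained build environment:
GATSBY_CPU_COUNT=2 gatsby build
```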
This advice won’t solve every problem, but it has solved most of the ones we’ve worked on with customers. If it didn’t help you, please feel free to post a follow-up question describing the results of the experiments you tried from the list above, so we can best advise you further.
Good luck!