[Support Guide] Handling code-splitting issues on Netlify

Understanding code-splitting:

When building SPAs (or any kind of app that relies purely on client-side navigation), it’s common to end up with large bundle sizes depending on the size and nature of the application. However, these large bundles lead to a poor user experience, as users need to wait for the entire bundle to load before the app becomes interactive. Worse, your users might never access each and every code path, which means a big part of your bundle might never be used at all. Code-splitting exists to solve this problem.

Quoting MDN:

Code splitting is the practice of splitting the code a web application depends on — including its own code and any third-party dependencies — into separate bundles that can be loaded independently of each other. This allows an application to load only the code it actually needs at a given point in time, and load other bundles on demand. This approach is used to improve application performance, especially on initial load.

This solves the long load time issue, but here comes the problem causing this Support Guide to exist: the dreaded chunk-load errors.

Understanding chunk-load errors:

Once your code is split into multiple files, it’s common for your build tools to generate hashed file names like:

  • main.12345.js
  • chunk.page-foo.37168.js

and so on…

For traditional web servers, this was very important. They were not great at caching, and cache-busting via hashed file names was one way to ensure your users always got the latest bundle. On modern platforms like Netlify, these hashes are not a requirement as caching is handled much more effectively, but the hashes continue to exist and cause other problems.

Netlify uses atomic deploys, which means there is never a state where an old and a new deploy are live at the same time. As a result, if your file hashes change, the old files now return a 404 on the site’s production domain. This is expected, since the files no longer exist at that path, but it causes asset loads to fail if you deploy an update while a client is using the application and doesn’t reload the page. Consider this situation:

  • You deploy your app
  • The assets are in the format: /main.12345.js
  • Your user is accessing your website
  • You deploy a change
  • The assets are now named /main.371892.js
  • Since your app uses client-side navigation, the user doesn’t get this latest reference and tries to load /main.12345.js
  • This file doesn’t exist in your latest deploy
  • This gives a chunk-load error to your user
  • Your application crashes
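The failure mode above can be sketched in a few lines: a client-side router performs a dynamic import of a chunk whose hashed name no longer exists on the server. The file name and helper below are hypothetical:

```javascript
// Minimal sketch of a chunk-load failure (hypothetical chunk name).
// After a new deploy, the old hashed URL 404s and the import rejects.
async function loadPage(chunkUrl) {
  try {
    return await import(chunkUrl)
  } catch (err) {
    // This rejection is the dreaded chunk-load error
    return { error: String(err) }
  }
}

// e.g. loadPage('./chunk.page-foo.12345.js') rejects after a redeploy
```

In a real application this surfaces as a `ChunkLoadError` (webpack) or a "Failed to fetch dynamically imported module" error (Vite).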

This is the problem we’re trying to solve in this guide. Here are some potential solutions:

Solution 1 - Disable file hashing:

Probably the easiest way to deal with this issue is to disable hashes in the file names. This ensures that all of your files always get predictable names like /main.js. The exact way to disable this differs per framework (and some might not offer an option at all), but here’s a potential example of how this can be achieved using Vite:

import { defineConfig } from 'vite'

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Drop the content hash from the generated file names
        assetFileNames: 'assets/[name].[ext]',
        chunkFileNames: 'chunks/[name].js',
        entryFileNames: 'entries/[name].js'
      }
    }
  }
})

Caveats:

  • Depending on how much your files have changed, this could still break your application. For example, if the current DOM state does not match what the new chunks expect the DOM state to be, loading them in place would still break the app.

Solution 2 - Use permalinks:

When you publish a deploy, Netlify updates the following URLs:

  • Your custom domain (if any)
  • Your netlify.app subdomain

But each deploy has its own permalink that doesn’t change[1] no matter how many deploys you perform after that. This permalink is in the format:

https://<deploy-id>--<site-id>.netlify.app/path-to-asset.ext

<site-id> can also be replaced with <site-subdomain>
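Putting the format together, a permalink for an asset can be constructed like this (the deploy id and subdomain values below are placeholders):

```javascript
// Sketch: build a deploy permalink for an asset (placeholder values).
function deployPermalink(deployId, siteSubdomain, assetPath) {
  return `https://${deployId}--${siteSubdomain}.netlify.app${assetPath}`
}

deployPermalink('1234abcd', 'my-site', '/main.12345.js')
// → 'https://1234abcd--my-site.netlify.app/main.12345.js'
```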

This link stays live as long as the deploy remains live (90 days by default). You can configure your build tool or framework to serve your assets from this URL instead. For example, Vite can achieve that using:

import { defineConfig } from 'vite'
import { env } from 'node:process'

export default defineConfig({
  // DEPLOY_URL is set by Netlify during builds; fall back to '/' locally
  base: env['DEPLOY_URL'] || '/'
})

This ensures that users running a specific build continue to use that same build until they refresh manually.

Caveats:

  • Browsers might block these assets due to CORS. This can be solved by adding the CORS headers for your assets as described here: [Support Guide] Handling CORS on Netlify (section 3.2.1).
  • Assets served on the netlify.app subdomain do not use the High Performance Edge Network, so customers using that network would not get its benefits for those URLs.

Solution 3 - Disable automatic publishing:

The default behaviour of Netlify is to publish a build of your production branch to the production URL as soon as it’s done. However, this can be disabled: Manage deploys | Netlify Docs. This allows you to build multiple times and publish a build live only when it would cause the least disruption to your users.
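If you want to script the manual publish step, the Netlify API exposes an operation to publish (restore) a specific deploy. The sketch below assumes the `restoreSiteDeploy` endpoint from Netlify’s open-api definition and a personal access token; verify both against the current API docs before relying on this:

```javascript
// Hedged sketch: publish a specific deploy via the Netlify API.
// Endpoint assumed from the `restoreSiteDeploy` open-api operation.
function restoreDeployUrl(siteId, deployId) {
  return `https://api.netlify.com/api/v1/sites/${siteId}/deploys/${deployId}/restore`
}

async function publishDeploy(siteId, deployId, token) {
  const res = await fetch(restoreDeployUrl(siteId, deployId), {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` }
  })
  if (!res.ok) throw new Error(`Publish failed with status ${res.status}`)
  return res.json()
}
```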

Caveats:

  • Manual intervention is required.
  • It’s hard to estimate when disruption would be minimal without having enough data.

Solution 4 - Use Build Plugins:

Netlify allows you to create build plugins that provide additional functionality, including working with Netlify’s Build Cache: Create Build Plugins | Netlify Docs. You can create a plugin that stores the currently built files in the build cache. On the next deploy, you’d restore this cache, compare the restored files against the files you wish to keep, and save those to the publish directory.

Example code would be something like:

export const onBuild = async ({ utils }) => {
  const files = await utils.cache.list({
    depth: 100
  })
  // Decide which previously cached files should be carried over
  const filesToRestore = files.filter(file => {
    return true // handle the comparison here
  })
  await utils.cache.restore(filesToRestore)
  // Similarly, write the required code for saving files to the cache.
  // You can save all files, or conditionally pick a specific set; to
  // save, call `await utils.cache.save('path')` or pass an array of paths.
}
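The save side mentioned in the comments above could look something like this — a minimal sketch that caches the whole publish directory after each build (whether you save everything or only a subset depends on your comparison logic):

```javascript
// Hedged sketch of the save side: cache the publish directory so the
// next build can restore and compare against it.
export const onPostBuild = async ({ constants, utils }) => {
  await utils.cache.save(constants.PUBLISH_DIR)
}
```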

Caveats:

  • Needs some engineering effort and testing.
  • The Build Cache is deleted approximately a month after the last build for the same context, so you might not be able to use the cache if you don’t deploy frequently.
  • You’d have to configure the comparison logic correctly, or else you might end up keeping age-old files in your deploy. This might or might not be expected.

Solution 5 - Gracefully handle chunk-load errors:

Another option is to accept that these errors can happen and, instead of showing users a broken application, implement some error handling in the main chunk of your application. Trigger it when a chunk fails to load, and display a UI saying the site failed to load and asking the user to refresh. You could also present this in a creative way that doesn’t upset users (as much). Here’s an example of how other users achieved this using Vite: vite load chunk error after new build (Failed to fetch dynamic imported modules) · Issue #247 · hannoeru/vite-plugin-pages · GitHub
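As a hedged sketch of that idea (the helper name and sessionStorage key are made up), you can wrap dynamic imports so that the first failure triggers a full reload, which picks up the new deploy’s chunk names, and only rethrow if the chunk fails again after the reload:

```javascript
// Hedged sketch: on the first failed chunk load, force a full page
// reload to pick up the new asset names; rethrow on a repeat failure.
async function importWithReload(loader) {
  try {
    return await loader()
  } catch (err) {
    if (!sessionStorage.getItem('chunk-reload-attempted')) {
      sessionStorage.setItem('chunk-reload-attempted', '1')
      location.reload()
      return
    }
    throw err
  }
}

// usage: importWithReload(() => import('./page-foo.js'))
```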

Caveats:

  • Showing your users that your application broke might not be desirable and can degrade the user experience.

Solution 6 - Implement a service worker:

This is a more complex solution than the ones listed above, but it could work for some people. You can use a service worker to cache your files and continue serving them. Within the service worker, you can implement logic that periodically checks the server for updates and asks the client to refresh the website once updates are detected. Here’s some example code:

addEventListener('message', async event => {
  if (event.data === 'handshake') {
    const buildInfoRes = await fetch('/build.json')
    const buildInfo = await buildInfoRes.json()
    // `1` is the version this service worker was deployed with
    if (buildInfo.version !== 1) {
      event.source.postMessage('update')
    }
  }
})

The above code runs whenever the client sends a message with the content 'handshake' to the service worker. The service worker then fetches the build.json file from the server (you can make this a periodic check instead of a one-time one), which contains data like the following (be sure to include data that suits your use case):

{
  "time": 1708926682,
  "version": 2
}
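For the periodic-check variant mentioned above, the one-time handshake can be replaced with a polling helper. A minimal sketch, assuming a version constant baked in at build time and an injectable fetch function for testing:

```javascript
// Hedged sketch: compare the deployed build.json against the version
// this bundle was built with (CURRENT_VERSION is baked in at build time).
const CURRENT_VERSION = 1

async function checkForUpdate(fetchFn = fetch) {
  const res = await fetchFn('/build.json')
  const info = await res.json()
  return info.version !== CURRENT_VERSION
}

// e.g. inside the service worker:
// setInterval(async () => {
//   if (await checkForUpdate()) { /* notify connected clients */ }
// }, 60_000)
```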

Whenever the version property differs from the one defined in the service worker, the worker sends a message to the client with the text 'update'. The client receives this message and triggers some UI feedback asking the user to refresh. Here’s the client-side implementation:

if ('serviceWorker' in navigator) {
  await navigator.serviceWorker.register('/sw.js', {
    scope: '/'
  })
  navigator.serviceWorker.addEventListener('message', event => {
    if (event.data === 'update') {
      // show the UI prompt to reload page
      if (confirm('Needs refresh, reload?')) {
        location.reload()
      }
    }
  })
  navigator.serviceWorker.ready.then(registration => {
    registration.active.postMessage('handshake')
  })
}

In the above example, we’re using the confirm API of browsers, but you could show a toast or a dialog of your own choice.

This solution is the most flexible, as it allows you to configure exactly how you wish to handle this scenario. Possible alternatives:

  • Cache all files and keep serving those. Then, when the user closes the tab, remove all the files from the cache, fetch the new data and cache it for future use.
  • Selectively update each file in the cache.
  • Something else.

Caveats:

  • Service workers are comparatively complex to implement and test correctly.

We understand that none of these might be a perfect solution. So if you’re having trouble implementing them, ran into another caveat, or have more solutions to share, please don’t hesitate to reach out in a new thread and we’d be happy to help.


  1. There might be changes to the underlying infrastructure, such as on the CDN level, or occasionally changes within the infrastructure of some upstream providers, which can cause some differences. ↩︎
