I’m building a new app with the Next 14 App Router and Sanity, and I was surprised to find relatively slow load times after deploying to Netlify. I have timed my Sanity fetch calls and I’m seeing around 1.5-2s on average in the deployed app, which is similar to what I get on my machine running the dev server, even after loading the same routes multiple times. I’m surprised, since I expected these calls to be cached. Running next build and next start to serve the site from my own machine, those same fetches are pretty much instant (timings show around 5ms), so it does look like they are cached there. I get the same quick load times using netlify dev on my machine. I have not set any caching options for the Sanity fetch calls or changed any caching settings for Next; it’s all default. My Sanity client is configured like this:
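(The values below are placeholders rather than my real project details, but the shape is the standard @sanity/client setup; the apiVersion matches the one visible in the request logs later in this thread.)

```typescript
// Placeholder config — real projectId redacted. This object is what
// gets passed to createClient() from @sanity/client:
//   import { createClient } from "@sanity/client";
//   export const client = createClient(sanityConfig);
const sanityConfig = {
  projectId: "xxxxxx",      // placeholder
  dataset: "production",
  apiVersion: "2022-03-07", // same API version as in the fetch logs below
  useCdn: false,            // the logs hit *.api.sanity.io, not the CDN host
};
```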
I don’t pass any cache settings to the client.fetch call either.
It looks to me like the Next Data Cache isn’t being used when I deploy to Netlify, although it does appear to be working when I build and run locally. Have I made a mistake or is there a special consideration for getting caching working here?
If you see a slow request with the header cache-status: Next.js cache hit, then that’s an issue. Otherwise, it’s expected behaviour, since, as you mentioned, it takes 1.5-2s to fetch the data.
@hrishikesh I am referring to caching of the fetch requests made by the server to Sanity (Data Cache), not caching for the routes themselves (Full Route Cache). The routes in my app are not cached as they include dynamic data, however I expected the Sanity requests to be cached and I can see that they are in my dev environment; the timings I am referring to are for Sanity fetch calls, not for the overall request for the route.
That’s what gets stored in Blobs as far as I’m aware, and you should be able to check that via the Next.js cache status header. Is that not the case in your application?
I am not getting a Next.js cache status header for the dynamic routes that have a path param (e.g. /assessment/[responseId]/start). Here are the relevant cache headers I see for those:
It’s possible to log the Data Cache status of the fetch requests in dev, but not in deployed apps unfortunately. As I mentioned originally though, the timings I am seeing for the fetch calls suggest to me that the fetch requests to Sanity are not being cached when deployed to Netlify, although I can’t be certain exactly what’s happening without that logging. I have confirmed that they are definitely being cached when I am running the server locally, though.
Just for comparison I have tried deploying the same app with no changes to Vercel and the Data Cache is working as expected there: an initial route load after deploying triggers some fetches by the backend to Sanity that take around 200ms, and then subsequent loads of that same route or any other route making those same Sanity fetches sees them consistently take around 10-20ms.
The cache headers for the dynamic route responses (e.g. /assessment/[responseId]/start) are always:
cache-control: private, no-cache, no-store, max-age=0, must-revalidate
x-vercel-cache: MISS
For the static routes like / it’s this on the first request:
@hrishikesh I have set the function region to ap-southeast-2; is there any possibility that could be affecting retrieval of the cached data from blob storage?
Other than that I am stumped about what to do next. Do you have any thoughts based on what I’ve said so far? Is there anything else I should check?
If Next.js doesn’t cache it, Netlify won’t cache it either. What happens when you force caching using the cache option in your fetch? I believe you mentioned before that you’re not setting it?
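As a sketch (URL and revalidation window are placeholders, not from your app), opting in explicitly would look something like:

```typescript
// Fetch init that explicitly opts a request into Next's Data Cache.
// 'next.revalidate' is Next's extension to the standard RequestInit.
const cachedInit: RequestInit & { next?: { revalidate: number } } = {
  cache: "force-cache",       // store the response in the Data Cache
  next: { revalidate: 3600 }, // placeholder: revalidate the entry after 1 hour
};

// Inside a Server Component:
//   const res = await fetch(sanityUrl, cachedInit);
```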
The cache headers comparison that you’re drawing with Vercel seems to be a different cache than what we’re discussing. My guess (and I can only guess, as I don’t know the specifics of their setup) is that x-vercel-cache might be comparable to Netlify’s cache-status header. It would simply mean that the response was served from Vercel’s CDN cache and not the Next.js cache (because, as you pointed out, Next.js doesn’t cache this data).
If you absolutely want Netlify to cache your responses, you can specify the relevant cache control headers: Caching | Netlify Docs or even use Durable Cache: Caching | Netlify Docs
It doesn’t cache the route, it caches fetch requests made by the app on the server-side to Sanity CMS while Next renders the Server Components. I’m talking about a Next 14 specific concept called the Data Cache. Here is their documentation on the feature I’m talking about:
What I am talking about is not caching the responses sent to browsers by Next via Netlify, which is called the Full Route Cache in Next 14. My routes are dynamic routes by design and not eligible to be stored in the Full Route Cache and thus won’t have a cache-status header. I’m not expecting these routes to be cached by Netlify or Next and I don’t need or want them to be. I am, however, expecting the fetch requests made on the server to Sanity while generating those dynamic routes to be cached in the Data Cache. This is working on my machine and on Vercel but does not appear to be working on Netlify, which I have determined by logging the time taken to make those fetch requests on the server (roughly 10-20ms once cached, 2000ms+ when initially uncached).
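For reference, the timing I’m describing is just a simple wrapper around the calls, along these lines (the helper name is mine, not from any library):

```typescript
// Minimal timing wrapper used around the server-side Sanity fetches.
// Logs how long the wrapped async call took, then returns its result.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  const result = await fn();
  console.log(`${label}: ${Date.now() - start}ms`);
  return result;
}

// Usage: const data = await timed("sanity query", () => client.fetch(query));
```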
It’s possible to log the cache status of these server-side fetch requests in dev, which I have done, and it shows that the requests to Sanity are indeed being cached when I run the dev server. Unfortunately it isn’t possible to turn on this logging in production, so I can only go off the timings.
Does that make sense and could you please suggest what I might do to investigate the issue? I feel potentially there is a bug in the Netlify Next runtime around this.
Just to note that although the documentation I have linked above now says that fetch requests on the server are not cached by default, that is only as of Next 15:
In version 15, the default cache option was changed from 'force-cache' to 'no-store'. If you’re using an older version of Next.js, the default cache option is 'force-cache'.
So although I have not specified a cache setting in the server-side fetch requests, they are being cached in my app, as it’s using the Next 14 default of force-cache.
Very confusing
Here’s an example of the server logs I see in dev when loading a route that makes two Sanity requests with Next’s fetch logging enabled.
From cold start:
[@monorepo/frontend]: │ GET https://xxxxxx.api.sanity.io/v2022-03-07/data/query/production?query=xxxxxx 200 in 501ms (cache skip)
[@monorepo/frontend]: │ │ Cache skipped reason: (cache-control: no-cache (hard refresh))
[@monorepo/frontend]: │ GET https://xxxxxx.api.sanity.io/v2022-03-07/data/query/production?query=xxxxxx 200 in 502ms (cache skip)
[@monorepo/frontend]: │ │ Cache skipped reason: (cache-control: no-cache (hard refresh))
After having loaded the route once and until I manually delete the .next folder to clear the cached data:
[@monorepo/frontend]: │ GET https://xxxxxx.api.sanity.io/v2022-03-07/data/query/production?query=xxxxxx 200 in 11ms (cache hit)
[@monorepo/frontend]: │ GET https://xxxxxx.api.sanity.io/v2022-03-07/data/query/production?query=xxxxxx 200 in 11ms (cache hit)
Sorry for the delay. I checked this with the devs as well, and I don’t have the most exciting news for you, I’m afraid. All I can say is that the caching seems to be working as intended. I understand that’s probably not what you expected, but it is working.
This is how you can verify it:
Use the ModHeader extension (or any other tool to modify browser request headers).
Make a request to the problematic endpoints with a header named x-next-debug-logging and any value.
This should allow you to see server timings in the dev tools, something like:
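The same request can also be made from the command line if that’s easier (the site URL here is a placeholder):

```shell
# Send the debug header and print only the Server-Timing response header.
curl -s -D - -o /dev/null \
  -H "x-next-debug-logging: 1" \
  "https://your-site.netlify.app/assessment/123/start" \
  | grep -i "server-timing"
```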
The presence of blobStore.get calls proves that caching is being used. However, I agree that the timing is on the higher end; I would not expect that to take 150ms. That’s also potentially what’s causing the delays. I’m still checking with the devs on what the expected timing is here, or whether there are ways to improve it, but as far as the fetch cache is concerned, that’s working.
@hrishikesh Great, thanks for the follow up. I was not aware of that server timings technique, that’s very helpful. I’ll compare the timings on us-east-2 and try out a few more tests.