Good question, and happy to help out with a staff point of view.
Executive summary: at a high level, our standard CDN has a lower cache hit ratio than our high-performance edge. That, combined with the larger number and better distribution of nodes on the high-performance edge, generally gives it better performance.
Stats change over time, but here's a snapshot of the past hour on each of those CDNs.

On the standard CDN (tens of millions of requests):
- average response time is around 100ms, with an average asset size of 17,859 bytes (we can't measure TTFB, since that's a browser-side metric; instead we measure from receiving the request to sending the last byte)
- cache misses only: around 250ms, with an average asset size of 17,669 bytes
- cache hits only: around 80ms, with an average asset size of 18,128 bytes
- cache hits are around 80% of that traffic

On the high-performance edge (millions of requests):
- average response time is also around 100ms, with an average asset size of 26,447 bytes
- cache misses only: around 350ms, with an average asset size of around 31,000 bytes
- cache hits only: around 50ms, with an average asset size of around 25,700 bytes
- cache hits are around 93% of that traffic
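As a rough sanity check (my arithmetic here, not an official figure), the overall average should come out close to the hit/miss averages weighted by the cache hit ratio:

```python
# Sanity-check the snapshot: blend the hit/miss averages by the hit ratio.
# Figures are the approximate per-hour values quoted above.

def blended_avg(hit_ms, miss_ms, hit_ratio):
    """Average response time weighted by cache hit ratio."""
    return hit_ms * hit_ratio + miss_ms * (1 - hit_ratio)

standard = blended_avg(hit_ms=80, miss_ms=250, hit_ratio=0.80)
edge = blended_avg(hit_ms=50, miss_ms=350, hit_ratio=0.93)

print(round(standard))  # 114 - close to the ~100ms overall figure
print(round(edge))      # 71
```

These are hourly snapshots with rounded inputs, so the blends won't match the overall averages exactly - they just show the hit ratio doing most of the work.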
I didn’t do any deep analysis of why this hour had a slower average cache-miss send time, but it could come down to a lot of things - the kinds of sites customers host on that CDN are typically different from the ones hosted on our regular CDN (they deploy more frequently, have more assets, etc.). I’d bet a much higher percentage of the files are ones we’d never cache, such as large zips or PDFs (we do not try to cache really large files on any of our CDNs except those handled by our large media service).
To answer your direct questions from the followup:
> @support_staff, what TTFB range would be considered acceptable for a cache miss on Netlify? I’ve seen the other community articles about TTFB and they don’t seem to answer that question.
We don’t really specify a “standard,” but in general we don’t try to debug deeply if it’s under around a second. I would expect it to be over a second for the very large files I described above, but if your files are all under, say, 3 MB, they will all be quite cacheable and should serve fairly quickly. I don’t have an exact number for the size at which we cut off caching.
If you saw a TTFB higher than a second on average, we’d be happy to look into it at any account level. If you’re on the high-performance network, you pay enough that we will look into it at lower numbers too - we just can’t afford to deeply debug slightly suboptimal behavior for someone who is paying us $0 or $19/mo, since that quite often turns out to have a cause like “your internet is slow; your site visitors on average see a total response time of 100ms”. We will take a look, but we will almost certainly ask for your help in demonstrating the problem, particularly if our logs show uniformly fast sends. In about 10% of the reports we see, a DNS misconfiguration or other config setting is causing the problem, and the situation vastly improves with a simple config change - so it always makes sense to ask if you see something you think should be better!
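If you want to gather that kind of evidence yourself, here's a minimal sketch (not an official Netlify tool) of measuring TTFB from the client side with the Python standard library; the URL is a placeholder to swap for your own site:

```python
# Measure time-to-first-byte as seen from this machine: the time from
# issuing the request until the first byte of the response body arrives.
import time
from urllib.request import urlopen

def ttfb_seconds(url):
    """Client-observed time from request to first response byte."""
    start = time.monotonic()
    with urlopen(url) as resp:  # blocks until the response headers arrive
        resp.read(1)            # then block until the first body byte arrives
        return time.monotonic() - start

# Example: print(f"TTFB: {ttfb_seconds('https://example.com/') * 1000:.0f} ms")
```

Note this includes DNS lookup and connection setup, which is exactly why a few of those reports turn out to be config issues rather than slow sends on our end.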
> At what point would you say it’s in the acceptable / supported range and decline to fix it? 400ms?
Our support team doesn’t do the fixin’; our network engineers make the call on what needs fixing, case by case, after we escalate to them. We would be unlikely to escalate anything under 400ms even for our VIP customers, though this, like everything, is open to debate based on circumstances.