We recommend under 10k redirects. Under 1k is a best practice.
That many redirects will ABSOLUTELY impact your service time on EVERY asset. It’s not about what’s cached or not - it’s about our need to parse all 25k redirect rules on every request to see whether any of them matches. We do cache where we can, but that parsing/matching is the time drain.
I looked at your redirects and I don’t see any easy wins (I was hoping to see a pattern like:
/a/b -> /x/b
/a/c -> /x/c
…that I could advise you on collapsing into a single rule, like the sketch below). Of course, I didn’t analyze all 25k redirects by hand, so you might want to make sure there’s nothing further you can optimize.
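For reference, here’s a minimal sketch of the kind of collapse I was looking for, assuming a Netlify-style `_redirects` file (the paths are made up):

```
# Instead of listing every child path individually...
#   /a/b   /x/b   301
#   /a/c   /x/c   301
# ...a single placeholder/splat rule can cover the whole prefix:
/a/*   /x/:splat   301
```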
If you can’t do that, perhaps you can split the site into a few different ones, using a workflow like this?
While it talks about combining multiple repos into one site, you can also use proxying the other way around: one repo serving multiple sites - and if you could “balance” the redirects across them in some way, that would help speed things up on each site.
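As a rough illustration, again assuming Netlify-style `_redirects` rules (the site name below is hypothetical), the main site could proxy a path prefix to a second site that carries that section’s content and its share of the redirects:

```
# On the main site: hand everything under /docs/ to a second site,
# so the /docs/* redirect rules can live there instead of here.
/docs/*   https://docs-section-example.netlify.app/docs/:splat   200
```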
Another potential optimization is using client-side redirects for some paths instead of the thousands of 301’s you have in your redirects file. If you’re doing 200 (rewrite/proxy) redirects, that isn’t an option - but for 301’s, you could use client-side JavaScript to do the redirect. It will slow down those requests a bit - two pageloads - but if you scope that implementation to the more rarely used paths, the good of the many (all site visitors) will be improved in performance, while the experience of the few (who visit the rare pages) would be impacted a slight bit (but maybe no net loss, since they’d no longer be hitting a slow TTFB?).
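If you go that route, here’s a minimal sketch of what the client-side redirect could look like on one of those rarely visited pages (all paths are hypothetical):

```html
<!-- In the <head> of the old, rarely visited page -->
<script>
  // Old path -> new location; only the paths you remove from the
  // server-side redirects file need an entry here.
  var redirects = {
    "/old/rare-page": "/new/rare-page",
    "/old/other-rare-page": "/new/other-rare-page"
  };
  var target = redirects[window.location.pathname];
  if (target) {
    // replace() avoids leaving the old URL in the back-button history
    window.location.replace(target);
  }
</script>
```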