ECONNRESET errors on *.netlify.app domains from GitHub Actions CI (started ~3 weeks ago)
Summary
We’re experiencing intermittent ECONNRESET errors when running Cypress E2E tests from GitHub Actions against our Netlify deploy previews. This started approximately 3 weeks to one month ago and affects all *.netlify.app domains, but our custom domain works perfectly. Based on our investigation, this appears to be related to changes on Netlify’s platform rather than our application code.
Environment
- Site: `beamjobs.netlify.app`
- Affected domains:
  - `deploy-preview-*--beamjobs.netlify.app`
  - `staging--beamjobs.netlify.app`
  - `beamjobs.netlify.app`
- Working domain: `beamjobs.com` (custom domain, routes through Cloudflare)
- CI Environment: GitHub Actions
- Test Framework: Cypress E2E tests
The Problem
Tests fail with timeout errors due to connection resets during page load:
```
CypressError: Timed out after waiting `60000ms` for your remote page to load.
Your page did not fire its `load` event within `60000ms`.
```

Underlying cause:

```
Error: connect ECONNRESET 18.208.88.157:443
errno: -104, code: 'ECONNRESET', syscall: 'connect'
```
This surfaces in the Cypress E2E spec runner when it attempts to download a chunked JS file:
```
cypress:server:request received an error making http request { timeout: undefined, retryIntervals: [], url: 'https://deploy-preview-1576--beamjobs.netlify.app/800.601656c0525ceba0.js', requestId: 'request130', retryOnNetworkFailure: true, retryOnStatusCodeFailure: false, delaysRemaining: [], err: Error: read ECONNRESET at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20) { errno: -104, code: 'ECONNRESET', syscall: 'read' } } +28ms
cypress:server:request exhausted all attempts retrying request { timeout: undefined, retryIntervals: [], url: 'https://deploy-preview-1576--beamjobs.netlify.app/800.601656c0525ceba0.js', requestId: 'request130', retryOnNetworkFailure: true, retryOnStatusCodeFailure: false, delaysRemaining: [], err: Error: read ECONNRESET at TLSWrap.onStreamRead (node:internal/stream_base_commons:217:20) { errno: -104, code: 'ECONNRESET', syscall: 'read' } } +1ms
```
The resets happen both at TCP connection establishment (`connect ECONNRESET`) and mid-response (`read ECONNRESET`), for various resources (fonts, JS chunks). Which resources fail is not consistent, but failures occur on every test run.
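To show this is not Cypress-specific, a short Node script along the following lines (a sketch, not from our logs; it assumes Node 18+ for the built-in `fetch`, the attempt count is arbitrary, and the URL is the chunk from the debug output above) can replay the request from the same runner and tally the resets:

```ts
// repro-econnreset.ts — sketch: replay the failing chunk request outside Cypress.
// Assumes Node 18+ (built-in fetch); 50 attempts is an arbitrary sample size.
const url =
  'https://deploy-preview-1576--beamjobs.netlify.app/800.601656c0525ceba0.js';

async function main(): Promise<void> {
  let ok = 0;
  let resets = 0;
  let other = 0;

  for (let i = 0; i < 50; i++) {
    try {
      const res = await fetch(url);
      await res.arrayBuffer(); // drain the body so the read path is exercised too
      ok++;
    } catch (err: any) {
      // fetch wraps network errors; the underlying code lives on err.cause
      const code = err?.cause?.code ?? err?.code ?? 'unknown';
      if (code === 'ECONNRESET') resets++;
      else other++;
    }
  }

  console.log(`ok=${ok} econnreset=${resets} other=${other}`);
}

main();
```

If the pattern we see in Cypress holds, this should report a nonzero `econnreset` count when run from a GitHub Actions runner.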
Evidence We’ve Gathered
- Timeline: Tests worked reliably as recently as mid-October; they now fail consistently
- Old deploys now fail: We tested against deploy previews created months ago (early September) that previously passed; they now exhibit the same `ECONNRESET` errors
- Not our code: This rules out application changes, dependency updates, or build configuration
- Environment isolation: Tests pass 100% when run against the Angular dev server in the same GitHub Actions environment
- Domain-specific:
  - Fails on all `*.netlify.app` subdomains
  - Works perfectly on `beamjobs.com` (custom domain)
- Infrastructure differences:
  - Custom domain routes through Cloudflare (`server: cloudflare`)
  - Netlify subdomains hit the Netlify edge directly (`server: Netlify`)

```
❯ curl -I https://beamjobs.com
HTTP/2 301
date: Tue, 04 Nov 2025 21:15:33 GMT
content-type: text/plain; charset=utf-8
location: https://www.beamjobs.com/
server: cloudflare
strict-transport-security: max-age=31536000
x-nf-request-id: 01K98BK80EEGX4ZWBCW44TYP7C
cf-cache-status: DYNAMIC
report-to: {"group":"cf-nel","max_age":604800,"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v4?s=244AQvDcVyN4p5GosGXbUQNHsLJianEPxDHORKpyFSa%2FCc6g%2FvyLZLSQ1lbM32u95iZxtjW2973n31e70%2BjmW2s7ZnjGzaXIVgvx"}]}
nel: {"report_to":"cf-nel","success_fraction":0.0,"max_age":604800}
cf-ray: 9997209f2e4c79d9-ORD

❯ curl -I https://beamjobs.netlify.app
HTTP/2 200
accept-ranges: bytes
age: 0
cache-control: public,max-age=0,must-revalidate
cache-status: "Netlify Edge"; fwd=miss
content-type: text/html; charset=UTF-8
date: Tue, 04 Nov 2025 21:15:46 GMT
etag: "00873476b8906d3e09f7cdd265c34e6e-ssl"
server: Netlify
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-frame-options: DENY
x-nf-request-id: 01K98BKM9R7HZ8534Q0B827JX0
content-length: 791779

❯ curl -I https://deploy-preview-1550--beamjobs.netlify.app
HTTP/2 200
accept-ranges: bytes
age: 1
cache-control: public,max-age=0,must-revalidate
cache-status: "Netlify Edge"; fwd=miss
content-type: text/html; charset=UTF-8
date: Tue, 04 Nov 2025 22:34:55 GMT
etag: "32ede13a0fa0564a7fd02b3e3b79e919-ssl"
server: Netlify
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-frame-options: DENY
x-nf-request-id: 01K98G4HEMR9S75VD5SDDQ1KE6
x-robots-tag: noindex
content-length: 784711
```
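To put numbers on the domain-specific behaviour from inside the same GitHub Actions job, a comparison sketch like the one below could be run against both domains (this is illustrative only, not something from our logs; it assumes Node 18+ `fetch` and an arbitrary 25 attempts per domain):

```ts
// compare-domains.ts — sketch: tally connection failures per domain from the CI runner.
const targets = [
  'https://beamjobs.com',         // custom domain, served via Cloudflare
  'https://beamjobs.netlify.app', // Netlify edge directly
];

async function probe(url: string, attempts = 25): Promise<void> {
  let ok = 0;
  const errors: Record<string, number> = {};

  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, { redirect: 'follow' });
      await res.arrayBuffer(); // read the full body, as a real page load would
      ok++;
    } catch (err: any) {
      const code = err?.cause?.code ?? 'unknown';
      errors[code] = (errors[code] ?? 0) + 1;
    }
  }

  console.log(url, { ok, errors });
}

async function main(): Promise<void> {
  for (const url of targets) {
    await probe(url);
  }
}

main();
```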
What We’ve Ruled Out
- Our application code (old commits which passed GHA CI in September now fail)
- GitHub Actions network issues (the local dev server works in the same environment)
- Parallel test execution (fails even with sequential execution)
- Build/deployment issues (old deploy previews built and passing GHA CI in September now fail)
Our Hypothesis
Based on the evidence, we believe Netlify may have implemented changes to *.netlify.app subdomain handling around October 2025, possibly:
- Rate limiting for CI/bot traffic
- Connection throttling for certain IP ranges
- Bot detection that's flagging GitHub Actions IPs
The fact that our custom domain (routing through Cloudflare) works perfectly while direct Netlify connections fail strongly suggests the issue is on Netlify’s platform rather than our application or CI environment.
Questions
- Were there any changes to rate limiting, connection handling, or bot detection for `*.netlify.app` domains in October 2025?
- Are GitHub Actions IP ranges treated differently than traffic from CDN providers like Cloudflare?
- Are there different connection limits for `*.netlify.app` subdomains vs. custom domains?
- Is there a way to allowlist our site or GitHub Actions IPs to resolve this?
What We Need
We need to run our E2E tests reliably against deploy previews in CI. This worked perfectly for months and is now broken. Any guidance on:
- What changed on Netlify's end
- Whether this is expected behavior
- How to resolve or work around the issue (the only client-side stopgap we've considered so far is sketched below)
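That stopgap is enabling Cypress test retries in run mode, as in the sketch below (the `CYPRESS_BASE_URL` environment variable is our assumption about how CI would pass in the preview URL). Retries only rerun specs that failed on a reset, so this masks the problem rather than fixes it:

```ts
// cypress.config.ts — sketch of a stopgap, not a fix: rerun failed specs in CI.
import { defineConfig } from 'cypress';

export default defineConfig({
  e2e: {
    // Assumption: our CI workflow exports the deploy preview URL in this variable.
    baseUrl: process.env.CYPRESS_BASE_URL ?? null,
    retries: {
      runMode: 2,  // retry a failing test up to twice during `cypress run` (CI)
      openMode: 0, // no retries in interactive `cypress open`
    },
  },
});
```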
Would greatly appreciate any insights from the Netlify team or community members who may have experienced similar issues.
Thanks for your help!