Netlify incorrectly tampering with requests from scheduled functions?

I am testing a scheduled function that simply writes a file to an S3 bucket. I am using the latest version of the official AWS SDK.

When I invoke the function locally with netlify dev and netlify functions:invoke test, it works perfectly.

However, in production, I always get the following error message:

May 16, 09:40:04 PM: INIT_START Runtime Version: nodejs:20.v22	Runtime Version ARN: arn:aws:lambda:us-east-2::runtime:b41f958332022b46328145bcd27dce02539c950ccbf6fde05884f8106362b755
May 16, 09:42:02 PM: 2c0cb6d2 INFO   Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at ClientRequest.setHeader (node:_http_outgoing:659:11)
    at addHeaders (file:///var/task/___netlify-bootstrap.mjs:1890:13)
    at patchedRequest (file:///var/task/___netlify-bootstrap.mjs:1916:7)
    at /var/runtime/node_modules/@aws-sdk/node_modules/@smithy/node-http-handler/dist-cjs/index.js:233:19
    at new Promise (<anonymous>)
    at _NodeHttpHandler.handle (/var/runtime/node_modules/@aws-sdk/node_modules/@smithy/node-http-handler/dist-cjs/index.js:189:12)
    at async /var/runtime/node_modules/@aws-sdk/middleware-flexible-checksums/dist-cjs/index.js:257:18
    at async /var/runtime/node_modules/@aws-sdk/node_modules/@smithy/middleware-serde/dist-cjs/index.js:33:24 {
  code: 'ERR_HTTP_HEADERS_SENT',
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
May 16, 09:42:02 PM: 2c0cb6d2 Duration: 158.21 ms	Memory Usage: 26 MB

The relevant part of my package.json:

  "dependencies": {
    "@aws-sdk/client-s3": "^3.577.0"
  }

Version 3.577.0 is the latest at the time of writing. I have tried both this version and an earlier one.

The relevant part of my netlify.toml:

[functions."test"]
  schedule = "* */4 * * *"
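
As an aside, the schedule can apparently also be declared from the function file itself instead of netlify.toml. A minimal sketch, assuming Netlify's exported config object for scheduled functions (the file path here is illustrative):

// netlify/functions/test.mjs (illustrative path)
export default async () => {
  // ...the S3 upload shown further down...
}

// Rough equivalent of the netlify.toml schedule above; Netlify reads this
// exported config to decide when to invoke the function.
export const config = {
  schedule: '* */4 * * *',
}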

I have tried both Node.js version 18.x and 20.x in the Netlify web UI.

The code of my function:

import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'

export default async () => {
  const { MY_AWS_ACCESS_KEY_ID, MY_AWS_SECRET_ACCESS_KEY, MY_AWS_REGION } =
    process.env

  const s3 = new S3Client({
    region: MY_AWS_REGION,
    credentials: {
      accessKeyId: MY_AWS_ACCESS_KEY_ID,
      secretAccessKey: MY_AWS_SECRET_ACCESS_KEY,
    },
  })

  const command = new PutObjectCommand({
    Body: 'Hey!',
    Bucket: 'boletinde',
    Key: '/test.txt',
  })

  let response

  try {
    response = await s3.send(command)
  } catch (error) {
    console.log(error)
  }
}

The environment variables with my AWS credentials are correctly configured. My AWS region is eu-south-2.

My site is deployed at boletin.de.netlify.app (boletin.de). It builds successfully and the function is triggered as per the schedule.

I know that scheduled functions are in beta.

Does anyone know what could be going wrong?

From what you’ve provided, I only know what you likely already know:
something in the function is trying to send a response after already having sent one.

However, I don’t see obvious evidence of that in the sample code of your function.

Thanks for your response, @nathanmartin!

Actually, there would be no issue with sending multiple s3.send() requests in succession, for example to upload several files to S3 from the same function.
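
For instance, something like this would be a perfectly normal pattern (a rough sketch only; the bucket name, keys and region handling are placeholders):

import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: process.env.MY_AWS_REGION })

// Several uploads from the same invocation: each s3.send() is an independent
// HTTP request that the SDK builds and sends on its own.
for (const [key, body] of [
  ['test/a.txt', 'first file'],
  ['test/b.txt', 'second file'],
]) {
  await s3.send(new PutObjectCommand({ Bucket: 'my-bucket', Key: key, Body: body }))
}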

To me, this doesn’t seem to be an error with the AWS SDK, which is a mature library maintained by Amazon. Nor does it seem to be an issue with my code, as it’s simple and works in my local environment. I might be wrong, but I would say it’s a problem with Netlify’s runtime environment.

The call stack in the error trace shows how the execution flow is passed to @aws-sdk, but then it is intercepted by the Netlify runtime environment, which seems to patch the request (patchedRequest) and add new headers (addHeaders), all from /var/task/___netlify-bootstrap.mjs:

    at addHeaders (file:///var/task/___netlify-bootstrap.mjs:1890:13)
    at patchedRequest (file:///var/task/___netlify-bootstrap.mjs:1916:7)

And that’s where the execution fails and stops.

The request to S3 is managed entirely by the AWS SDK, which sets the request headers and then immediately sends the body. Something afterwards appears to try to set additional headers. There is nothing in my code, nor could there be, that causes this interference.
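
The Node.js behaviour behind the error is easy to demonstrate in isolation. A minimal sketch, with no Netlify or AWS involvement, just to show when an outgoing request throws ERR_HTTP_HEADERS_SENT:

import http from 'node:http'

const server = http.createServer((request, response) => response.end('ok'))

server.listen(0, () => {
  const req = http.request({ port: server.address().port, agent: false })

  // Flush the headers onto the wire, as effectively happens once the request
  // body starts being written.
  req.flushHeaders()

  try {
    // Any later attempt to set a header fails with the same error code.
    req.setHeader('x-added-too-late', '1')
  } catch (error) {
    console.log(error.code) // 'ERR_HTTP_HEADERS_SENT'
  }

  req.on('response', (res) => {
    res.resume()
    res.on('end', () => server.close())
  })
  req.end()
})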

That’s why I think there is some kind of issue in Netlify’s runtime environment.

@JaimeObregon I was only addressing your open-ended question, as in my experience ERR_HTTP_HEADERS_SENT means precisely what it says:

Cannot set headers after they are sent to the client

You’ll see as much if you google around.

But it sounds like you’re speaking from a position of authority so you likely know more than I do.

Note: I don’t work for Netlify.

This is currently being investigated, as a few more folks have also reported this issue.

Thanks for the update, @hrishikesh! Please, let me know if you’d like me to set up a minimal test case.

I have exactly the same issue with Next.js server actions: same error, also with a PutObject command from the AWS SDK. However, a GetObject command works without issues.

We’ve worked on a fix and we’ll be rolling it out in the coming week.

I have the same issue, also with an AWS put command.

@hrishikesh would it be possible to update the thread if you know the specific day this might land? Many thanks!! :pray:

I rolled back to a previous build for now - S3 works fine there.

Unfortunately, my product is not very useful to folks without the S3 uploads, and while I am on the previous build I can’t update the code at all.

Right now I’m trying to figure out whether I should move this particular endpoint off Netlify, especially if this fix is a major one and might take longer than a few days on your side.

I have the same issue.

I’m saving JSON, so I’ve just set up a Lambda endpoint in the meantime. If you save binary files, you might want to consider pre-signed URLs for the upload, which would be a better solution overall anyway because you save on bandwidth, or uploading to S3 via API Gateway. This post seemed most useful to me; you might want to check it out if you decide to move off: Guide: Upload Files to S3 via API Gateway
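
If it helps, generating such a pre-signed upload URL looks roughly like this (a sketch only; it assumes the @aws-sdk/s3-request-presigner package, and the bucket, key and expiry are placeholders):

import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const s3 = new S3Client({ region: process.env.MY_AWS_REGION })

// The client can PUT the file straight to this URL, so the upload never
// passes through the function (or its bandwidth) at all.
const url = await getSignedUrl(
  s3,
  new PutObjectCommand({ Bucket: 'my-bucket', Key: 'uploads/test.txt' }),
  { expiresIn: 3600 } // seconds
)

console.log(url)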

Ah, much gratitude! I have a weird situation where I can’t use signed URLs, but switching to API Gateway worked perfectly. Thank you :slight_smile:

Thank you for your feedback, @hrishikesh! It’s been 10 days since the rollout of the fix began, but I’m still experiencing the issue. Could you please confirm if the deployment is complete or provide an ETA?

I appreciate the support from you and your team!

Hi all,

Sorry for the delay. It looks like a fix has been rolled out. Could you please give it another try and let us know if you still encounter any issues?

Problem solved!

Thank you, @Melvin and team! I can confirm that the issue is now fixed, and the scheduled function has been working properly for the past few hours :smiley:.

Thanks to everyone for listening and resolving it.

Thanks so much for confirming, glad to hear it’s working!