Send response before stopping function

Hi~

I want to use Netlify Functions to handle Slack commands, but these functions will need to perform database reads and writes and will most likely end up exceeding Slack’s 3000ms limit for giving an answer (which triggers an operation_timeout on the Slack side).

So in order to do that I need to send Slack an empty 200 before the function process ends, which in practice looks like this:

exports.handler = async (event, context) => {
  performHeavyOperationAsynchronously(); // database read write and such

  // don't wait for the previous call to finish; just fire the 200 answer
  return { statusCode: 200, body: "" }; // send empty 200
}

So while this code runs nicely in development (using netlify dev), in production on Netlify it appears not to launch / keep processing what’s inside performHeavyOperationAsynchronously() (perhaps because there’s some sort of process.exit() after the handler returns its answer).

Any idea how to fix that? (or if this is even possible?) / what’s the cause of the process not continuing to run the async function after the handler returns?

Thanks~!

Hi, @lihbr and welcome to our Netlify community site.

Does the 200 response occur but the other processing does not complete? If so, it is possible it is hitting the default 10 second timeout documented here:

https://docs.netlify.com/functions/overview/#default-deployment-options

Quoting:

By default, all serverless functions are deployed with:

  • us-east-1 AWS Lambda region
  • 1024MB of memory
  • 10 second execution limit

If the Functions plan for the site is upgraded to Level 1 (or higher) we can increase this timeout to a maximum value of 28 seconds.

Please let us know if there are other questions and/or if you would like the timeout increased.

Hi @luke,

Yes, indeed the 200 response occurs, but the processing that should continue after that answer is not performed. I do not think this has anything to do with the 10 second timeout, though, as what’s behind performHeavyOperationAsynchronously() so far is just a ping back to Slack using node-fetch (fire and forget style), which lasts less than a second when running the same function locally with netlify dev.

Hi again,

Here’s a more concrete example of “fire and forget” async code that works locally with netlify dev but does not in production on Netlify (my site is this one if you want to / can check: pre-renne-2019-st4ging, with the test.js function), hope that’ll help:

require("dotenv").config();
const fetch = require("node-fetch");

exports.handler = async (event, context) => {
  fetch("https://slack.com/api/chat.postMessage", {
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.SLACK_BOT_OAUTH_TOKEN}`
    },
    method: "POST",
    body: JSON.stringify({
      channel: "CQ21G28P5",
      text: "hey from netlify functions"
    })
  });

  return { statusCode: 200, body: "" };
};

Basically the returned 200 works, but the previously launched async code does not when running in production :thinking:

Hi @lihbr, you need to use the await keyword on your fetch call in order for JS to wait for it to resolve before moving on with the rest of the code.

Hi @futuregerald, thank you for your answer.

I know that if I use await the code will wait for that async load to be performed, but that’s not my point here. I want to fire that async load (in my example a fetch, but it could be anything else), forget about it, and send the response back to the client directly (without waiting for that async load to end; it should still be performed in the background after the response was sent to the client, while remaining under the 10s limit).

As pointed out, my example here works perfectly fine with netlify dev when developing but does not once in production~

Sorry, I didn’t understand your use-case before. The fetch call should be sent but the function won’t wait for it; that’s what you intend. Can you try removing the async keyword on the function so that it doesn’t return a promise, and see if that does what you intend?

Ok, so apparently it does work when done the callback way (see code below). Kinda weird though that the return way does not work in production, as it should be the same? If you have any clue why it changes something, let me know. Thanks anyway :slight_smile:

require("dotenv").config();
const fetch = require("node-fetch");

exports.handler = (event, context, callback) => {
  fetch("https://slack.com/api/chat.postMessage", {
    headers: {
      "content-type": "application/json",
      authorization: `Bearer ${process.env.SLACK_BOT_OAUTH_TOKEN}`
    },
    method: "POST",
    body: JSON.stringify({
      channel: "CQ21G28P5",
      text: "hey from netlify functions"
    })
  });

  callback(null, { statusCode: 200, body: "" });
};

As far as I can tell, the ‘return’ syntax is meant only for async lambda functions, and if you don’t intend to return a promise then you’ll want to use a callback instead. In any case, I’m glad you found a solution that works for you.
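
For anyone comparing the two, the alternative handler shapes being discussed look roughly like this (minimal sketches, with empty bodies just for illustration):

// async format: the handler returns (a promise of) the response object
exports.handler = async (event, context) => {
  return { statusCode: 200, body: "" };
};

// callback format: the handler hands the response to the callback instead
exports.handler = (event, context, callback) => {
  callback(null, { statusCode: 200, body: "" });
};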

Hi @Dennis, I know I am late to the party… Can’t believe it’s been over a year already, but I wanted to follow up now, considering that the documentation recommends against using callbacks (as seen in your post here).

The documentation and the forum post both recommend using async, but as far as I understood, the use case

  1. Return 200 response

  2. Continue executing code within the 10s timeframe and the memory limit

only works with callbacks?

Will callbacks be deprecated anytime soon, or is it safe to use them for now?

Thank you!

No, the async format supports the same things as the callback format. The only difference is the former is asynchronous. So, those two points you mention are supported in both formats.

That said, while we recommend the async format, the callback format should continue to work as long as AWS Lambda supports it.
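
For reference, here’s a minimal sketch of the async format used elsewhere in this thread, with the side call awaited so that it reliably completes in production before the response goes out (the endpoint is a placeholder):

const fetch = require("node-fetch");

exports.handler = async (event, context) => {
  // Awaiting keeps the handler alive until the request completes,
  // at the cost of delaying the response by the round-trip time.
  await fetch("https://example.com/log", {
    method: "POST",
    body: JSON.stringify({ text: "hey from netlify functions" })
  });

  return { statusCode: 200, body: "" };
};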


I’m not sure I can completely agree on this one. The reason I asked about callbacks and their support is that, with asynchronous functions, there is no way to return the response to the client and then continue to run a task on the backend (within both the memory and time limits).

Case in point:

Fire a POST request. We don’t care about the outcome, however we want to make sure it goes through. There are two ways to go about it. If we use await to get the response of the POST request, we are delaying the response to the user. If we don’t await the response from the request, one of two things happens:

When using netlify dev:

  • The returned response is sent back (1000ms).
  • The request executes correctly; the function finishes running in 3000ms.

When in production:

  • The returned response is sent back (1000ms).
  • The function ends immediately; the request never gets sent out.

Therefore, the suggestion above was to use callbacks, so as not to terminate the function immediately on returning the promise, as happens with the async version.

This is the exact issue that I ran into too: netlify dev works as expected with the async version, however actual production doesn’t :confused:

Hey @julianengel :wave:t2:

Cross-tagging this post with another that tagged this one RE: Functions processing after returning / sending back a response.


Hi @jonsully!

Thank you so much for linking to that, this answers my question completely! It’s exactly what I was trying to achieve, which it seems is not possible, as Lambdas are not designed this way.

However, one small comment on your answer over there. The workaround is exactly what I am trying to do as suggested; however, I wasn’t using netlify background functions as they weren’t available when we did our infrastructure design.

Instead:

The Netlify function processes the request, connects to the database and does whatever it does. Once it’s finished, just before returning its reply to the client, it was meant to fire a fire-and-forget request to our logger (an internal tool running on a full server in AWS EC2).

I don’t care what the reply from the logger is (as you said, netlify background functions wouldn’t return it either).
The problem I ran into is that there was no way to avoid waiting for the “acknowledgment” that the logger had received the reply.
When running Netlify Dev, the function returned normally, and then 1 second later we got the confirmation that the non-awaited code had successfully fired off to the logger.
Locally, this worked like a charm.

The problems started to appear in production, where the server would log that the fire-and-forget function had started and completed, but we never actually received any trace on the logger, and the function was killed.

The problem at hand is that awaiting the reply adds around 700-800 ms, which significantly slows down the UI experience for the user.

The question at hand: when firing a request to the background function (when using Netlify background functions), could this time to initial response be decreased?

Hey @julianengel :wave:t2:

I wasn’t using netlify background functions as they weren’t available when we did our infrastructure design.

Totally understand :rofl: yay for new things

The problem I ran into is that there was no way to avoid waiting for the “acknowledgment” that the logger had received the reply.

Yeah, and that’s fundamentally how it had to work. Understand that when you call the fetch() function from your synchronous, running JS inside a Lambda function without calling await, all it’s actually doing is putting that function on the call stack. I’m presuming that Lambda runtimes are single-threaded and very much call-stack aware… so once your main stack concluded, the runtime axed the session without running any of the other functions on the stack, i.e. your async function would simply never run.
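
In other words, the failure mode looks roughly like this (a sketch with a placeholder endpoint):

const fetch = require("node-fetch");

exports.handler = async (event) => {
  // This only schedules the request; nothing is guaranteed to have been sent yet.
  fetch("https://example.com/log", { method: "POST", body: "{}" });

  // Once the handler returns, the runtime can end the session,
  // so in production the pending fetch above may never actually go out.
  return { statusCode: 200, body: "" };
};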

So you had to call out synchronously to your ‘other service’ (in this case your logger, which in turn may have had to chain off other HTTP requests in synchronous format). Effectively you end up creating an HTTP call chain that has to complete fully across the chain before you can respond to your (presumed) web-browser-based user that kicked off the first request. Not the best :sweat_smile:

With the new Background Functions, you still have to await the fetch() / HTTP call from within your first Function – that’s a feature of Lambda we can’t avoid – but the Background Functions framework was built to respond fast. Super fast. All it does is receive your request and respond with a 200. Then it kicks off your background job. It was created to be a layer between “I want to do this thing” and “I’m doing this thing” so that you could establish a desire to do a thing without having to wait for the thing to get done :stuck_out_tongue:

So tl;dr: you’ll still need to await your call to the background function from your main function, but that await should respond way, way, way faster than your prior attempts which had to fully run the logger job before returning.
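
If it helps, here’s a rough sketch of that shape. Function names, URLs, and payloads are placeholders; the convention that matters is that a background function’s filename ends in -background, and the call to it is acknowledged almost immediately while the job keeps running on its own:

// functions/log-event-background.js (a background function; filename ends in -background)
const fetch = require("node-fetch");

exports.handler = async (event) => {
  // The slow logger call can take its time here; the caller has already been answered.
  await fetch("https://logger.example.com/trace", {
    method: "POST",
    body: event.body
  });
};

// functions/respond.js (the user-facing function)
exports.handler = async (event) => {
  // This await only covers the quick "job accepted" handshake,
  // not the logger round trip itself.
  await fetch("https://your-site.netlify.app/.netlify/functions/log-event-background", {
    method: "POST",
    body: JSON.stringify({ message: "something happened" })
  });

  return { statusCode: 200, body: "" };
};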

I hope that helps!


Jon

I might throw this in here too - the folks at RedwoodJS made repeater.dev. It’s a background job service similar to Netlify’s new Background Functions (in the sense that it responds quickly and then does the work, as described above), but it adds additional functionality like scheduled/repeating jobs, job status checking, etc.

Worth a look IMO if you’re looking for something in this space:

Hey Jon!

I second the ‘YAY for new things’!

Thank you so much for your detailed explanation and all the insight into the process. I’ll see what I can do to move everything over to background functions (shouldn’t be too tough, as everything runs on express routes anyways at this point).

Also, thank you for the link; this is something we actually implemented on our serverfull architecture, a type of boomerang service! It would have saved us quite some time, haha :slight_smile:

Thank you so much again, I really appreciate the help!
