API update a single file

I have hundreds of small HTML files, and I am trying to update a single file via the API.
I create a SHA1 of that single file:
```json
{
  "files": {
    "/changedFile001.html": "ee4f5f927382699870b3e9182f243f44850af8b7"
  }
}
```

and send it to: POST https://api.netlify.com/api/v1/sites/(siteId)/deploys
and get a deployId from the response,

then upload the new content with:
PUT https://api.netlify.com/api/v1/deploys/(deployId)/files/(path)

I have my site set to manual deploys.

And all my old files were removed. Is there an option to keep the old files as they are?

Hi @shrv,

I believe you need to send the hashes of all the files from the previous deploy along with the new file. You don't actually need to upload those files; just send their hashes.
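
For example, if the previous deploy also contained /index.html and /about.html, the digest for the new deploy would look something like this (those two paths and their hashes are made up purely for illustration):

```json
{
  "files": {
    "/index.html": "0bd8d49bbaf39ed0a43d7cec447d6f86aa2a3a86",
    "/about.html": "6d2e8a4a7b88b0f12a9f6e0c3f1f9e2a4b5c6d7e",
    "/changedFile001.html": "ee4f5f927382699870b3e9182f243f44850af8b7"
  }
}
```

Only /changedFile001.html then needs an actual upload; the other hashes already exist on Netlify's servers, so those files are carried over unchanged.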

OK, got it… but checking if there is a simpler API, something like filesUpdateDeploy:
files: { add: { "a": "sha…" }, remove: { "b": 1 }, update: { "z": "sha…" } }
and then I only send "a" and "z" in the upload,
given that I have 100,000s of files
and I am updating just 3 files: 1 add, 1 remove, 1 update.

No, there’s no easier API.

You could get a list of all your files from the deploys/(deployId)/files endpoint using a GET request, then modify that array as you see fit and send the updated list in the body of the POST request.
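
Here is a minimal sketch of that flow in TypeScript, assuming Node 18+ (built-in fetch) and that the GET response is an array of file objects exposing path and sha; NETLIFY_TOKEN, the IDs, and redeployWithOneChange are placeholder names, not an official client:

```typescript
const API = "https://api.netlify.com/api/v1";
const headers = {
  Authorization: `Bearer ${process.env.NETLIFY_TOKEN}`,
  "Content-Type": "application/json",
};

async function redeployWithOneChange(
  siteId: string,
  baseDeployId: string,
  changedPath: string, // e.g. "/changedFile001.html"
  changedSha: string   // SHA1 of the new file content
) {
  // 1. GET the file list of the deploy the new deploy is based on.
  const res = await fetch(`${API}/deploys/${baseDeployId}/files`, { headers });
  const files: { path: string; sha: string }[] = await res.json();

  // 2. Rebuild the path -> sha digest, swapping in the changed file's hash.
  const digest: Record<string, string> = {};
  for (const f of files) digest[f.path] = f.sha;
  digest[changedPath] = changedSha;

  // 3. POST the full digest to create the new deploy.
  const deployRes = await fetch(`${API}/sites/${siteId}/deploys`, {
    method: "POST",
    headers,
    body: JSON.stringify({ files: digest }),
  });
  return deployRes.json(); // contains the new deploy's id
}
```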

What if there are parallel deploys?

Could you elaborate on that? What exactly do you mean by parallel deploys?

  1. Thread1: reads deploys/(deployId)/files… 1000 files now… adds 2 files, 1000 + 2 = 1002
  2. Thread1: adds 2 files… a, b… not uploaded yet
  3. Thread2: another thread… reads deploys/(deployId)/files… 1000 files now, adds 1 file, 1000 + 1 = 1001
  4. Thread2: adds 1 file… c… not uploaded yet
  5. Thread1: uploads 2 files a, b
  6. Thread2: uploads 1 file c
     Deploy done
     Deploy done

// Invalid state… files lost: Thread2's digest was built from the original 1000-file list, so whichever deploy publishes last drops the other thread's files.
files: { add: { "a": "sha…" }, remove: { "b": 1 }, update: { "z": "sha…" } }
and then I only send "a" and "z" in the upload, given that I have 100,000s of files and I am updating just 3 files: 1 add, 1 remove, 1 update.

With this kind of approach, since we are not sending all the files, we cannot harm the files list.

The way our API works is: as soon as you send a POST request with the list of files for that deploy, the deploy is created and it enters the uploading state. There, it will wait indefinitely until all the files mentioned in the POST request are uploaded, or until the deploy is cancelled. So I don't think the above approach should have any impact.

Ideally, you should fetch the files from your latest deploy, store them in a variable, then send a POST request to create a new deploy with the new files, and then the uploading can go on however you wish. As long as the files with the same hash upload correctly, you're good to go.
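
As a sketch of that upload step (same Node/TypeScript assumptions as above, plus the assumption that the deploy creation response carries a required array listing the hashes Netlify does not have yet): hash each local file and PUT only the ones that are actually required.

```typescript
import { readFile } from "node:fs/promises";
import { createHash } from "node:crypto";

async function uploadRequired(
  deployId: string,
  required: string[],                 // hashes Netlify still needs
  localFiles: Record<string, string>  // digest path -> local file path
) {
  const needed = new Set(required);
  for (const [deployPath, localPath] of Object.entries(localFiles)) {
    const body = await readFile(localPath);
    const sha = createHash("sha1").update(body).digest("hex");
    if (!needed.has(sha)) continue; // already on Netlify's servers, skip

    await fetch(
      `https://api.netlify.com/api/v1/deploys/${deployId}/files${deployPath}`,
      {
        method: "PUT",
        headers: {
          Authorization: `Bearer ${process.env.NETLIFY_TOKEN}`,
          "Content-Type": "application/octet-stream",
        },
        body,
      }
    );
  }
}
```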

Do note that sooner or later you're going to run into issues with deploying via the API. With extremely large sites, the API is unable to respond within 28 seconds with the list of files in a deploy. In that case, you'll simply get a 500 error. I don't have an exact number of files that causes this issue, but I've seen it happen in the past. If you're able to fetch the list of files well within 28 seconds, you're safe. But if it's taking closer to that (say, 25 seconds), to be safe you might want to consider deploying via Git.


Maybe you all should consider adding this API.

Git hooks work in a similar way: tell it the list of changed files, then upload/read only if necessary.

Thanks

I can add it as a feature request, but I can't say if/when that will change. Given that it's a core API used by a lot of our services, I wouldn't expect it to change any time soon.

To be honest, it feels like the API already works like the Git hooks you describe: you send a list of files and only upload the changed/new files. I personally don't see a problem with this approach.

What I am saying is: let there be both approaches.
This would reduce the timeouts and the large-file-list hashing issue.

We are happy to record that feedback! We'll let you know if we make changes to the API in the future :smiley:

Hey @shrv,

You can find out about available positions and apply for one here:


Any (rough) estimated time to complete?

Hi, @shrv. We don't give official ETAs for when a feature will be available. We will announce it only once it is available, not before.

For this feature request, I don't think this will ever happen. It would take a complete redesign of how our service works, so it is very possible that this feature will never be added.

The design choice preventing this change is “atomic deploys”. We do not allow changing an existing deploy.

To make a change, a completely new deploy is required. To make a new deploy, all the files in that specific deploy must be defined.

The feature request you have is to be able to send the details for only a single file and to make a new deploy that way. This just isn’t possible with the current design.

The feature request you want sounds like this workflow:

  • Send the API a list of changes only.

To do this, you would need a copy of the existing deploy locally and be able to compare the checksums of all the files in the new deploy to those in the deploy that was already published. Without this, you wouldn't know what changed.

So even the feature request requires calculating all the checksums for all files in the deploy locally. The only difference is that you wouldn't send those checksums to Netlify, but you would still need to do the work to generate them.
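
To make that local work concrete, here is a sketch (same Node/TypeScript assumptions as before; digestDir is a hypothetical helper) of walking a build directory and computing the SHA1 of every file to produce the full digest:

```typescript
import { createHash } from "node:crypto";
import { readFile, readdir } from "node:fs/promises";
import { join } from "node:path";

// Build the "/path" -> SHA1 digest for every file under a directory.
// This is the work the feature request cannot avoid: without these
// checksums there is no way to know which files changed.
async function digestDir(root: string, prefix = ""): Promise<Record<string, string>> {
  const digest: Record<string, string> = {};
  const entries = await readdir(join(root, prefix), { withFileTypes: true });
  for (const entry of entries) {
    const rel = `${prefix}/${entry.name}`;
    if (entry.isDirectory()) {
      // Recurse into subdirectories and merge their digests.
      Object.assign(digest, await digestDir(root, rel));
    } else {
      const content = await readFile(join(root, rel));
      digest[rel] = createHash("sha1").update(content).digest("hex");
    }
  }
  return digest;
}
```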

Now, the feature request says “just assume the checksums from the previous deploy”. However, consider this edge case.

Say that someone else on your team makes a new deploy while you are checksumming a previous deploy. In this new deploy they change 10 files, and that is now the production site. Your local checksumming doesn't know about these file changes. You are working from an old deploy and you don't know it. You then complete your deploy and change the files you wanted to change; however, you are now also publishing the 10 changed files your teammate just uploaded, without knowing it. Uncertainty is being introduced because assumptions are being made, and those assumptions can change without you knowing about it.

This problem (a deploy you don't know about) is completely prevented by requiring the checksums of all files for every deploy. If you send all paths and checksums, there is zero uncertainty.

Because your feature request has this new unhandled edge case, it is unlikely to ever be implemented. The only way to avoid that edge case is to always send all checksums, and sending all checksums is the existing API workflow. So the only solution for the new edge case is to not make the change in the first place.

In other words, there is a bug baked into the new feature request, and the solution for that bug is the current API behavior. Again, it is very unlikely that this will ever change, for all the reasons listed above.

If there are other questions about this, please let us know.
