Limit the size of files uploaded with Signed URLs on Google Cloud Storage

Our users need to upload an image to change their account avatar or the banner used for an event. Leaving aside image cropping, resizing and format conversion (more than enough material for a separate post), we will focus on uploading files using Signed URLs on GCS.

There are several reasons to use Signed URLs for uploading files:

  • It's a secure feature managed by Google.
  • Keeping big file transfers out of our platform makes it faster to scale down: we no longer have to wait for long-running uploads to finish before killing a pod on the Kubernetes cluster (under the new system, 95% of transactions respond in less than 200ms).
  • We don't grant our users any permissions on our buckets; we just sign a URL that is valid for x minutes to upload a single file with a given Content-Type.
  • GCS is cheaper than the alternative: the CPU cycles, memory, GCE bandwidth and GCE storage needed to receive these uploads on our own servers.

The only downside: it's unholy hard to get right.

PUT requests with resumable uploads

There are two options to upload content with Signed URLs: POST requests (with HTML form encoding) and PUT requests, which can be executed as fetch requests from JavaScript code. The v4 signing documentation covers how to do the latter with resumable uploads: the library transparently executes an initial POST request from your server, which results in a URL that client code can use with a PUT fetch. There are some code examples here.
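
For reference, that session-initiation step could look roughly like the following server-side sketch, assuming the standard x-goog-resumable: start handshake that GCS documents for resumable uploads; the startResumableSession name, the 15-minute expiry and the use of java.net.http are illustrative choices, not part of the original examples:

import java.net.URL;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.HttpMethod;
import com.google.cloud.storage.Storage;

// Sketch: sign a POST URL that starts a resumable session, call it once from the
// server, and hand the resulting session URI to the client for the PUT upload.
String startResumableSession(Storage storage, BlobInfo blobInfo) throws Exception {
  URL signedPost = storage.signUrl(
      blobInfo,
      15, TimeUnit.MINUTES,
      Storage.SignUrlOption.httpMethod(HttpMethod.POST),
      Storage.SignUrlOption.withExtHeaders(Map.of("x-goog-resumable", "start")),
      Storage.SignUrlOption.withV4Signature());

  HttpRequest init = HttpRequest.newBuilder(signedPost.toURI())
      .header("x-goog-resumable", "start")
      .POST(HttpRequest.BodyPublishers.noBody())
      .build();
  HttpResponse<Void> response = HttpClient.newHttpClient()
      .send(init, HttpResponse.BodyHandlers.discarding());

  // GCS answers with the resumable session URI in the Location header
  return response.headers().firstValue("Location").orElseThrow();
}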

The main problem: you cannot set Content-Length on a Signed URL to restrict the generated link to a maximum file size, because Content-Length is ignored with PUT requests. We can instead use X-Upload-Content-Length with resumable uploads generated by the server (Java, in this example):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import com.google.cloud.storage.HttpMethod;
import com.google.cloud.storage.Storage;

// contentLength is the size in bytes declared by the client, validated server-side
Map<String, String> extensionHeaders = new HashMap<>();
extensionHeaders.put("X-Upload-Content-Length", String.valueOf(contentLength));
extensionHeaders.put("Content-Type", "application/octet-stream");

// Sign a V4 URL that is only valid for a PUT request carrying these exact headers
var url =
  storage.signUrl(
    blobInfo,
    15,
    TimeUnit.MINUTES,
    Storage.SignUrlOption.httpMethod(HttpMethod.PUT),
    Storage.SignUrlOption.withExtHeaders(extensionHeaders),
    Storage.SignUrlOption.withV4Signature()
  );

While doing this we can validate the contentLength value received from the client and reject the request if it's too big (a sketch of that check follows the client code below). The generated URL will require the uploaded file to have exactly that size. On the client side (TypeScript):

// The header values must match exactly what was used to sign the URL on the server
const response = await fetch(url, {
  method: 'PUT',
  headers: {
    'X-Upload-Content-Length': `${file.size}`,
    'Content-Type': 'application/octet-stream',
  },
  body: file,
});
if (!response.ok) {
  // fetch does not throw on HTTP errors, so check the status explicitly
  throw new Error(`Upload failed with status ${response.status}`);
}
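
On the server, the size check mentioned above runs before anything gets signed. A minimal sketch, assuming a hypothetical maxUploadBytes limit (the value and the exception type are placeholders):

// Hypothetical limit for avatar/banner images; tune to your own needs
long maxUploadBytes = 5L * 1024 * 1024; // 5 MiB

// Reject before signing anything, so oversized files never get a signed URL at all
if (contentLength <= 0 || contentLength > maxUploadBytes) {
  throw new IllegalArgumentException("Invalid upload size: " + contentLength + " bytes");
}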

We also need to configure a CORS policy on the bucket:

[
  {
    "origin": ["https://your-website.com"],
    "responseHeader": [
      "Content-Type",
      "Access-Control-Allow-Origin",
      "X-Upload-Content-Length",
      "X-Goog-Resumable"
    ],
    "method": ["PUT", "OPTIONS"],
    "maxAgeSeconds": 3600
  }
]
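
Assuming the policy above is saved locally as cors.json (the file name is just an example), it can be applied to the bucket with the gsutil tool from the Cloud SDK:

gsutil cors set cors.json gs://your-bucket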

...and that's it! Now we are uploading files straight to GCS with a limit on file size. A similar validation can be done for Content-Type or any other header, if needed; a short sketch of that check follows.
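
For example, a hypothetical allowlist check before signing (the accepted types here are just an assumption for avatar and banner images):

import java.util.Set;

// Hypothetical allowlist; reject anything we don't expect for avatars or banners
Set<String> allowedTypes = Set.of("image/png", "image/jpeg", "image/webp");
if (!allowedTypes.contains(contentType)) {
  throw new IllegalArgumentException("Unsupported Content-Type: " + contentType);
}
// Only after validation does the value go into the signed extension headers
extensionHeaders.put("Content-Type", contentType);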

At Koliseo we are putting together the next generation of tools for event management. If you are interested in what we are building and how, make sure to follow us on Twitter.