How to Handle File Uploads Securely in Cloudflare Workers
#cloudflare #workers #r2 #security #uploads
Secure file uploads are deceptively tricky. You need to defend against oversized bodies, malicious payloads, spoofed MIME types, and abuse, while keeping storage private and controllable. This guide shows two safe patterns for Cloudflare Workers, with R2 as the backing object store:
- Small uploads via a Worker endpoint that validates the body and writes it to R2.
- Large uploads via pre‑signed URLs so the browser sends data straight to R2, bypassing Worker body limits.
It also covers CORS, authentication, content-type sniffing, rate limiting, and post-upload scanning.
What you’ll need:
- A Cloudflare account with Workers and an R2 bucket
- Wrangler installed locally if you want to deploy
- Basic familiarity with JavaScript/TypeScript
Key principles
- Never trust client-declared metadata. Validate filename, size, MIME type, and magic bytes.
- Prefer direct-to-R2 for large files to avoid Worker request-size limits and memory pressure.
- Keep buckets private. Serve or download via authenticated Workers or signed URLs.
- Throttle abuse with Turnstile, rate limiting, and per-user quotas.
- Scan or quarantine uploads before exposing them to end users.
Setup: R2 binding and Worker entry
Example wrangler.toml snippet:
name = "secure-uploads"
main = "src/worker.ts"
compatibility_date = "2025-11-01"
r2_buckets = [
{ binding = "UPLOADS", bucket_name = "uploads" }
]
[vars]
MAX_UPLOAD_BYTES = "5242880" # 5 MiB example for Worker-handled uploads
ALLOWED_MIME = "image/png,image/jpeg,application/pdf"
ALLOW_ORIGIN = "https://your-frontend.example"
Small files: validate, write to R2, keep private
This endpoint accepts multipart/form-data, validates the file, and writes it to R2. It assumes you authenticate requests (for example, a session cookie or a JWT) and optionally protect your upload form with Turnstile.
src/worker.ts:
type Env = {
UPLOADS: R2Bucket
MAX_UPLOAD_BYTES: string
ALLOWED_MIME: string
ALLOW_ORIGIN: string
}
const ALLOWED = new Set(["image/png", "image/jpeg", "application/pdf"])
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url)
// Minimal CORS
if (request.method === "OPTIONS") {
return corsPreflight(request, env)
}
if (url.pathname === "/upload" && request.method === "POST") {
return uploadHandler(request, env)
}
if (url.pathname.startsWith("/files/") && request.method === "GET") {
// Private download route (auth required in real apps)
const key = url.pathname.replace("/files/", "")
return streamFromR2(key, env)
}
return new Response("Not found", { status: 404 })
}
}
function corsHeaders(env: Env) {
return {
"Access-Control-Allow-Origin": env.ALLOW_ORIGIN,
"Access-Control-Allow-Credentials": "true",
"Vary": "Origin",
}
}
function corsPreflight(request: Request, env: Env): Response {
const reqHeaders = request.headers.get("Access-Control-Request-Headers") || ""
const reqMethod = request.headers.get("Access-Control-Request-Method") || ""
return new Response(null, {
status: 204,
headers: {
...corsHeaders(env),
"Access-Control-Allow-Methods": "POST, GET, OPTIONS",
"Access-Control-Allow-Headers": reqHeaders,
"Access-Control-Max-Age": "600",
},
})
}
async function uploadHandler(request: Request, env: Env): Promise<Response> {
// TODO: replace with real auth
if (!isAuthorized(request)) {
return json({ error: "Unauthorized" }, 401, env)
}
const maxBytes = Number(env.MAX_UPLOAD_BYTES || "5242880")
// Reject oversized bodies early, before buffering the multipart body;
// allow some slack for multipart framing overhead
const contentLength = Number(request.headers.get("Content-Length") || "0")
if (contentLength > maxBytes + 16384) {
return json({ error: "Request body too large" }, 413, env)
}
// Parse multipart/form-data (note: this buffers the body in memory,
// which is acceptable for small, capped uploads)
const form = await request.formData()
const file = form.get("file")
if (!(file instanceof File)) {
return json({ error: "Missing file field" }, 400, env)
}
// Size checks
if (file.size === 0) return json({ error: "Empty file" }, 400, env)
if (file.size > maxBytes) return json({ error: "File too large" }, 413, env)
// Declared MIME may be blank or spoofed; record it, but rely on the
// magic-byte sniff below for the actual decision
const declaredType = (file.type || "").toLowerCase()
// Sniff magic bytes from file head
const head = new Uint8Array(await file.slice(0, 16).arrayBuffer())
const sniffed = sniffMime(head)
if (!sniffed || !ALLOWED.has(sniffed)) {
return json({ error: "Unsupported or unsafe file type" }, 415, env)
}
const ext = extFromMime(sniffed) || "bin"
const key = `quarantine/${crypto.randomUUID()}.${ext}`
// Optionally compute a digest for deduplication; avoid for large sizes
let sha256Hex: string | undefined
if (file.size <= 10 * 1024 * 1024) {
const buf = await file.arrayBuffer()
const digest = await crypto.subtle.digest("SHA-256", buf)
sha256Hex = [...new Uint8Array(digest)].map(b => b.toString(16).padStart(2, "0")).join("")
}
// Write to R2. request.formData() already buffered the body, so pass the
// File directly; R2 accepts Blob values with a known size.
await env.UPLOADS.put(key, file, {
httpMetadata: { contentType: sniffed },
customMetadata: {
originalName: safeBaseName(String(form.get("filename") || file.name || "")),
sha256: sha256Hex || "",
declaredType,
uploadedAt: new Date().toISOString(),
// Add userId/accountId after auth
},
})
// Return minimal info; do not expose public URLs
return json(
{
key,
contentType: sniffed,
size: file.size,
status: "queued_for_scanning",
},
201,
env
)
}
function isAuthorized(request: Request): boolean {
// Replace with real checks (JWT, session, mTLS, etc.)
const auth = request.headers.get("authorization") || ""
return auth.startsWith("Bearer ")
}
function sniffMime(head: Uint8Array): string | null {
// JPEG
if (head[0] === 0xff && head[1] === 0xd8 && head[2] === 0xff) return "image/jpeg"
// PNG
if (
head[0] === 0x89 &&
head[1] === 0x50 &&
head[2] === 0x4e &&
head[3] === 0x47 &&
head[4] === 0x0d &&
head[5] === 0x0a &&
head[6] === 0x1a &&
head[7] === 0x0a
)
return "image/png"
// PDF
if (head[0] === 0x25 && head[1] === 0x50 && head[2] === 0x44 && head[3] === 0x46) return "application/pdf"
return null
}
function extFromMime(m: string): string | null {
if (m === "image/png") return "png"
if (m === "image/jpeg") return "jpg"
if (m === "application/pdf") return "pdf"
return null
}
function safeBaseName(name: string): string {
return name.replace(/[^\w.-]+/g, "").slice(0, 128)
}
async function streamFromR2(key: string, env: Env): Promise<Response> {
// Replace with real auth/authorization
const obj = await env.UPLOADS.get(key)
if (!obj) return new Response("Not found", { status: 404 })
const headers = {
"Content-Type": obj.httpMetadata?.contentType || "application/octet-stream",
"Content-Length": obj.size.toString(),
// Force download and forbid MIME sniffing for untrusted content
"Content-Disposition": "attachment",
"X-Content-Type-Options": "nosniff",
...corsHeaders(env),
}
return new Response(obj.body, { headers })
}
function json(data: unknown, status: number, env: Env): Response {
return new Response(JSON.stringify(data), {
status,
headers: { "Content-Type": "application/json; charset=utf-8", ...corsHeaders(env) },
})
}
Notes
- R2 keys are placed under a quarantine/ prefix so you can keep them private until scanned.
- request.formData() buffers the body in memory, so this pattern suits only small uploads capped by MAX_UPLOAD_BYTES; use pre-signed URLs (below) for anything larger.
- MIME is validated by magic bytes, not just the declared type.
- Keep the bucket private; do not return a direct public URL.
Large files: direct-to-R2 with pre-signed URLs
For larger uploads, have your Worker mint a short-lived signed URL (or a presigned POST) so the browser uploads directly to R2’s S3-compatible endpoint. This avoids Worker body size limits and timeouts.
High-level flow:
- Client calls POST /upload-url with metadata (intended type, size).
- Worker authenticates, validates size/type, and returns a pre-signed URL plus any required fields.
- Client uploads straight to R2 using fetch or a form POST.
- Client notifies your app that the upload completed, or you rely on server-side verification (head object) and enqueue for scanning.
Minimal Worker code to create a pre-signed URL with the AWS SDK v3 presigner (a bundling note: keep dependencies small, and consider a separate signing service if bundle size matters):
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"
import { getSignedUrl } from "@aws-sdk/s3-request-presigner"
type Env = {
R2_ACCOUNT_ID: string
R2_ACCESS_KEY_ID: string
R2_SECRET_ACCESS_KEY: string
R2_BUCKET: string
ALLOW_ORIGIN: string
}
const s3ForR2 = (env: Env) =>
new S3Client({
region: "auto",
endpoint: `https://${env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: env.R2_ACCESS_KEY_ID,
secretAccessKey: env.R2_SECRET_ACCESS_KEY,
},
})
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const url = new URL(request.url)
if (request.method === "OPTIONS") {
// Preflight must allow the JSON Content-Type header, or the POST below fails
return new Response(null, {
status: 204,
headers: {
...cors(env),
"Access-Control-Allow-Methods": "POST, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type, Authorization",
},
})
}
if (url.pathname === "/upload-url" && request.method === "POST") {
const body = await request.json().catch(() => ({}))
const { contentType, size } = body as { contentType?: string; size?: number }
// TODO: authenticate the caller here, as in the small-file example
// Validate inputs and enforce quotas
if (!contentType || typeof size !== "number" || size <= 0 || size > 200 * 1024 * 1024) {
return json({ error: "Invalid parameters" }, 400, env)
}
// Whitelist types again
if (!["image/png", "image/jpeg", "application/pdf"].includes(contentType)) {
return json({ error: "Unsupported content type" }, 415, env)
}
const key = `quarantine/${crypto.randomUUID()}.${contentType.includes("png") ? "png" : contentType.includes("jpeg") ? "jpg" : "pdf"}`
const client = s3ForR2(env)
const cmd = new PutObjectCommand({
Bucket: env.R2_BUCKET,
Key: key,
ContentType: contentType,
// You can attach metadata here; do not put secrets in metadata
Metadata: { uploadedAt: new Date().toISOString() },
})
// Short expiry to reduce replay risk
const urlSigned = await getSignedUrl(client, cmd, { expiresIn: 60 })
// Note: a pre-signed PUT does not enforce `size`; verify after upload (see below)
return json({ url: urlSigned, key, contentType, maxBytes: size }, 200, env)
}
return new Response("Not found", { status: 404 })
}
}
function cors(env: Env) {
return { "Access-Control-Allow-Origin": env.ALLOW_ORIGIN, "Access-Control-Allow-Credentials": "true" }
}
function json(data: unknown, status: number, env: Env) {
return new Response(JSON.stringify(data), { status, headers: { "Content-Type": "application/json", ...cors(env) } })
}
Client-side usage:
- Request a URL:
const res = await fetch("/upload-url", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ contentType: file.type, size: file.size }),
})
const { url, key } = await res.json()
- Upload directly:
await fetch(url, {
method: "PUT",
headers: { "Content-Type": file.type },
body: file, // the File/Blob
})
- Notify your app (optional) so it can verify the stored object and enqueue scanning for key, as in the sketch below.
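Because a pre-signed PUT cannot enforce the size the client actually sends, verify the stored object server-side before trusting it. A minimal sketch, assuming the Env type is extended with the UPLOADS R2 binding and a SCANNER_QUEUE Queues producer binding, and that a hypothetical completion route calls it:
// Hypothetical completion handler: confirm the object matches what was
// authorized, then enqueue it for scanning.
async function completeHandler(key: string, env: Env): Promise<Response> {
  const head = await env.UPLOADS.head(key)
  if (!head) return json({ error: "Upload not found" }, 404, env)
  if (head.size > 200 * 1024 * 1024) {
    // Larger than the cap we signed for; discard it
    await env.UPLOADS.delete(key)
    return json({ error: "File too large" }, 413, env)
  }
  await env.SCANNER_QUEUE.send({ key, when: Date.now() })
  return json({ key, status: "queued_for_scanning" }, 202, env)
}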
Abuse prevention and quotas
- Authentication: Require a logged-in session or token on upload endpoints and URL-minting endpoints.
- Turnstile: Protect your upload UI with Turnstile to deter bots; a verification sketch follows this list.
- Rate limiting: Use Cloudflare’s rate limiting rules at the edge for coarse limits; within Workers, maintain per-user quotas with Durable Objects or KV-backed counters (also sketched below).
- Size caps: Enforce server-side maximums per user, per file, and per day.
- Names and paths: Do not trust user-provided filenames. Generate keys server-side.
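As a concrete example of the Turnstile check, the Worker can verify the token the client submits with the upload form against Cloudflare’s documented siteverify endpoint. A sketch, where TURNSTILE_SECRET is an assumed secret binding:
// Verify a Turnstile token server-side before accepting an upload.
async function verifyTurnstile(secret: string, token: string, ip?: string): Promise<boolean> {
  const body = new FormData()
  body.append("secret", secret) // value of the assumed TURNSTILE_SECRET binding
  body.append("response", token)
  if (ip) body.append("remoteip", ip)
  const res = await fetch("https://challenges.cloudflare.com/turnstile/v0/siteverify", {
    method: "POST",
    body,
  })
  const data = (await res.json()) as { success: boolean }
  return data.success
}
And a minimal KV-backed daily quota, assuming a KV namespace bound under the hypothetical name QUOTAS:
// Soft per-user daily quota. KV is eventually consistent, so treat this as
// best-effort; use a Durable Object when you need a strict counter.
async function underQuota(quotas: KVNamespace, userId: string, maxPerDay: number): Promise<boolean> {
  const key = `uploads:${userId}:${new Date().toISOString().slice(0, 10)}`
  const count = Number((await quotas.get(key)) || "0")
  if (count >= maxPerDay) return false
  await quotas.put(key, String(count + 1), { expirationTtl: 86400 })
  return true
}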
Post-upload scanning and quarantine
You generally cannot run antivirus engines inside Workers. A common pattern:
- Upload to a quarantine/ prefix in R2.
- Enqueue the object key to Cloudflare Queues from your Worker after upload.
- A consumer (another Worker or an off-Cloudflare service) fetches the object and scans it with your preferred malware scanning service.
- On success, move the object to a safe/ prefix or tag it with metadata scanned=true and allow it to be downloaded. On failure, delete it or keep it quarantined.
Example enqueue after upload:
// After successful env.UPLOADS.put(...). SCANNER_QUEUE is a Cloudflare
// Queues producer binding declared in wrangler.toml.
await env.SCANNER_QUEUE.send({ key, when: Date.now() })
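On the consumer side, a separate Worker drains the queue, scans each object, and promotes or deletes it. A sketch, assuming the UPLOADS binding and a scanWithYourService integration you supply; R2 has no rename, so promotion is copy-then-delete:
// Assumed integration point for your malware scanning service.
declare function scanWithYourService(data: ArrayBuffer): Promise<boolean>

type Env = { UPLOADS: R2Bucket }

export default {
  async queue(batch: MessageBatch<{ key: string }>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const obj = await env.UPLOADS.get(msg.body.key)
      if (!obj) {
        msg.ack() // already deleted or promoted; nothing to do
        continue
      }
      // Buffer once so we can both scan and rewrite (quarantined files are size-capped)
      const data = await obj.arrayBuffer()
      if (await scanWithYourService(data)) {
        const safeKey = msg.body.key.replace(/^quarantine\//, "safe/")
        await env.UPLOADS.put(safeKey, data, { httpMetadata: obj.httpMetadata })
      }
      // Delete the quarantined original whether it passed or failed
      await env.UPLOADS.delete(msg.body.key)
      msg.ack()
    }
  },
}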
Serving files safely
- Keep the bucket private. Serve via a Worker that authorizes each download and streams from R2.
- Set Content-Disposition to attachment for untrusted files to avoid in-browser execution.
- Set strict Content Security Policy on any page that renders user content.
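If you would rather hand out short-lived download links than proxy every byte through a Worker, the same presigner from the upload example can sign GET requests. A sketch reusing that example’s Env and s3ForR2 helper:
import { GetObjectCommand } from "@aws-sdk/client-s3"
import { getSignedUrl } from "@aws-sdk/s3-request-presigner"

// Mint a short-lived signed GET URL for a scanned object.
// ResponseContentDisposition makes browsers download rather than render it.
async function signedDownloadUrl(env: Env, key: string): Promise<string> {
  const cmd = new GetObjectCommand({
    Bucket: env.R2_BUCKET,
    Key: key,
    ResponseContentDisposition: "attachment",
  })
  return getSignedUrl(s3ForR2(env), cmd, { expiresIn: 300 })
}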
CORS and CSRF
- Set Access-Control-Allow-Origin to your exact frontend origin, not *. Avoid including credentials with wildcard origins.
- For same-site form posts, implement CSRF tokens or require same-origin requests with appropriate SameSite cookies.
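For cookie-authenticated endpoints, a cheap first line of defense is to reject cross-origin requests outright; pair it with CSRF tokens for defense in depth. A sketch:
// Reject requests whose Origin header does not match the configured frontend.
function isSameOrigin(request: Request, allowedOrigin: string): boolean {
  return request.headers.get("Origin") === allowedOrigin
}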
Logging and auditing
- Log who uploaded, when, size, type, and resulting key. Consider writing a row to D1 or KV with this metadata and your scanning status.
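For example, with a D1 binding (assumed here as DB, with an uploads table you create via a migration), the small-file handler could record each upload; userId is whatever your auth layer yields:
// Hypothetical audit insert; columns mirror the metadata gathered above.
await env.DB.prepare(
  `INSERT INTO uploads (key, user_id, size, content_type, sha256, status, created_at)
   VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)`
)
  .bind(key, userId, file.size, sniffed, sha256Hex ?? null, "queued_for_scanning", new Date().toISOString())
  .run()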
Checklist
- Validate size, type, and magic bytes before storage.
- Generate server-side keys; do not trust filenames.
- Use direct-to-R2 pre-signed URLs for large uploads.
- Keep R2 buckets private; serve through authorized Workers or signed URLs.
- Quarantine and scan before exposure.
- Add Turnstile and rate limiting to reduce abuse.
- Monitor and audit uploads.
With these patterns, your Cloudflare Workers app can accept user files confidently and at scale while minimizing risk and keeping performance high.