File Uploads in SaaS Products: Patterns and Pitfalls

File upload sounds simple. In production, it's one of the most failure-prone features in SaaS applications. Here's how to build it properly.

"Add file upload" appears on many feature lists. On the surface it seems straightforward. In practice, it involves more decisions than most founders expect: storage, access control, size limits, virus scanning, CDN delivery, cost management.

Here's a complete picture of how to build file uploads correctly in a Nuxt application.


Storage options

Option 1: Supabase Storage
If you're already on Supabase, this is the easiest choice. You get a file API, signed URLs for private access, and an admin UI for managing buckets. It integrates directly with Supabase Auth for access control.

Option 2: AWS S3 (or compatible)
S3 is the gold standard — it's extremely reliable, cheap at scale, and has the widest ecosystem support. Cloudflare R2 is S3-compatible with zero egress fees, making it cheaper for read-heavy use cases.

Option 3: Cloudinary or similar managed services
These add CDN delivery, image transformation, and video processing on top of storage. Worth the cost if your product handles user-uploaded media that needs resizing or format conversion.

What not to do: store files as binary blobs in your database. Databases are poor at serving large binary objects. Keep the file paths and metadata in your database; store the files themselves in object storage.
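To make that split concrete, here is a minimal sketch of the metadata row that lives in the database while the bytes live in object storage. The field names are illustrative, not a fixed schema:

```typescript
import { randomUUID } from 'node:crypto'

// Metadata record stored in the database; the actual bytes live in
// object storage under `key`.
interface FileRecord {
  id: string          // primary key
  userId: string      // owner, used for access-control checks
  key: string         // object-storage key, e.g. "uploads/<user>/<uuid>.png"
  contentType: string
  sizeBytes: number
  createdAt: Date
}

function buildFileRecord(
  userId: string,
  key: string,
  contentType: string,
  sizeBytes: number
): FileRecord {
  return {
    id: randomUUID(),
    userId,
    key,
    contentType,
    sizeBytes,
    createdAt: new Date()
  }
}
```

After a successful upload, insert a record like this so you can later list a user's files and verify ownership before serving them.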


The upload flow

There are two patterns for uploading from a browser:

Pattern A: Client → Server → Storage
The file goes through your server before reaching storage.

Browser → POST /api/upload → Your server → S3

Pros: You can validate, scan, and process files before they reach storage. Complete control.
Cons: Your server handles the bandwidth, and large files can be a problem.

Pattern B: Presigned URLs (direct upload)
Your server generates a presigned upload URL. The client uploads directly to storage.

Browser → GET /api/upload-url → Presigned URL
Browser → PUT presigned URL → S3 directly

Pros: Your server doesn't handle file bandwidth. Faster for large files.
Cons: Validation happens after the file is already in storage.

For most SaaS applications, Pattern B (presigned URLs) is better. Here's the implementation in Nuxt:

// server/api/upload-url.post.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import { randomUUID } from 'crypto'

const s3 = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!
  }
})

export default defineEventHandler(async (event) => {
  // Verify the user is authenticated
  const user = await requireAuth(event)

  const { filename, contentType } = await readBody(event)

  // Validate file type
  const allowedTypes = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf']
  if (!allowedTypes.includes(contentType)) {
    throw createError({ statusCode: 400, message: 'File type not allowed' })
  }

  // Never trust the client-supplied filename; keep only a sanitised extension
  const ext = filename.includes('.')
    ? filename.split('.').pop()!.replace(/[^a-zA-Z0-9]/g, '')
    : ''
  const key = `uploads/${user.id}/${randomUUID()}${ext ? `.${ext}` : ''}`

  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
    ContentType: contentType
    // Note: a presigned PUT URL cannot enforce a size range. For a hard
    // 5MB limit, use a presigned POST with a content-length-range
    // condition, or check the object's size after upload and delete
    // oversized files.
  })

  const uploadUrl = await getSignedUrl(s3, command, { expiresIn: 300 })

  return { uploadUrl, key }
})

Client-side upload composable:

// composables/useFileUpload.ts
export function useFileUpload() {
  const uploading = ref(false)
  const progress = ref(0)

  async function upload(file: File): Promise<string> {
    uploading.value = true
    progress.value = 0

    try {
      // Get presigned URL
      const { uploadUrl, key } = await $fetch('/api/upload-url', {
        method: 'POST',
        body: { filename: file.name, contentType: file.type }
      })

      // Upload directly to S3. XMLHttpRequest is used instead of fetch
      // because it exposes upload progress events.
      await new Promise<void>((resolve, reject) => {
        const xhr = new XMLHttpRequest()
        xhr.open('PUT', uploadUrl)
        xhr.setRequestHeader('Content-Type', file.type)
        xhr.upload.onprogress = (e) => {
          if (e.lengthComputable) {
            progress.value = Math.round((e.loaded / e.total) * 100)
          }
        }
        xhr.onload = () =>
          xhr.status >= 200 && xhr.status < 300
            ? resolve()
            : reject(new Error(`Upload failed with status ${xhr.status}`))
        xhr.onerror = () => reject(new Error('Upload failed'))
        xhr.send(file)
      })

      return key
    } finally {
      uploading.value = false
    }
  }

  return { upload, uploading, progress }
}

Security considerations

Validate file types server-side, not just client-side. Client-side validation is UX — it can be bypassed. Check the MIME type and ideally the file magic bytes on the server.
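A magic-bytes check can be sketched as a small lookup against well-known file headers: JPEG starts with FF D8 FF, PNG with 89 50 4E 47, PDF with 25 50 44 46 ("%PDF"), and WebP is a RIFF container ("RIFF" at offset 0, "WEBP" at offset 8). The function name is illustrative:

```typescript
// Well-known file signatures ("magic bytes") for the allowed types.
const SIGNATURES: Record<string, { offset: number; bytes: number[] }[]> = {
  'image/jpeg': [{ offset: 0, bytes: [0xff, 0xd8, 0xff] }],
  'image/png': [{ offset: 0, bytes: [0x89, 0x50, 0x4e, 0x47] }],
  'application/pdf': [{ offset: 0, bytes: [0x25, 0x50, 0x44, 0x46] }],
  'image/webp': [
    { offset: 0, bytes: [0x52, 0x49, 0x46, 0x46] }, // "RIFF"
    { offset: 8, bytes: [0x57, 0x45, 0x42, 0x50] }  // "WEBP"
  ]
}

// Returns true only if the file's leading bytes match the declared MIME type.
function matchesDeclaredType(buf: Uint8Array, contentType: string): boolean {
  const checks = SIGNATURES[contentType]
  if (!checks) return false // unknown type: reject rather than trust the claim
  return checks.every(({ offset, bytes }) =>
    bytes.every((b, i) => buf[offset + i] === b)
  )
}
```

Run this against the first few bytes of the file before accepting it; a renamed .exe claiming to be image/png fails the check even though its MIME header lies.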

Set size limits. Without limits, users can upload arbitrarily large files. Enforce this both in the presigned URL parameters and on your API route.

Never trust the filename. Generate a new UUID-based filename. User-provided filenames can contain path traversal attacks (../../etc/passwd).

Access control: Private user uploads should not be publicly accessible by URL. Use signed URLs with short expiry (15 minutes) to serve private content:

// server/api/file/[key].get.ts
export default defineEventHandler(async (event) => {
  const user = await requireAuth(event)
  const key = getRouterParam(event, 'key')

  // Verify this user owns the file
  const file = await db.files.findOne({ key, userId: user.id })
  if (!file) throw createError({ statusCode: 404 })

  // Generate a short-lived signed URL
  const url = await generateSignedDownloadUrl(key, 900) // 15 minutes
  return sendRedirect(event, url)
})

Handling upload UX

A few UX patterns that matter:

Progress indicator: For files over 1MB, show upload progress. The XMLHttpRequest progress event gives you percentage complete.

Drag and drop: Users expect to drag files. Use the dragover and drop events, and show a visual drop zone.

Instant preview: For image uploads, use URL.createObjectURL() to show a preview before the upload completes. The perceived wait time drops dramatically.

Error recovery: Uploads fail. Network drops. Files are too large. Each failure state needs a clear, actionable message and a way to retry.


The bits most products skip

Virus scanning (ClamAV or a managed service like Cloudflare's malware scanning) is worth adding for any platform where users upload documents to be shared with others.
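If you run ClamAV yourself, its clamd daemon accepts files over TCP via the INSTREAM command: the file is sent as chunks, each prefixed with its length as a 4-byte big-endian integer, terminated by a zero-length chunk. The sketch below only builds that framing; the surrounding socket handling and the `zINSTREAM\0` command itself are omitted:

```typescript
// Build clamd INSTREAM frames: <4-byte big-endian length><data>, repeated,
// terminated by a zero-length chunk. Connecting to clamd is left out.
function frameInstreamChunks(data: Uint8Array, chunkSize = 2048): Buffer[] {
  const frames: Buffer[] = []
  for (let offset = 0; offset < data.length; offset += chunkSize) {
    const chunk = data.subarray(offset, offset + chunkSize)
    const header = Buffer.alloc(4)
    header.writeUInt32BE(chunk.length, 0)
    frames.push(Buffer.concat([header, Buffer.from(chunk)]))
  }
  // A zero-length chunk marks end of stream
  frames.push(Buffer.alloc(4))
  return frames
}
```

A managed scanning service saves you this plumbing, but the protocol is simple enough that self-hosting clamd is viable once upload volume justifies it.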

Image optimisation (resizing, format conversion) should happen on upload for user-generated images, not on each request. Generate a set of sizes (thumbnail, medium, full) and store all three.
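Choosing the output dimensions for those variants is simple arithmetic: scale each target width while preserving aspect ratio, and never upscale. The widths below (150 and 800) are illustrative defaults, and the actual resizing would be done by an image library such as sharp:

```typescript
// Illustrative variant widths; tune these per product.
const VARIANT_WIDTHS = { thumbnail: 150, medium: 800 } as const

// Compute output dimensions for each stored variant, preserving aspect
// ratio and never upscaling beyond the original.
function variantDimensions(width: number, height: number) {
  const scale = (target: number) => {
    if (width <= target) return { width, height } // never upscale
    return { width: target, height: Math.round(height * (target / width)) }
  }
  return {
    thumbnail: scale(VARIANT_WIDTHS.thumbnail),
    medium: scale(VARIANT_WIDTHS.medium),
    full: { width, height }
  }
}
```

Generating all variants once at upload time trades a little storage for removing resize latency from every subsequent request.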

Need help building a production-ready file upload system? Let's talk →