Deploy Payload CMS with Next.js 16: Self-Hosted Guide
Why Vercel's serverless model breaks for Payload and a practical self-hosted deployment using Docker, Nginx, and…

If you're running Payload CMS with Next.js 16, you've probably wondered whether to deploy on Vercel or self-host. The generic "Vercel vs self-hosted" debate doesn't really apply here — Payload changes the calculus in ways that most deployment guides don't account for.
I've been deploying Payload applications for a while now, and the pattern is consistent: what works fine for a headless Next.js app starts creating real problems once Payload enters the picture. This guide walks through exactly why, and what a solid self-hosted setup looks like in practice.
Why Payload and Vercel's Serverless Model Conflict
Vercel runs your Next.js application as serverless functions. Each request spins up a function, handles the request, and the function is torn down. For stateless rendering this works well. For Payload, it creates a fundamental mismatch.
Payload initializes on startup. It connects to your database, builds its collections, registers hooks, and establishes the connection pool. On a persistent server, this happens once. On Vercel's serverless model, this initialization happens on every cold start — and cold starts happen more than you'd expect.
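The difference can be sketched with a generic memoized-initialization helper. This is illustrative only — `initPayloadLike` is a hypothetical stand-in for Payload's startup work, not its real API (Payload's own `getPayload` memoizes its instance in a similar way):

```typescript
// Illustrative sketch: a persistent server pays initialization cost once.
// initPayloadLike stands in for Payload's startup work (DB pool,
// collections, hooks); caching the promise means every caller after
// the first reuses the same instance.
let initCount = 0

function initPayloadLike(): Promise<{ ready: boolean }> {
  initCount++ // expensive: connect pool, build collections, register hooks
  return Promise.resolve({ ready: true })
}

let cached: Promise<{ ready: boolean }> | null = null

function getInstance(): Promise<{ ready: boolean }> {
  if (!cached) cached = initPayloadLike() // first call initializes
  return cached // every later call shares the same promise
}
```

On serverless, every cold start is a fresh process: `cached` is empty again, and the full startup cost repeats per cold start.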
The problem compounds with Payload's ORM layer. Payload 3.x uses Drizzle under the hood with a connection pool sized for persistent server behavior. On serverless, you're creating and destroying database connections at a rate the pool wasn't designed for. With enough traffic, you'll hit connection limits on your Postgres instance before you hit any meaningful compute ceiling.
There's also a practical ceiling on Vercel's serverless functions: execution timeout. Payload's admin operations — bulk imports, collection migrations, media processing — regularly exceed what serverless timeouts allow. Your admin panel becomes unreliable in ways that are hard to debug until you understand the root cause.
The Edge Runtime Problem
Next.js 16 introduces Proxy (formerly middleware) running on the Edge Runtime by default. The Edge Runtime is a restricted environment — it supports only a subset of Node.js APIs. Payload's server code requires the full Node.js runtime.
This means you cannot run any Payload logic inside a Proxy handler. If you're using Payload to gate content behind authentication, you need to be careful about where that authentication check lives. A common mistake is trying to query Payload's local API inside a Proxy handler — it will fail because the Edge Runtime doesn't support the Node.js APIs Payload depends on.
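A workable split, sketched below with the middleware.ts convention (which also sidesteps the proxy.ts execution bug noted below): keep the Edge handler to a cheap cookie-presence check, and do the real session validation in Node-runtime code such as a server component or route handler. Treat the details as assumptions for your setup — `payload-token` is Payload's default auth cookie name, and the `/account` paths are placeholders:

```typescript
// File: middleware.ts (Edge Runtime — no Payload imports allowed here)
// Sketch only: check that an auth cookie exists and redirect if not.
// Actual validation of the session happens later, in the Node runtime.
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  const token = request.cookies.get('payload-token') // Payload's default cookie name
  if (!token) {
    return NextResponse.redirect(new URL('/login', request.url))
  }
  return NextResponse.next()
}

export const config = { matcher: ['/account/:path*'] }
```

A missing cookie gets redirected cheaply at the edge; a present-but-invalid cookie is caught by the real Payload auth check in the Node runtime when the page renders.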
There's also an open bug (GitHub #86122) where proxy.ts doesn't execute at all in certain standalone + reverse proxy configurations behind Cloudflare or Traefik. The workaround is staying on middleware.ts until it's resolved. Worth knowing before you architect your auth flow around the new Proxy convention.
What "All Features Supported" Actually Means
The Next.js deployment docs say Docker containers support "All" Next.js features. That's accurate for Next.js features. It says nothing about Payload.
Payload's file storage and media uploads require writable disk or an S3-compatible adapter. Vercel's serverless functions run in read-only filesystems. If you're using Payload's local disk storage — even in development — you need to migrate to an S3 adapter before deploying to Vercel. The S3 adapter itself is straightforward, but it's a forced dependency that self-hosting doesn't impose on you.
Payload's built-in email functionality relies on being able to configure a transport at initialization. On serverless, the initialization lifecycle is unpredictable. On a persistent Node server, you configure it once and it works.
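Concretely, on a persistent server you wire the transport once in buildConfig. A sketch using Payload's Nodemailer adapter — check the package name and options against Payload 3's email docs for your version; the SMTP values are placeholders:

```typescript
// File: payload.config.ts (excerpt)
// Sketch: configure the email transport once at startup.
import { nodemailerAdapter } from '@payloadcms/email-nodemailer'
import { buildConfig } from 'payload'

export default buildConfig({
  email: nodemailerAdapter({
    defaultFromAddress: 'noreply@yourdomain.com',
    defaultFromName: 'Your App',
    transportOptions: {
      host: process.env.SMTP_HOST,
      port: 587,
      auth: {
        user: process.env.SMTP_USER,
        pass: process.env.SMTP_PASS,
      },
    },
  }),
  // ... rest of your config
})
```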
The Self-Hosted Reference Architecture
The setup that works reliably for Payload + Next.js 16 is output: "standalone" behind a reverse proxy, with Postgres on a managed instance and optional Redis for shared caching.
Next.js Configuration
```typescript
// File: next.config.ts
import { withPayload } from '@payloadcms/next/withPayload'

const nextConfig = {
  output: 'standalone',
}

export default withPayload(nextConfig)
```
The standalone output traces your dependencies and produces a minimal bundle in .next/standalone. For a Payload application this matters more than for a typical Next.js app — Payload pulls in a significant dependency tree, and standalone mode strips it down to what actually gets used. Fly.io reports ~400MB image size reduction with standalone output; for Payload apps the savings are in a similar range.
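You can verify the standalone output locally before containerizing. Note that public/ and .next/static are deliberately not copied into the bundle — which is why the multi-stage Dockerfile below copies them explicitly:

```shell
# After `next build` with output: 'standalone'
cp -r public .next/standalone/public
cp -r .next/static .next/standalone/.next/static
PORT=3000 HOSTNAME=0.0.0.0 node .next/standalone/server.js
```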
Multi-Stage Dockerfile
```dockerfile
# File: Dockerfile
FROM node:20-alpine AS base

FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# Sharp for image optimization
RUN npm install sharp
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
```
A few things worth explaining here. The libc6-compat package on Alpine is needed for certain native dependencies Payload may pull in. Sharp gets installed in the runner stage specifically — a common mistake is installing it in the builder stage only and discovering it missing at runtime with the error 'sharp' is required to be installed in standalone mode for the image optimization to function correctly. The runner stage runs as a non-root user, which matters when you have a reverse proxy handling TLS in front.
Nginx Configuration
```nginx
# File: /etc/nginx/sites-available/your-app
# Note: the `api` rate-limit zone referenced below must be defined once
# in the http {} context (e.g. in nginx.conf):
#   limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # Static assets - long cache
    location /_next/static/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache_valid 200 1y;
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Payload admin - no caching
    location /admin {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
    }

    # Payload API
    location /api {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        limit_req zone=api burst=20 nodelay;
        limit_req_status 429;
    }

    # Everything else
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
The important detail is the separate location blocks. Your Payload admin at /admin gets a longer proxy_read_timeout (120 seconds) because admin operations — bulk actions, migrations, media uploads — can run long. Rate limiting is applied at the /api location rather than globally. Note that if you're using Server Actions, they POST to / in the App Router — your rate limiting config needs to account for that separately if you're protecting those endpoints.
Static assets at /_next/static/ get immutable cache headers because Next.js fingerprints these files. They never change at the same URL, so a one-year cache is safe and meaningfully reduces origin load.
Payload Environment Configuration
```typescript
// File: payload.config.ts
import { postgresAdapter } from '@payloadcms/db-postgres'
import { s3Storage } from '@payloadcms/storage-s3'
import { buildConfig } from 'payload'

export default buildConfig({
  db: postgresAdapter({
    pool: {
      connectionString: process.env.DATABASE_URL,
      // Size your pool for persistent server behavior
      max: 10,
      idleTimeoutMillis: 30000,
      connectionTimeoutMillis: 5000,
    },
  }),
  plugins: [
    s3Storage({
      collections: {
        media: true,
      },
      bucket: process.env.S3_BUCKET,
      config: {
        credentials: {
          accessKeyId: process.env.S3_ACCESS_KEY_ID,
          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
        },
        region: process.env.S3_REGION,
        endpoint: process.env.S3_ENDPOINT, // for R2 or other S3-compatible
      },
    }),
  ],
  // ... rest of your config
})
```
The connection pool settings matter on a persistent server. With max: 10 and a proper idleTimeoutMillis, your pool stays healthy under sustained load and cleans up idle connections before they hit Postgres limits. On serverless this configuration would be irrelevant — you'd have no control over how many function instances were running simultaneously. On a persistent server, you're sizing it once and it stays sized correctly.
S3 storage is listed here not because self-hosting requires it — it doesn't, you can use local disk — but because it's the right call for production regardless of hosting model. Local disk storage breaks the moment you scale to more than one instance or need to persist uploads through a container restart.
Database Considerations
Payload is a database-heavy application. The admin panel issues queries constantly — collection list views, relationship fields, live preview. Your Postgres instance placement matters.
On Vercel, the recommended pairing is Vercel Postgres (Neon under the hood) or another serverless Postgres provider. These are optimized for the connection-per-request pattern of serverless functions. On a persistent VPS, you want a traditional Postgres instance — either on the same server, on a managed instance in the same datacenter (Hetzner, DigitalOcean managed databases), or a provider like Neon configured for pooling.
The latency difference between a Postgres instance in the same datacenter and one across regions shows up immediately in Payload's admin panel. If your server is in Hetzner Frankfurt and your database is in AWS us-east-1, you'll feel it on every admin page load.
When Vercel Does Make Sense
There's one scenario where Vercel is a reasonable choice even with Payload: when you're separating Payload from the Next.js frontend entirely and running Payload as a standalone Node server.
In this architecture, Payload runs as its own Node process — either self-hosted or on a platform like Railway or Render — and your Next.js frontend on Vercel fetches from Payload's REST or GraphQL API at build time and runtime. The frontend is genuinely stateless from Payload's perspective. This is the "headless Payload" pattern and it sidesteps all the serverless incompatibilities.
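The frontend side of that pattern can be sketched like this — the base URL, the `posts` collection, and its `status` field are assumptions for illustration, but the `where` query string follows Payload's REST syntax:

```typescript
// Sketch of the headless pattern: a Vercel-hosted frontend fetching
// published posts from a separately deployed Payload instance over REST.
const PAYLOAD_URL = 'https://cms.example.com' // assumption: your Payload host

function postsUrl(limit: number): string {
  const params = new URLSearchParams({
    limit: String(limit),
    'where[status][equals]': 'published', // Payload's REST where syntax
  })
  return `${PAYLOAD_URL}/api/posts?${params.toString()}`
}

async function getPosts(limit = 10): Promise<unknown[]> {
  const res = await fetch(postsUrl(limit))
  if (!res.ok) throw new Error(`Payload API responded ${res.status}`)
  const body = (await res.json()) as { docs: unknown[] }
  return body.docs
}
```

At build time this runs in your static generation code; at runtime, in route handlers or server components — either way, the frontend stays stateless from Payload's perspective.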
The tradeoff is operational complexity: you're now running two separate deployments and managing the API relationship between them. For content-heavy sites with a clear editorial workflow, this can make sense. For most applications, it adds overhead without a compelling benefit.
The Cost Picture
A Hetzner CX32 (4 vCPUs, 8 GB RAM) costs around €6.80/month. That runs a Payload + Next.js 16 production application comfortably for most traffic levels, with Nginx in front and Postgres either on the same machine or a small managed instance.
Vercel Pro starts at $20/month, then adds usage-based charges for Edge Requests, bandwidth, and image optimization. Next.js 16's more aggressive prefetching spikes Edge Request counts — Vercel's own staff confirmed this in a support thread, framing the increase as expected behavior. If you're running Payload's admin panel, your editors are generating real requests every time they navigate a collection. That adds up.
The honest comparison isn't $7 vs $20. It's $7 in infrastructure plus a few hours of initial setup versus $20+ in infrastructure plus the ongoing constraints of working around serverless limitations for an application that was designed to run on a persistent server.
Wrapping Up
Payload CMS was built to run as a persistent Node.js server. Its initialization lifecycle, database connection pooling, file storage behavior, and admin operations all reflect that design. Serverless hosting models work against those assumptions in ways that create real problems at production scale — cold start penalties, connection pool exhaustion, timeout failures in the admin panel, and Edge Runtime incompatibility for anything touching Payload's local API.
The self-hosted setup — standalone output, multi-stage Docker build, Nginx reverse proxy, and S3-compatible media storage — is not complicated to get right and it matches how Payload was designed to run. The main things to get right are Sharp in your runner stage, the extended proxy timeout for admin routes, and S3 storage for media.
Let me know in the comments if you have questions about any part of the setup, and subscribe if you want more practical Payload CMS implementation guides.
Thanks, Matija


