Payload Jobs Queue on Vercel: Complete Production Setup
How to enqueue jobs from hooks, run them via Vercel Cron, secure /api/payload-jobs/run, and harden tasks with retries

If you are doing long-running work inside Payload hooks, you are making every request slower and less reliable. That is true even if you try the “non-blocking hook” pattern, because fire-and-forget is not durability. It is simply “Payload does not wait”, and on serverless it is especially easy for that work to be interrupted.
Payload v3.70+ includes a first-party Jobs Queue with Tasks, Jobs, Queues, and Workflows, plus multiple execution methods including a built-in /api/payload-jobs/run endpoint. This article shows the production-grade pattern for Vercel: enqueue jobs from hooks, execute them via Vercel Cron hitting /api/payload-jobs/run, secure the endpoint, and harden tasks with retries, concurrency, and observability.
Everything below is based on the current Jobs Queue and Hooks documentation, plus the concurrency updates that shipped in v3.76.
The mental model: hooks enqueue, workers execute
Hooks are part of your request lifecycle. They should stay fast and predictable.
Jobs are your durable background work. They live in your database (in the payload-jobs collection), can retry, can be scheduled, and have status and logs you can inspect.
So the “correct” architecture is:
- A request comes in
- An afterChange hook runs
- The hook queues a job (a fast DB insert)
- A runner executes jobs later (cron-triggered on serverless)
On Vercel, the runner is your cron calling /api/payload-jobs/run.
Why “non-blocking hooks” are not enough
Payload supports non-blocking hooks in the sense that if a hook does not return a Promise, Payload will not await it.
That does not give you:
- Durability (work can be lost if the process ends)
- Retries
- Backpressure and concurrency control
- Visibility into failures
- A unified place to inspect “what happened”
If you care about reliability, you want a record of the work to exist even if your server restarts. That is exactly what the Jobs Queue gives you.
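To make the contrast concrete, here is a minimal sketch of the two hook bodies. `syncToSearchIndex` is a hypothetical helper standing in for any expensive work, and the durable version assumes a task registered under the same slug:

```typescript
// Fragile: fire-and-forget inside an afterChange hook.
// Payload does not await the promise, but on serverless the function
// can be frozen or torn down before it settles, losing the work.
const fragileHook = async ({ doc }: { doc: { id: string } }) => {
  void syncToSearchIndex(doc) // hypothetical helper; may silently never run
}

// Durable: insert a job record instead. The only awaited work is a
// fast database write; a runner executes the job later, with retries.
const durableHook = async ({ doc, req }: { doc: { id: string }; req: any }) => {
  await req.payload.jobs.queue({
    task: 'syncToSearchIndex',
    input: { docId: doc.id },
  })
}
```

The second version costs one DB insert on the request path; everything else moves to the queue.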
The moving pieces in Payload Jobs Queue
Payload’s Jobs Queue is made up of:
- Tasks: definitions of background work (slug, handler, retries, schedule, concurrency)
- Jobs: individual queued instances of a task or workflow, stored in payload-jobs
- Queues: named lanes for jobs (the default is default)
- Workflows: multi-step sequences of tasks (optional)
For most apps, you will start with Tasks plus Jobs plus a couple of Queues.
The Vercel production pattern
On Vercel you generally do not have a long-running process, so you do not use autoRun. Instead:
- Enqueue jobs from hooks or endpoints using req.payload.jobs.queue(...)
- Add a Vercel Cron that calls /api/payload-jobs/run
- Secure /api/payload-jobs/run using CRON_SECRET in jobs.access.run
Step 1: define a task
Example: send a welcome email after a user is created.
Create a task definition (structure may vary slightly depending on how you organize config, but the core idea is consistent):
```typescript
// src/tasks/sendWelcomeEmail.ts
import type { TaskConfig } from 'payload'

export const sendWelcomeEmail: TaskConfig = {
  slug: 'sendWelcomeEmail',
  retries: 3,
  handler: async ({ input, req }) => {
    const { userId } = input as { userId: string }

    const user = await req.payload.findByID({
      collection: 'users',
      id: userId,
    })

    // Call your email provider here.
    // Keep this handler idempotent if possible.
    // Example: only send if user.welcomeEmailSentAt is not set.

    await req.payload.update({
      collection: 'users',
      id: userId,
      data: { welcomeEmailSentAt: new Date().toISOString() },
    })

    // Task handlers return their result under an `output` key
    return { output: { ok: true } }
  },
}
```
Notes:
- Keep tasks idempotent. Retries mean the handler can run more than once.
- Prefer writing a “sentAt” marker or using an idempotency key with your email provider.
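Applied to the task above, the guard goes inside the handler, before the side effect. A sketch, where `sendEmail` stands in for your provider's SDK call:

```typescript
// Inside the sendWelcomeEmail handler: skip work a previous attempt already did.
handler: async ({ input, req }) => {
  const { userId } = input as { userId: string }

  const user = await req.payload.findByID({ collection: 'users', id: userId })

  // Idempotency guard: a retried job whose first attempt already sent
  // the email (and wrote the marker) becomes a cheap no-op.
  if (user.welcomeEmailSentAt) {
    return { output: { ok: true, skipped: true } }
  }

  await sendEmail({ to: user.email, template: 'welcome' }) // hypothetical provider call

  await req.payload.update({
    collection: 'users',
    id: userId,
    data: { welcomeEmailSentAt: new Date().toISOString() },
  })

  return { output: { ok: true } }
},
```

Note the ordering: check the marker, do the side effect, then write the marker. There is still a small window between send and write where a crash causes a duplicate, which is why provider-level idempotency keys are the stronger guarantee.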
Step 2: enqueue the task from an afterChange hook
In your users collection:
```typescript
// src/collections/Users.ts
import type { CollectionConfig } from 'payload'

export const Users: CollectionConfig = {
  slug: 'users',
  hooks: {
    afterChange: [
      async ({ doc, operation, req }) => {
        if (operation !== 'create') return

        // Queue the job and wait only for the DB insert.
        // This keeps the request fast but the work durable.
        await req.payload.jobs.queue({
          task: 'sendWelcomeEmail',
          input: { userId: doc.id },
          queue: 'emails',
          req,
        })
      },
    ],
  },
  fields: [
    // ...
  ],
}
```
This is the sweet spot:
- The request waits only long enough to insert a job record
- The expensive work happens later
- You get retries, logs, and status
Step 3: add a Vercel Cron to run jobs
Create vercel.json:
```json
{
  "crons": [
    { "path": "/api/payload-jobs/run?queue=emails&limit=25", "schedule": "*/1 * * * *" },
    { "path": "/api/payload-jobs/run?queue=default&limit=50", "schedule": "*/5 * * * *" }
  ]
}
```
This runs the emails queue every minute (small batch) and the default queue every 5 minutes (larger batch).
You can tune:
- limit to control runtime per invocation
- schedule frequency to control latency and cost
- separate queues to isolate workloads
Step 4: secure /api/payload-jobs/run
Set a CRON_SECRET environment variable in Vercel. Then lock down job running in your Payload config:
```typescript
// payload.config.ts
import { buildConfig } from 'payload'

import { sendWelcomeEmail } from './src/tasks/sendWelcomeEmail'

export default buildConfig({
  // ...other config
  jobs: {
    // Register your tasks so the runner knows how to execute them
    tasks: [sendWelcomeEmail],
    access: {
      run: ({ req }) => {
        // Allow authenticated admins to manually run jobs if you want
        if (req.user) return true

        const secret = process.env.CRON_SECRET
        if (!secret) return false

        const authHeader = req.headers.get('authorization')
        return authHeader === `Bearer ${secret}`
      },
    },
  },
})
```
This gives you:
- Cron can run jobs
- Random internet traffic cannot
- Admins can optionally run jobs manually when logged in (if you keep that clause)
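You can verify the lockdown by calling the endpoint exactly the way the cron does. A sketch (assumes an `APP_URL` variable for your deployment, and CRON_SECRET exported in the shell you run it from):

```typescript
// scripts/run-jobs.ts — trigger the runner manually, the same way Vercel Cron does.
const baseUrl = process.env.APP_URL ?? 'http://localhost:3000' // assumption: where your app runs

const res = await fetch(`${baseUrl}/api/payload-jobs/run?queue=emails&limit=10`, {
  headers: { Authorization: `Bearer ${process.env.CRON_SECRET ?? ''}` },
})

// Expect a 2xx with the correct secret, and a 401/403 without it.
console.log(res.status, await res.json())
```

Running it once with the secret and once without is a quick smoke test that jobs.access.run behaves as intended.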
Hardening: retries, idempotency, concurrency, supersedes, waitUntil
Retries and idempotency
Retries are great, but only if the task can safely re-run.
Practical idempotency strategies:
- Write a “completed marker” to your document (welcomeEmailSentAt, indexedAt, etc.)
- Include an idempotency key in job input and enforce uniqueness in your domain logic
- Use provider-level idempotency keys where available
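One way to implement the idempotency-key strategy is to derive a deterministic key from the job's input, so the same logical action always produces the same key. A minimal sketch using Node's built-in crypto (`idempotencyKey` is a name introduced here, not a Payload API):

```typescript
import { createHash } from 'node:crypto'

// Deterministic idempotency key: identical (action, docId) pairs always
// hash to the same value, so duplicates and retries can be recognized
// by your domain logic or your provider.
export const idempotencyKey = (action: string, docId: string): string =>
  createHash('sha256').update(`${action}:${docId}`).digest('hex').slice(0, 32)
```

Pass the key in the job input and forward it to your email provider if it supports idempotency keys; a retried job then reuses the same key and the provider deduplicates the send.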
Concurrency control
If multiple jobs target the same resource (for example, re-index a post after each edit), you want to prevent parallel work.
Use a concurrency key that groups jobs by a stable identifier like collection:docId.
Conceptual example:
```typescript
import type { TaskConfig } from 'payload'

export const reindexPost: TaskConfig = {
  slug: 'reindexPost',
  retries: 5,
  // Conceptual: concurrency is configured so that jobs sharing the same
  // key do not run in parallel (check your Payload version for the exact API)
  concurrency: ({ input }) => `posts:${(input as any).postId}`,
  handler: async ({ input, req }) => {
    const { postId } = input as { postId: string }
    // do the indexing work here
    return { output: { postId } }
  },
}
```
Supersedes: “last queued wins”
In v3.76, Payload adds a “supersedes” option for concurrency control. The intent is: if a new job arrives with the same concurrency key, older pending jobs can be removed so only the latest runs.
This is perfect for:
- search indexing
- image reprocessing
- cache rebuilding per document
Use it when doing every intermediate job is wasted work.
Delayed execution with waitUntil
If you need “run this later” without inventing your own scheduler, queue a job with waitUntil.
Example use cases:
- “Send follow-up email 2 days after signup”
- “Recheck payment status in 15 minutes”
- “Run cleanup tonight”
Conceptual enqueue:
```typescript
await req.payload.jobs.queue({
  task: 'sendFollowUp',
  input: { userId: doc.id },
  // 2 days from now
  waitUntil: new Date(Date.now() + 2 * 24 * 60 * 60 * 1000),
  queue: 'emails',
  req,
})
```
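A couple of tiny pure helpers keep those millisecond arithmetic expressions readable. Nothing Payload-specific here; `inMinutes` and `inDays` are names introduced for this sketch:

```typescript
// Convert human-readable offsets into absolute waitUntil timestamps.
export const inMinutes = (minutes: number, from: Date = new Date()): Date =>
  new Date(from.getTime() + minutes * 60 * 1000)

export const inDays = (days: number, from: Date = new Date()): Date =>
  new Date(from.getTime() + days * 24 * 60 * 60 * 1000)
```

With these, `waitUntil: inDays(2)` reads exactly like the requirement “send 2 days after signup”.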
Scheduling recurring work
For “nightly sync” or “every hour cleanup”, use task scheduling (cron expressions) and ensure you have a runner invoking job execution.
On Vercel, the runner is still your Vercel cron calling /api/payload-jobs/run. Scheduled tasks create jobs, but something still needs to execute them.
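As a sketch of that setup (the exact shape of the schedule option may vary by Payload version, so verify it against the Jobs Queue docs before relying on it):

```typescript
// src/tasks/nightlyCleanup.ts — a task that also declares a recurring schedule.
import type { TaskConfig } from 'payload'

export const nightlyCleanup: TaskConfig = {
  slug: 'nightlyCleanup',
  // Assumed shape: a cron expression plus a target queue. Scheduling only
  // *creates* jobs on that cadence; your Vercel cron hitting
  // /api/payload-jobs/run still has to execute them.
  schedule: [{ cron: '0 3 * * *', queue: 'nightly' }],
  handler: async ({ req }) => {
    // delete stale sessions, expired tokens, etc.
    return { output: {} }
  },
}
```

If you add a queue like `nightly` here, remember to add a matching cron entry in vercel.json that runs it, or the scheduled jobs will accumulate unexecuted.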
Observability: your dashboard is payload-jobs
Because jobs are stored in your database, you can monitor them with normal queries.
Useful fields to pay attention to:
- hasError
- totalTried
- processing
- completedAt
- log
Simple operational patterns:
- Count pending jobs by queue to detect backlog
- Alert on jobs with hasError = true
- Alert on jobs stuck in processing for too long
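Those checks map directly onto Local API queries against payload-jobs. A sketch, meant to run from a script or admin endpoint where a `payload` instance is in scope:

```typescript
// Backlog: queued jobs in the emails queue that have not completed or failed.
const backlog = await payload.count({
  collection: 'payload-jobs',
  where: {
    queue: { equals: 'emails' },
    completedAt: { exists: false },
    hasError: { not_equals: true },
  },
})

// Failures: jobs that have exhausted retries, for alerting.
const failed = await payload.find({
  collection: 'payload-jobs',
  where: { hasError: { equals: true } },
  limit: 20,
})

// Thresholds are illustrative; tune them to your traffic.
if (backlog.totalDocs > 100) {
  console.warn(`emails backlog: ${backlog.totalDocs} pending, ${failed.totalDocs} failed`)
}
```

Wire this into an existing cron or a health endpoint and you have basic queue monitoring without any extra infrastructure.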
Add structured log messages when queuing and inside tasks so you can trace a job back to:
- the document ID
- the user action that triggered it
- a correlation ID from headers (optional)
Common failure modes and quick fixes
Jobs are not running at all
- Your Vercel cron is not configured, is disabled, or points at the wrong path
- The endpoint is blocked by access control because CRON_SECRET is missing or mismatched
You queued jobs but nothing processes them
- Queue name mismatch: you are enqueueing to emails but your cron is running default
- Your cron calls /api/payload-jobs/run without the correct query params
You used autoRun on Vercel
- That is for dedicated servers, not serverless. Use the endpoint method.
Jobs are “pending” but should be delayed
- Check waitUntil. Jobs scheduled into the future will not run until that time.