Payload CMS Jobs Queue Explained: Tasks, Jobs & Queues
Understand Tasks, Workflows, Jobs, Queues, scheduling, and worker strategies to build reliable background processing in Payload CMS.

If you have spent any time with Payload CMS and needed to offload work from your main request cycle, you have probably come across the Jobs Queue system. On paper, it sounds straightforward: define some tasks, queue some jobs, let them run in the background.
In practice, the relationship between Tasks, Workflows, Jobs, and Queues is not immediately obvious. The official documentation covers each concept individually, but the mental model for how they connect — and more importantly, when to use which — takes some piecing together.
I went through the docs end to end and built out that mental model. This article is the result. Not a step-by-step tutorial, but a clear explanation of what each piece does, how they relate to each other, and where they fit in a real Payload application.
The four concepts at a glance
Payload's Jobs Queue system has four core concepts, and each one operates at a different layer:
Tasks and Workflows define work. They describe what should happen — the logic, the input, the output, the retry behavior. A Task is a single operation. A Workflow is an ordered sequence of tasks that can recover from failure mid-sequence.
Jobs are instances of work. When you actually want a Task or Workflow to execute, you create a Job. That Job gets stored in the payload-jobs collection with its input data, status, and associated outputs.
Queues control how work is organized and executed. Every Job belongs to a queue. Workers pick up jobs from specific queues and run them on a schedule you define.
The lifecycle flows like this: you define Tasks and Workflows in your Payload config, create Jobs by calling payload.jobs.queue(), those Jobs land in the payload-jobs collection grouped by queue name, and then a worker process picks them up and executes the associated handler.
flowchart LR
  A[Define Tasks and Workflows] --> B[Create Jobs]
  B --> C[Jobs stored in payload-jobs collection]
  C --> D[Jobs grouped by Queue]
  D --> E[Worker runs jobs per queue]
  E --> F[Task or Workflow handlers execute]
  F -->|Success or Failure| C
That is the full picture. Now let's look at each piece in detail.
Tasks: the smallest unit of work
A Task is a function definition with typed input and output. You register Tasks in your Payload config under jobs.tasks, and each one gets a unique slug that identifies it across the system.
The handler contains the actual logic and must return { output: ... }. You can define inputSchema and outputSchema for validation and type generation. You can configure retry behavior, success and failure hooks, and concurrency controls.
Here is a simple Task definition:
// File: payload.config.ts
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  jobs: {
    tasks: [
      {
        slug: 'sendWelcomeEmail',
        retries: 3,
        inputSchema: [
          { name: 'userEmail', type: 'email', required: true },
          { name: 'userName', type: 'text', required: true },
        ],
        handler: async ({ input, req }) => {
          await req.payload.sendEmail({
            to: input.userEmail,
            subject: 'Welcome!',
            text: `Hi ${input.userName}, welcome!`,
          })
          return { output: { emailSent: true } }
        },
      },
    ],
  },
})
This Task does one thing: send a welcome email. It has typed input, retry logic (up to 3 attempts), and a handler that returns a typed output. At this point, nothing is running — the Task is just a definition. It becomes operational only when a Job references it.
The rule of thumb for Tasks versus Workflows is simple. If you need a single operation, use a Task. If you have multiple steps where later steps depend on earlier ones and you want each step to be independently retryable, that is where Workflows come in.
Workflows: multi-step sequences with failure recovery
A Workflow combines multiple Tasks into an ordered sequence. The key feature is durability: if a task within a workflow fails, the entire workflow does not restart from scratch. It picks back up at the point of failure, and all previously completed tasks return their cached outputs without re-executing.
This matters in real scenarios. Consider user onboarding that involves creating a profile, sending a welcome email, and adding the user to a marketing list. Without a workflow, if the email service fails, you either re-run all three steps (wasteful and potentially unsafe) or manually track which steps succeeded. With a workflow, only the failed step retries.
// File: payload.config.ts
export default buildConfig({
  jobs: {
    tasks: [
      // createProfile, sendWelcomeEmail, and addToMarketingList
      // defined as individual tasks
    ],
    workflows: [
      {
        slug: 'onboardUser',
        inputSchema: [
          { name: 'userId', type: 'text', required: true },
        ],
        handler: async ({ job, tasks }) => {
          await tasks.createProfile('step-create-profile', {
            input: { userId: job.input.userId },
          })
          await tasks.sendWelcomeEmail('step-send-email', {
            input: { userId: job.input.userId },
          })
          await tasks.addToMarketingList('step-add-to-list', {
            input: { userId: job.input.userId },
          })
        },
      },
    ],
  },
})
The string arguments passed to each task call ('step-create-profile', 'step-send-email') are stable task IDs. They identify each invocation so Payload can cache and restore outputs correctly across retries. If the email step fails and the workflow retries, createProfile immediately returns its cached output because its task ID matches a previously completed invocation. Only sendWelcomeEmail actually re-executes.
Workflows also support inline tasks through the inlineTask function, which lets you define one-off logic directly inside the workflow handler without registering a separate task in the config. This is useful for simple operations that do not need to be reused elsewhere.
// File: payload.config.ts — inside a workflow handler
handler: async ({ job, tasks, inlineTask }) => {
  const post = await tasks.createPost('create-post', {
    input: { title: job.input.title },
  })
  await inlineTask('update-post', {
    task: async ({ req }) => {
      const updatedPost = await req.payload.update({
        collection: 'post',
        id: post.id,
        data: { title: 'Updated title' },
      })
      return { output: { updatedPost } }
    },
  })
}
Workflows can also define concurrency controls. If multiple jobs might operate on the same resource (say, syncing the same document), you can set a concurrency key that ensures only one job with that key runs at a time. This requires enabling jobs.enableConcurrencyControl: true in your Payload config.
// File: payload.config.ts — workflow with concurrency
{
  slug: 'syncDocument',
  concurrency: ({ input }) => `sync:${input.documentId}`,
  handler: async ({ job, tasks }) => {
    // Only one sync per document runs at a time
  },
}
Jobs: where definitions become real
Tasks and Workflows are blueprints. Jobs bring them to life.
When you call payload.jobs.queue(), you create a Job — an instance of a Task or Workflow stored in the payload-jobs collection. Each Job carries its own ID, input data, status, queue assignment, and accumulated outputs.
Queuing a job for a single task:
const createdJob = await payload.jobs.queue({
  task: 'sendWelcomeEmail',
  input: {
    userEmail: 'user@example.com',
    userName: 'Alex',
  },
})
Queuing a job for a workflow:
const createdJob = await payload.jobs.queue({
  workflow: 'onboardUser',
  input: { userId: '123' },
})
You can also specify which queue the job should land in and whether it should wait until a specific time before becoming eligible for execution (more on both of these shortly).
In practice, jobs are created from various places in your application. Collection hooks are one of the most common trigger points — for example, queuing a notification job whenever a post is published:
// File: collections/Posts.ts
{
  slug: 'posts',
  hooks: {
    afterChange: [
      async ({ req, doc, operation }) => {
        if (operation === 'update' && doc.status === 'published') {
          await req.payload.jobs.queue({
            task: 'notifySubscribers',
            input: { postId: doc.id },
          })
        }
      },
    ],
  },
}
You can also queue jobs from custom endpoints, server actions, or anywhere you have access to the Payload instance. The pattern is always the same: reference a task or workflow by slug, provide the input, and optionally assign a queue.
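To make the custom-endpoint pattern concrete, here is a minimal sketch of a root-level endpoint that queues the sendWelcomeEmail task from earlier. The /queue-welcome path and the response shape are illustrative assumptions, not part of the original article:

```typescript
// File: payload.config.ts — a custom endpoint that queues a job
// NOTE: the '/queue-welcome' path and response shape are assumptions for illustration
import { buildConfig } from 'payload'

export default buildConfig({
  // ...
  endpoints: [
    {
      path: '/queue-welcome',
      method: 'post',
      handler: async (req) => {
        // Read the request body (assumes a JSON payload)
        const { userEmail, userName } = await req.json()
        // Same pattern as in the hook: reference the task by slug, provide input
        const createdJob = await req.payload.jobs.queue({
          task: 'sendWelcomeEmail',
          input: { userEmail, userName },
        })
        return Response.json({ jobID: createdJob.id })
      },
    },
  ],
})
```

The key point is that req.payload is available inside endpoint handlers, so queuing looks identical to the collection-hook example above.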
One thing to understand clearly: creating a Job does not execute it. The Job sits in the payload-jobs collection, waiting for a worker to pick it up. This separation between queuing and execution is fundamental to how the system works.
Queues: organizing and running jobs
A Queue is a named group of jobs that are executed in the order they were added. By default, every job goes into the "default" queue. You can create as many queues as you need by simply specifying a queue name when you queue a job.
await payload.jobs.queue({
  task: 'sendPasswordReset',
  input: { userId: '123' },
  queue: 'critical',
})
That job now belongs to the critical queue instead of default. The name is arbitrary — you define it by using it.
The reason to use multiple queues is execution strategy. Different types of work have different urgency and resource requirements. You might want critical jobs processed every minute while batch processing runs once a day. Separating them into different queues lets you configure independent execution schedules.
Running jobs with workers
Queued jobs do not run themselves. You need a worker process that queries for jobs in a given queue and executes them. Payload offers four execution methods.
Bin script is the recommended approach for dedicated servers. You run it from the command line with options for queue selection, limits, and cron scheduling:
pnpm payload jobs:run --queue default --cron "*/5 * * * *"
autoRun is a configuration-based alternative that also works on dedicated servers. You define it in your Payload config, and it starts cron-based runners when Payload boots:
// File: payload.config.ts
jobs: {
  autoRun: [
    { cron: '* * * * *', queue: 'critical', limit: 100 },
    { cron: '*/5 * * * *', queue: 'default', limit: 50 },
    { cron: '0 2 * * *', queue: 'batch', limit: 1000 },
  ],
}
Endpoint is the approach for serverless platforms like Vercel or Netlify, where long-running processes are not an option. You expose an endpoint (Payload provides /api/payload-jobs/run by default) and trigger it from an external cron service.
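For Vercel specifically, the external trigger can be a Vercel Cron entry that calls the jobs endpoint on a schedule. A sketch of a vercel.json, assuming the default endpoint path (verify the exact route for your Payload version, and note the endpoint is access-controlled by default, so the caller must be authorized):

```json
{
  "crons": [
    {
      "path": "/api/payload-jobs/run",
      "schedule": "*/5 * * * *"
    }
  ]
}
```

The equivalent works with GitHub Actions or any scheduler that can send an HTTP request.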
Local API gives you programmatic control through payload.jobs.run(), useful for testing or custom orchestration.
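A minimal sketch of the Local API approach — the queue and limit options mirror the bin script flags:

```typescript
// Anywhere you have an initialized Payload instance
// (a script, a test, a server action)
await payload.jobs.run({
  queue: 'default', // which queue to process
  limit: 10, // max jobs to pick up in this pass
})
```

This is handy in integration tests: queue a job, call payload.jobs.run(), then assert on the side effects.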
Queue strategies
How you organize queues depends on what your application needs. A few common patterns:
Priority-based queues separate urgent work from background processing. Password resets go to critical and run every minute. General notifications go to default and run every five minutes. Report generation goes to batch and runs nightly.
Feature-based queues isolate different domains. Emails, image processing, and analytics each get their own queue with independent intervals and limits. This makes monitoring and scaling easier because you can see exactly where backlogs form.
Environment-based execution uses the shouldAutoRun option to control which servers actually process jobs. In a multi-server setup, you might only want one specific instance running job workers:
// File: payload.config.ts
jobs: {
  autoRun: [
    {
      cron: '*/5 * * * *',
      queue: 'default',
      shouldAutoRun: () => process.env.ENABLE_JOB_WORKERS === 'true',
    },
  ],
}
Common queue issues
A few things that trip people up. If you configure autoRun but no jobs ever run, the likely cause is that no jobs were ever queued — autoRun only executes existing jobs, it does not create them. If jobs appear in the payload-jobs collection but never execute, check that the queue name in the task's schedule matches the queue name in your runner configuration. And if everything works locally but fails on Vercel or Netlify, remember that autoRun requires a long-running server. On serverless, use the endpoint approach with an external cron.
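When diagnosing these issues, it can help to inspect the payload-jobs collection directly to see where a backlog is forming. A hedged sketch — the internal field name completedAt is an assumption about the collection schema, so verify it against your generated types:

```typescript
// List jobs that have not completed yet, to spot a stuck queue
const pending = await payload.find({
  collection: 'payload-jobs',
  where: {
    completedAt: { exists: false }, // assumption: completed jobs set completedAt
  },
  limit: 20,
})
console.log(`${pending.totalDocs} pending job(s)`)
```

If jobs pile up here while your runner reports nothing to do, a queue-name mismatch is the usual culprit.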
Schedules and delayed execution
Beyond manually queuing jobs, Payload provides two timing mechanisms that serve different purposes.
Recurring schedules
The schedule property on a Task or Workflow automatically creates jobs on a cron schedule. This is for recurring work — daily reports, hourly syncs, weekly cleanup jobs.
// File: payload.config.ts
{
  slug: 'generateDailyReport',
  schedule: [
    {
      cron: '0 8 * * *', // Every day at 8:00 AM
      queue: 'reports',
    },
  ],
  handler: async ({ req }) => {
    // Report generation logic
    return { output: { reportId: '123' } }
  },
}
A critical detail: schedule only queues jobs. It does not run them. You still need a runner (either autoRun or a bin script) configured for the same queue. Both the schedule's queue value and the runner's queue value must match, otherwise jobs get created but never picked up.
Here is how schedule and autoRun work together:
// File: payload.config.ts
jobs: {
  tasks: [
    {
      slug: 'generateReport',
      schedule: [
        {
          cron: '0 0 * * *', // Queue a job every day at midnight
          queue: 'nightly',
        },
      ],
      handler: async () => {
        return { output: { reportId: '123' } }
      },
    },
  ],
  autoRun: [
    {
      cron: '* * * * *', // Check the nightly queue every minute
      queue: 'nightly',
    },
  ],
}
The schedule creates a job at midnight. The autoRun runner checks the nightly queue every minute and executes any pending jobs it finds.
If you are using the bin script approach, you can combine scheduling and running in a single command:
pnpm payload jobs:run --cron "*/5 * * * *" --queue nightly --handle-schedules
The --handle-schedules flag tells the bin script to also handle the scheduling logic (creating jobs from schedule definitions), not just running existing jobs.
One-time delayed execution with waitUntil
For jobs that should run once at a specific future time, use waitUntil when queuing:
await payload.jobs.queue({
  task: 'publishPost',
  input: { postId: '123' },
  waitUntil: new Date('2025-03-01T15:00:00Z'),
})
This creates a single job that becomes eligible for execution at the specified time. The worker will skip it until then.
The distinction is important: schedule is for recurring jobs created automatically. waitUntil is for a single future job queued manually. Use schedule for "every day at 8 AM" and waitUntil for "publish this post next Tuesday at 3 PM."
When to use which mechanism
Payload gives you four ways to trigger job creation, each for a different situation:
Schedule is for recurring automated work. Daily reports, weekly emails, hourly data syncs.
waitUntil is for one-time future events. Scheduled publishing, trial expiry emails, time-delayed notifications.
Collection hooks are for document-driven triggers. Send an email when a post is published, generate a PDF when an order is created.
Manual queuing is for user-initiated or API-driven work. A user clicks "Generate Report," or an external webhook triggers a processing job.
How it all connects
Now that each concept is clear individually, here is how they fit together as a system.
Tasks and Workflows sit at the definition layer. They describe what logic exists, what inputs and outputs it expects, and how it should behave on failure. At this layer, nothing is running — it is pure configuration.
Jobs sit at the instance layer. When you call payload.jobs.queue(), you bind a specific Task or Workflow to concrete input data and place it into a named queue. The Job is the unit that actually moves through the system.
Queues sit at the execution layer. They group jobs by name and determine when and how those jobs are processed. Workers (bin scripts, autoRun configs, endpoints) are scoped to specific queues with their own schedules and limits.
Schedules and waitUntil act as job creators — they put Jobs into Queues automatically based on time. But they never execute anything themselves. Execution always comes from a worker processing a queue.
This layered separation is what makes the system flexible. A Workflow does not know or care which queue its jobs end up in. A queue does not know whether it is processing simple tasks or complex workflows. And the execution strategy (how often, how many, on which server) is configured independently of the work definitions.
// File: payload.config.ts — a complete example showing all layers
export default buildConfig({
  jobs: {
    // Layer 1: Define the work
    tasks: [
      {
        slug: 'processPayment',
        retries: 5,
        handler: async ({ input }) => {
          // Payment logic
          return { output: { success: true } }
        },
      },
      {
        slug: 'generateReport',
        schedule: [{ cron: '0 2 * * *', queue: 'nightly' }],
        handler: async () => {
          // Report logic
          return { output: { reportId: '456' } }
        },
      },
    ],
    workflows: [
      {
        slug: 'onboardUser',
        queue: 'onboarding',
        handler: async ({ job, tasks }) => {
          await tasks.createProfile('create-profile', {
            input: { userId: job.input.userId },
          })
          await tasks.sendWelcomeEmail('send-email', {
            input: { userId: job.input.userId },
          })
        },
      },
    ],
    // Layer 3: Execute the work
    autoRun: [
      { cron: '* * * * *', queue: 'critical', limit: 100 },
      { cron: '*/5 * * * *', queue: 'default', limit: 50 },
      { cron: '*/2 * * * *', queue: 'onboarding', limit: 20 },
      { cron: '0 3 * * *', queue: 'nightly', limit: 5 },
    ],
  },
})
In this configuration, payment jobs go to critical and run every minute. General work uses default every five minutes. The onboarding workflow has its own queue checked every two minutes. And the nightly report schedule creates a job at 2 AM that gets picked up by the nightly runner at 3 AM.
Layer 2 — the Jobs themselves — happens at runtime, whenever your application code calls payload.jobs.queue() or a schedule triggers.
Practical patterns worth knowing
A few patterns that come up repeatedly when working with this system.
Heavy jobs in a separate queue. If you have resource-intensive work like report generation or image processing, give it a dedicated queue with a low limit. This prevents heavy jobs from blocking time-sensitive work in other queues.
Concurrency-controlled workflows for document operations. If multiple events might trigger workflows that operate on the same document (say, multiple edits in quick succession each triggering a sync), use concurrency keys to ensure only one job per document runs at a time.
Serverless execution with external cron. On platforms like Vercel, autoRun will not work because there is no persistent server. Instead, expose the jobs endpoint and trigger it from an external cron service (Vercel Cron, GitHub Actions, or a simple cloud scheduler). The endpoint approach works the same way — it just needs something external to call it on a schedule.
Schedule plus autoRun alignment. This is worth repeating because it is the most common source of confusion: the queue name in your task's schedule must exactly match the queue name in your autoRun or bin script configuration. If they do not match, jobs get created and sit in the collection forever without being picked up.
Wrapping up
Payload's Jobs Queue system is more capable than it first appears. The separation between defining work (Tasks and Workflows), creating work instances (Jobs), and executing work (Queues and runners) gives you a lot of flexibility in how background processing is structured.
The mental model to keep in mind: Tasks and Workflows describe what to do. Jobs are the actual instances that carry input and state. Queues group those instances and determine when they run. Schedules create Jobs automatically, but never execute them — that is always the runner's responsibility.
Once that layered model clicks, the individual configuration options and patterns become much easier to reason about.
Let me know in the comments if you have questions, and subscribe for more practical development guides.
Thanks, Matija