In NestJS projects, getting a scheduled task to run is usually not the hard part.
What becomes difficult is making it run reliably after the service goes into production.
For example:
- the previous execution has not finished, but the next cron trigger already starts
- after scaling to multiple instances, the same task gets executed more than once
- a long-running job exceeds the lock TTL and the lock expires too early
- the task fails, but the logs do not clearly show whether the problem came from the task itself or from lock handling
Once these issues hit a real business flow, the result is often much more serious than a noisy log line. It can turn into:
- duplicate synchronization
- duplicate message delivery
- repeated aggregation or repeated processing
Because of these recurring issues, we built @raytonx/nest-scheduler. The goal was to preserve the familiar development experience of @nestjs/schedule while standardizing the parts of scheduled task execution that are easiest to get wrong.
What this module solves
@raytonx/nest-scheduler is not meant to replace @nestjs/schedule.
It is meant to extend it with capabilities that are much more useful in production:
- decorator-based scheduling on top of `@nestjs/schedule`
- task reentry skipping in a single process
- optional Redis-based distributed locking
- standardized task and lock lifecycle logs
In other words, it focuses on one practical question:
How do we make scheduled tasks more controllable, observable, and debuggable in both single-instance and multi-instance environments?
Why we chose to wrap it ourselves
Across multiple projects, we kept running into the same pattern:
- During development, the only thing being checked was whether the task could run at all, not whether it should be allowed to reenter
- After moving from a single instance to multiple instances, the same cron job started running on more than one node
- Even after adding Redis locking, the logs still did not make it easy to see whether a task had succeeded, been skipped, failed, or lost lock ownership during execution
If every project solves these problems on its own, the result is usually one of two things:
- every codebase grows a similar but slightly different locking implementation
- logs and failure behavior become inconsistent, which makes production debugging more expensive
That is why we prefer turning these conventions into a shared module, so business code can go back to focusing on the task itself.
Quick start
If your NestJS project is already using @nestjs/schedule, the integration is straightforward:
```typescript
import { Module } from "@nestjs/common";
import { ScheduleModule } from "@nestjs/schedule";
import { SchedulerModule } from "@raytonx/nest-scheduler";

@Module({
  imports: [
    ScheduleModule.forRoot(),
    SchedulerModule.forRoot({
      isGlobal: true,
    }),
  ],
})
export class AppModule {}
```
If the project runs across multiple instances and needs distributed mutual exclusion, install the Redis-related dependencies as well:
```bash
pnpm add @raytonx/nest-scheduler @nestjs/schedule
pnpm add @raytonx/nest-redis ioredis
```
Cron / Interval / Timeout usage
The module provides three decorators corresponding to scheduled task types:
- `DistributedCron`
- `DistributedInterval`
- `DistributedTimeout`
Example:
```typescript
import { Injectable } from "@nestjs/common";
import {
  DistributedCron,
  DistributedInterval,
  DistributedTimeout,
} from "@raytonx/nest-scheduler";

@Injectable()
export class JobsService {
  @DistributedCron("0 * * * *")
  async syncReport(): Promise<void> {
    // do work
  }

  @DistributedInterval(10_000)
  async syncMetrics(): Promise<void> {
    // do work
  }

  @DistributedTimeout(5_000)
  async warmup(): Promise<void> {
    // do work
  }
}
```
Default behavior includes:
- if the previous execution of the same task has not finished, the new trigger is skipped
- if Redis is not installed or not connected, the module falls back to an in-memory process lock
- if Redis is available, Redis distributed locking is preferred by default
- long-running jobs automatically renew the Redis lock by default
- task start, finish, success, failure, and skip logs are emitted by default
- lock renewal failures or lock ownership loss during execution produce dedicated error logs
The value of these defaults is simple: in many projects, teams do not want to reimplement lock handling and execution logging for every single scheduled task. They want a safer default behavior first, and then fine-tune only when necessary.
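The reentry-skip default can be pictured as a small wrapper. This is a sketch of the idea only, not the module's internals, and `skipIfRunning` is a hypothetical name:

```typescript
// Sketch: skip a trigger when the previous execution of the same task
// has not finished, instead of letting runs overlap.
type AsyncTask = () => Promise<void>;

function skipIfRunning(task: AsyncTask): { run: AsyncTask; skipped: () => number } {
  let running = false;
  let skippedCount = 0;
  return {
    run: async () => {
      if (running) {
        // A trigger fired while the previous run is still in flight: skip it.
        skippedCount += 1;
        return;
      }
      running = true;
      try {
        await task();
      } finally {
        running = false;
      }
    },
    skipped: () => skippedCount,
  };
}
```

A wrapper like this is what "skip" means in the logs above: the trigger fires, nothing runs, and a skip event is recorded.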
In-memory locking and Redis distributed locking
In a single-instance setup, an in-memory lock is often enough to solve the most common issue: task reentry.
But once the service runs on multiple instances, the situation changes.
For example, if two replicas trigger the same cron job at the same time and there is no distributed lock, duplicate execution becomes possible.
That is where Redis comes in:
```typescript
import { Module } from "@nestjs/common";
import { ScheduleModule } from "@nestjs/schedule";
import { RedisModule } from "@raytonx/nest-redis";
import { SchedulerModule } from "@raytonx/nest-scheduler";

@Module({
  imports: [
    ScheduleModule.forRoot(),
    RedisModule.forRoot({
      isGlobal: true,
      connections: [
        {
          host: "127.0.0.1",
          port: 6379,
        },
      ],
    }),
    SchedulerModule.forRoot({
      isGlobal: true,
      driver: "auto",
    }),
  ],
})
export class AppModule {}
```
The driver rules are explicit:
- `auto`: prefer Redis, fall back to `memory` if Redis is unavailable
- `redis`: require Redis locking explicitly
- `memory`: always use only the in-process lock
This works well for different project stages:
- local development and single-instance environments can start with `memory`
- multi-instance deployments can switch to `auto` or `redis`
- tasks with especially strict execution consistency can force `redis`
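One way to apply this staging is to derive the driver from the environment. This is an assumption about project setup, not something the module prescribes:

```typescript
import { SchedulerModule } from "@raytonx/nest-scheduler";

// Local development stays on the in-process lock; deployed environments let
// the module prefer Redis and fall back to memory if Redis is unavailable.
SchedulerModule.forRoot({
  isGlobal: true,
  driver: process.env.NODE_ENV === "production" ? "auto" : "memory",
});
```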
Why long-running jobs need lock renewal
A common misconception with scheduled jobs is:
once a Redis lock is added, duplicate execution is no longer a concern
In reality, if the task execution time exceeds the lock TTL, the lock may expire before the job is finished.
That means another instance may acquire the same lock and start executing the same job again.
@raytonx/nest-scheduler enables automatic lock extension by default:
```typescript
SchedulerModule.forRoot({
  lock: {
    keyPrefix: "scheduler:",
    ttl: 30_000,
    retryAttempts: 0,
    retryDelay: 200,
    retryJitter: 50,
    autoExtend: true,
    extendInterval: 10_000,
  },
  logging: "default",
});
```
This is not about pretending scheduler execution can be made absolutely failure-proof.
It is about moving one of the most commonly overlooked risks in long-running jobs into the default behavior instead of leaving it to every business task to rediscover.
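The risk described above is also why renewal must be ownership-checked: an instance may only extend a lock it still holds. The sketch below simulates that pattern with an in-memory store. `FakeLockStore` is illustrative, not the module's API; with real Redis, acquisition is typically `SET key token NX PX ttl`, and extension or release is a Lua script that compares the stored token first.

```typescript
// Sketch of the ownership-checked "extend only if still owner" pattern
// that Redis-based lock renewal relies on.
class FakeLockStore {
  private entries = new Map<string, { token: string; expiresAt: number }>();

  acquire(key: string, token: string, ttlMs: number, now: number): boolean {
    const current = this.entries.get(key);
    if (current && current.expiresAt > now) return false; // still held by someone
    this.entries.set(key, { token, expiresAt: now + ttlMs });
    return true;
  }

  extend(key: string, token: string, ttlMs: number, now: number): boolean {
    const current = this.entries.get(key);
    // Refuse to extend if the lock expired or another owner holds it now.
    if (!current || current.expiresAt <= now || current.token !== token) return false;
    current.expiresAt = now + ttlMs;
    return true;
  }
}
```

With a 30s TTL and a 10s extend interval, a healthy long-running job keeps renewing and other instances never acquire the key; if renewal stops, the lock lapses and a late extension attempt fails, which is exactly the "lost lock ownership" case the logs call out.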
Why logs need to be standardized
When debugging scheduled jobs, the hardest part is often not whether logs exist.
It is whether the logs make it easy to understand what actually happened.
By default, the module emits structured JSON logs. Events fall into two categories:
- task events: `task_started`, `task_succeeded`, `task_failed`, `task_skipped`, `task_finished`
- lock events: `lock_acquired`, `lock_extended`, `lock_extend_failed`, `lock_expired_before_finish`, `lock_released`
The `logging` option supports:
- `"default"`: emit default task logs and critical lock anomaly logs only
- `"verbose"`: additionally emit lock acquisition, renewal, and release logs
- `false`: disable logging
The default-enabled events are `task_started`, `task_succeeded`, `task_failed`, `task_skipped`, `task_finished`, `lock_extend_failed`, and `lock_expired_before_finish`.
If you need the full lock lifecycle, set `logging` to `"verbose"` to additionally emit `lock_acquired`, `lock_extended`, and `lock_released`.
The value of standardized logging is that when something goes wrong in production, you can separate these cases much faster:
- the task itself failed
- the task was intentionally skipped
- lock renewal failed
- the task lost lock ownership while still running
For example, when the Redis lock TTL expires before a task finishes, you will usually see:
- `lock_extend_failed`
- `lock_expired_before_finish`
- a following `task_failed`
- and finally `task_finished`
That makes consistency problems in scheduled execution much easier to investigate.
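A fixed event vocabulary is what makes that triage mechanical rather than a log-grepping exercise. As a toy illustration, a helper like the hypothetical `triage` below (not part of the module) can map an ordered event sequence straight to one of the cases listed above:

```typescript
// Illustrative only: classify a task run from the standardized event names.
// More specific lock anomalies are checked before the generic failure case.
function triage(events: string[]): string {
  if (events.includes("lock_expired_before_finish")) return "lost lock ownership mid-run";
  if (events.includes("lock_extend_failed")) return "lock renewal failed";
  if (events.includes("task_skipped")) return "intentionally skipped";
  if (events.includes("task_failed")) return "task itself failed";
  return "ok";
}
```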
Decorator-level override options
In addition to module-level defaults, individual tasks can override behavior with finer-grained options:
```typescript
@DistributedCron("0 * * * *", {
  name: "report-job",
  lockKey: "jobs:report",
  driver: "redis",
  ttl: 60_000,
  skipIfLocked: true,
  logging: "verbose",
})
async syncReport(): Promise<void> {
  // do work
}
```
Summary
At its core, @raytonx/nest-scheduler adds a production-oriented execution safety layer on top of @nestjs/schedule:
- prevent task reentry in a single process
- support Redis distributed locking across multiple instances
- auto-renew locks for long-running jobs
- provide a unified log structure for task and lock events
If your NestJS project is already showing signs like these:
- scheduled jobs occasionally run more than once
- cron behavior becomes unstable after scaling
- it is hard to tell whether a scheduler issue came from business logic or locking
then moving those concerns into a shared execution module is usually safer than letting every task solve them separately.
Install it with:
```bash
pnpm add @raytonx/nest-scheduler @nestjs/schedule
```
If you need distributed mutual exclusion across multiple instances, also install:
```bash
pnpm add @raytonx/nest-redis ioredis
```
For many teams, the hard part of scheduled jobs is never “how do we write a cron expression.”
It is “how do we make scheduled execution behave reliably in a real production environment.”
That is exactly why we built this module.