Workflows let you chain multiple jobs into ordered pipelines where each job can depend on the successful completion of one or more others. Instead of setting up external orchestration, you describe the dependency graph through the API and let the scheduler enforce execution order, stop or continue on failure, and skip runs during configured exclusion windows.

Creating a workflow

A workflow groups existing job definitions by reference. Each entry in the jobs array points to a job ID you have already created and declares which other jobs must finish before it starts.

POST /api/v1/scheduler/workflows
{
  "name": "Nightly data pipeline",
  "jobs": [
    {
      "jobId": "a1b2c3d4-0000-0000-0000-000000000001",
      "dependsOn": [],
      "onFailure": "Stop"
    },
    {
      "jobId": "a1b2c3d4-0000-0000-0000-000000000002",
      "dependsOn": ["a1b2c3d4-0000-0000-0000-000000000001"],
      "onFailure": "Stop"
    },
    {
      "jobId": "a1b2c3d4-0000-0000-0000-000000000003",
      "dependsOn": ["a1b2c3d4-0000-0000-0000-000000000002"],
      "onFailure": "Continue"
    }
  ]
}
Field | Description
name | A descriptive name for the workflow.
jobs[].jobId | The ID of an existing job definition.
jobs[].dependsOn | Array of job IDs that must complete before this job runs. Pass an empty array for the first job in the chain.
jobs[].onFailure | Stop halts the entire workflow when this job fails. Continue allows downstream jobs to proceed regardless.
A successful request returns an identifier string for the new workflow, which you can use to reference it later.
Each jobId in the workflow must refer to a job that already exists. Create the individual jobs with POST /api/v1/scheduler/jobs before creating the workflow that references them.
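
As a concrete client-side illustration, here is a minimal sketch that sends the request above using fetch (available globally in Node 18+). The base URL is a hypothetical placeholder, and the assumption that the response body carries the identifier as plain text should be checked against your deployment's actual response shape.

// Minimal sketch: create the workflow above over HTTP (Node 18+ fetch).
// "https://scheduler.example.com" is a hypothetical placeholder host.
const BASE_URL = "https://scheduler.example.com";

async function createWorkflow(): Promise<string> {
  const response = await fetch(`${BASE_URL}/api/v1/scheduler/workflows`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      name: "Nightly data pipeline",
      jobs: [
        { jobId: "a1b2c3d4-0000-0000-0000-000000000001", dependsOn: [], onFailure: "Stop" },
        { jobId: "a1b2c3d4-0000-0000-0000-000000000002",
          dependsOn: ["a1b2c3d4-0000-0000-0000-000000000001"], onFailure: "Stop" },
      ],
    }),
  });
  if (!response.ok) throw new Error(`Workflow creation failed: ${response.status}`);
  // Assumption: the response body is the workflow identifier string.
  return response.text();
}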

Job dependencies

You can also attach dependency rules to an individual job outside of a workflow. This is useful when a job should only run after certain other jobs have succeeded, without formalising a named workflow.

POST /api/v1/scheduler/jobs/{jobId}/dependencies
{
  "dependsOnJobIds": [
    "a1b2c3d4-0000-0000-0000-000000000001",
    "a1b2c3d4-0000-0000-0000-000000000002"
  ],
  "condition": "AllSucceeded"
}
Field | Description
dependsOnJobIds | Array of job IDs that must satisfy the condition before this job executes.
condition | AllSucceeded: every listed job must have run successfully. AnySucceeded: at least one listed job must have succeeded.
Use AllSucceeded when the job requires data or state produced by each dependency. Use AnySucceeded when the job can proceed as long as any one upstream source is available.
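
As with workflows, a short sketch of the call, assuming the same hypothetical base URL and a Node 18+ fetch environment:

// Minimal sketch: attach dependency rules to an existing job definition.
// BASE_URL is a hypothetical placeholder for your deployment.
const BASE_URL = "https://scheduler.example.com";

async function addDependencies(
  jobId: string,
  dependsOnJobIds: string[],
  condition: "AllSucceeded" | "AnySucceeded",
): Promise<void> {
  const response = await fetch(`${BASE_URL}/api/v1/scheduler/jobs/${jobId}/dependencies`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ dependsOnJobIds, condition }),
  });
  if (!response.ok) throw new Error(`Adding dependencies failed: ${response.status}`);
}

// Example: gate a report job on two upstream loads (IDs are illustrative).
// await addDependencies(reportJobId, [loadJobA, loadJobB], "AllSucceeded");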

Exclusion rules

Exclusion rules prevent a job from running on specific dates, days of the week, or during a daily time window. They are useful for honouring holidays, avoiding weekends, or protecting maintenance windows.

POST /api/v1/scheduler/jobs/{jobId}/exclusions
{
  "excludedDates": ["2025-12-25", "2026-01-01"],
  "excludedDaysOfWeek": ["Saturday", "Sunday"],
  "timeRange": {
    "start": "22:00",
    "end": "06:00"
  }
}
Field | Description
excludedDates | ISO 8601 dates (yyyy-MM-dd) on which the job must not run.
excludedDaysOfWeek | Day names (Monday through Sunday) to skip every week.
timeRange.start | Start of the daily blocked window in HH:mm (24-hour, UTC).
timeRange.end | End of the daily blocked window in HH:mm. Use a time earlier than start to express an overnight window (e.g. 22:00 to 06:00).
All three fields are optional — supply only the ones that apply to your situation.
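
The overnight wrap rule for timeRange is the one detail worth double-checking. The sketch below is not part of the API; it just mirrors the documented semantics in a standalone helper so you can sanity-check a window before configuring it.

// Sketch: preview whether a UTC time falls inside a blocked window, following
// the documented rule that end < start wraps the window past midnight.
function isInBlockedWindow(utcTime: string, start: string, end: string): boolean {
  // Compare "HH:mm" strings as minutes since midnight (UTC).
  const toMinutes = (t: string): number => {
    const [h, m] = t.split(":").map(Number);
    return h * 60 + m;
  };
  const now = toMinutes(utcTime);
  const s = toMinutes(start);
  const e = toMinutes(end);
  // A window whose end is earlier than its start wraps past midnight.
  return s <= e ? now >= s && now < e : now >= s || now < e;
}

// "22:00" to "06:00" blocks 23:30 and 05:00 but not 12:00:
console.log(isInBlockedWindow("23:30", "22:00", "06:00")); // true
console.log(isInBlockedWindow("05:00", "22:00", "06:00")); // true
console.log(isInBlockedWindow("12:00", "22:00", "06:00")); // false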

Retries and timeouts

Set retry and timeout behaviour per job in the retryPolicy and timeoutSeconds fields of the job definition. These settings override the global defaults from the Scheduler.Retry configuration.
{
  "retryPolicy": {
    "maxAttempts": 5,
    "delaySeconds": 120
  },
  "timeoutSeconds": 600
}
Field | Default | Description
retryPolicy.maxAttempts | 3 | Maximum number of attempts before the execution is marked Failed.
retryPolicy.delaySeconds | 60 | Wait time between retry attempts, in seconds.
timeoutSeconds | 300 | Maximum wall-clock time a single execution may run before the scheduler cancels it.
Every retry attempt is recorded as a separate entry in the execution history, so you can inspect which attempt failed and why. After all attempts are exhausted, the job's execution status moves to Failed and the job appears in the failed-jobs list at GET /api/v1/scheduler/jobs/failed.
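
A useful consequence of these settings: the worst-case wall-clock time a scheduled run can occupy is every attempt hitting the timeout plus the delays between attempts. A quick sketch of that arithmetic, assuming each attempt runs to the full timeout:

// Sketch: worst-case wall-clock time implied by a retry policy, assuming every
// attempt runs up to the timeout and each retry waits the full delay.
function worstCaseSeconds(
  maxAttempts: number,
  delaySeconds: number,
  timeoutSeconds: number,
): number {
  // maxAttempts executions, with maxAttempts - 1 delays between them.
  return maxAttempts * timeoutSeconds + (maxAttempts - 1) * delaySeconds;
}

// The values from the example above: 5 attempts, 120 s delay, 600 s timeout.
console.log(worstCaseSeconds(5, 120, 600)); // 3480 seconds, i.e. 58 minutes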

Worked example: a 3-job pipeline

The following example shows a complete pipeline: a data-fetch job runs first, a processing job runs only after the fetch succeeds, and a notification job runs after processing (but the workflow continues even if the notification fails).

Step 1: create the three jobs

Create each job individually. Below is the fetch job; repeat the pattern for the others, adjusting name, jobType, and jobConfiguration as needed.
POST /api/v1/scheduler/jobs

{
  "name": "Fetch source data",
  "jobType": "HttpCall",
  "scheduleType": "Cron",
  "scheduleExpression": "0 1 * * *",
  "jobConfiguration": {
    "url": "https://your-datasource.example.com/api/extract",
    "method": "POST",
    "headers": { "Content-Type": "application/json" },
    "body": "{\"date\":\"{{Today}}\"}",
    "timeoutSeconds": 60
  },
  "retryPolicy": { "maxAttempts": 3, "delaySeconds": 60 },
  "isEnabled": true,
  "createdBy": "pipeline-setup"
}
Step 2: create the workflow

Once you have the IDs returned from the three create calls, compose them into a workflow:
POST /api/v1/scheduler/workflows

{
  "name": "Nightly ETL pipeline",
  "jobs": [
    {
      "jobId": "aaaaaaaa-0000-0000-0000-000000000001",
      "dependsOn": [],
      "onFailure": "Stop"
    },
    {
      "jobId": "bbbbbbbb-0000-0000-0000-000000000002",
      "dependsOn": ["aaaaaaaa-0000-0000-0000-000000000001"],
      "onFailure": "Stop"
    },
    {
      "jobId": "cccccccc-0000-0000-0000-000000000003",
      "dependsOn": ["bbbbbbbb-0000-0000-0000-000000000002"],
      "onFailure": "Continue"
    }
  ]
}
In this configuration:
  • The fetch job runs first. If it fails, the workflow stops immediately (onFailure: "Stop").
  • The processing job runs only after the fetch job succeeds. A failure here also halts the workflow.
  • The notification job runs after processing. If it fails, the workflow still records the run as complete rather than blocking future runs (onFailure: "Continue").
Keep individual jobs reusable across multiple workflows. A single StoredProcedure or CodeExecution job can participate in several pipelines simply by being referenced from more than one workflow definition.
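
Putting both steps together, here is a sketch of the whole setup in one script. It assumes the same hypothetical base URL, a Node 18+ fetch environment, and that each create call returns the new resource's ID as a plain string (check your deployment's actual response envelope); the names of the second and third jobs are illustrative.

// Sketch: the worked example end to end. BASE_URL is a hypothetical placeholder.
const BASE_URL = "https://scheduler.example.com";

async function post(path: string, body: unknown): Promise<string> {
  const response = await fetch(`${BASE_URL}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!response.ok) throw new Error(`${path} failed: ${response.status}`);
  // Assumption: the response body is the new resource's identifier string.
  return response.text();
}

async function buildPipeline(): Promise<void> {
  // Step 1: create the three jobs (full bodies as shown above) and keep the IDs.
  const fetchId = await post("/api/v1/scheduler/jobs", { name: "Fetch source data" /* ...full definition... */ });
  const processId = await post("/api/v1/scheduler/jobs", { name: "Process source data" /* ... */ });
  const notifyId = await post("/api/v1/scheduler/jobs", { name: "Send completion notification" /* ... */ });

  // Step 2: wire the dependency chain from the returned IDs.
  await post("/api/v1/scheduler/workflows", {
    name: "Nightly ETL pipeline",
    jobs: [
      { jobId: fetchId, dependsOn: [], onFailure: "Stop" },
      { jobId: processId, dependsOn: [fetchId], onFailure: "Stop" },
      { jobId: notifyId, dependsOn: [processId], onFailure: "Continue" },
    ],
  });
}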