saladVersion: v1.1 $base: "https://w3id.org/cwl/cwl#" $namespaces: cwl: "https://w3id.org/cwl/cwl#" rdfs: "http://www.w3.org/2000/01/rdf-schema#" $graph: - name: "WorkflowDoc" type: documentation doc: - | # Common Workflow Language (CWL) Workflow Description, v1.2.0-dev3 This version: * https://w3id.org/cwl/v1.2.0-dev3/ Current version: * https://w3id.org/cwl/ - "\n\n" - {$include: contrib.md} - "\n\n" - | # Abstract This specification defines the Common Workflow Language (CWL) Workflow description, a vendor-neutral standard for representing analysis tasks where a sequence of operations is described using a directed graph of operations to transform input to output. CWL is portable across a variety of computing platforms. - {$include: intro.md} - | ## Introduction to CWL Workflow standard v1.2.0-dev3 This specification represents the latest development release from the CWL group. Since the v1.1 release, v1.2.0-dev3 introduces the following updates to the CWL Workflow standard. Documents should use `cwlVersion: v1.2.0-dev3` to make use of new syntax and features introduced in v1.2.0-dev3. Existing v1.1 documents should be trivially updatable by changing `cwlVersion`; however, CWL documents that relied on previously undefined or underspecified behavior may have slightly different behavior in v1.2.0-dev3. ## Changelog * Adds `when` field to [WorkflowStep](#WorkflowStep) for conditional execution * Adds `pickValue` field to [WorkflowStepInput](#WorkflowStepInput) and [WorkflowOutputParameter](#WorkflowOutputParameter) for selecting among null and non-null source values * Adds abstract [Operation](#Operation) that can be used as a no-op stand-in to describe abstract workflows. * [Workflow](#Workflow), [ExpressionTool](#ExpressionTool) and [Operation](#Operation) can now express `intent` with an identifier for the type of computational operation. See also the [CWL Command Line Tool Description, v1.2.0-dev3 changelog](CommandLineTool.html#Changelog). ## Purpose The Common Workflow Language Workflow Description expresses workflows for data-intensive science, such as Bioinformatics, Chemistry, Physics, and Astronomy. This specification is intended to define a data and execution model for Workflows that can be implemented on top of a variety of computing platforms, ranging from an individual workstation to cluster, grid, cloud, and high performance computing systems. Details related to execution of these workflows not laid out in this specification are open to interpretation by the computing platform implementing this specification. - {$include: concepts.md} - name: ExpressionToolOutputParameter type: record extends: OutputParameter fields: - name: type type: - CWLType - OutputRecordSchema - OutputEnumSchema - OutputArraySchema - string - type: array items: - CWLType - OutputRecordSchema - OutputEnumSchema - OutputArraySchema - string jsonldPredicate: "_id": "sld:type" "_type": "@vocab" refScope: 2 typeDSL: True doc: | Specify valid types of data that may be assigned to this parameter. - name: WorkflowInputParameter type: record extends: InputParameter docParent: "#Workflow" fields: - name: type type: - CWLType - InputRecordSchema - InputEnumSchema - InputArraySchema - string - type: array items: - CWLType - InputRecordSchema - InputEnumSchema - InputArraySchema - string jsonldPredicate: "_id": "sld:type" "_type": "@vocab" refScope: 2 typeDSL: True doc: | Specify valid types of data that may be assigned to this parameter. - name: inputBinding type: InputBinding? doc: | Deprecated.
Preserved for v1.0 backwards compatibility. Will be removed in CWL v2.0. Use `WorkflowInputParameter.loadContents` instead. jsonldPredicate: "cwl:inputBinding" - type: record name: ExpressionTool extends: Process specialize: - specializeFrom: InputParameter specializeTo: WorkflowInputParameter - specializeFrom: OutputParameter specializeTo: ExpressionToolOutputParameter documentRoot: true doc: | An ExpressionTool is a type of Process object that can be run by itself or as a Workflow step. It executes a pure JavaScript expression that has access to the same input parameters as a workflow. It is meant to be used sparingly as a way to isolate complex JavaScript expressions that need to operate on input data and produce some result; perhaps just a rearrangement of the inputs. No Docker software container is required or allowed. fields: - name: class jsonldPredicate: "_id": "@type" "_type": "@vocab" type: string - name: expression type: Expression doc: | The expression to execute. The expression must return a JSON object which matches the output parameters of the ExpressionTool. - name: LinkMergeMethod type: enum docParent: "#WorkflowStepInput" doc: The input link merge method, described in [WorkflowStepInput](#WorkflowStepInput). symbols: - merge_nested - merge_flattened - name: PickValueMethod type: enum docParent: "#WorkflowStepInput" doc: | Picking non-null values among inbound data links, described in [WorkflowStepInput](#WorkflowStepInput). symbols: - first_non_null - the_only_non_null - all_non_null - name: WorkflowOutputParameter type: record extends: OutputParameter docParent: "#Workflow" doc: | Describe an output parameter of a workflow. The parameter must be connected to one or more parameters defined in the workflow that will provide the value of the output parameter. It is legal to connect a WorkflowInputParameter to a WorkflowOutputParameter. See [WorkflowStepInput](#WorkflowStepInput) for discussion of `linkMerge` and `pickValue`. fields: - name: outputSource doc: | Specifies one or more workflow parameters that supply the value of the output parameter. jsonldPredicate: "_id": "cwl:outputSource" "_type": "@id" refScope: 0 type: - string? - string[]? - name: linkMerge type: ["null", LinkMergeMethod] jsonldPredicate: "cwl:linkMerge" default: merge_nested doc: | The method to use to merge multiple sources into a single array. If not specified, the default method is "merge_nested". - name: pickValue type: ["null", PickValueMethod] jsonldPredicate: "cwl:pickValue" doc: | The method to use to choose non-null elements among multiple sources. - name: type type: - CWLType - OutputRecordSchema - OutputEnumSchema - OutputArraySchema - string - type: array items: - CWLType - OutputRecordSchema - OutputEnumSchema - OutputArraySchema - string jsonldPredicate: "_id": "sld:type" "_type": "@vocab" refScope: 2 typeDSL: True doc: | Specify valid types of data that may be assigned to this parameter. - name: Sink type: record abstract: true fields: - name: source doc: | Specifies one or more workflow parameters that will provide input to the underlying step parameter. jsonldPredicate: "_id": "cwl:source" "_type": "@id" refScope: 2 type: - string? - string[]? - name: linkMerge type: LinkMergeMethod? jsonldPredicate: "cwl:linkMerge" default: merge_nested doc: | The method to use to merge multiple inbound links into a single array. If not specified, the default method is "merge_nested".
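For example, a minimal sketch of a step input that merges the outputs of two upstream steps into one flat array (the step, tool, and parameter names here are illustrative, and [MultipleInputFeatureRequirement](#MultipleInputFeatureRequirement) must also be declared):

```
requirements:
  MultipleInputFeatureRequirement: {}
steps:
  combine:
    run: combine-reports.cwl              # hypothetical tool description
    in:
      reports:
        source: [qc/report, align/report] # two upstream step outputs
        linkMerge: merge_flattened        # concatenate array sources into a single flat array
    out: [summary]
```

With `merge_nested` instead, the `reports` value would be a two-element array containing one entry per listed source.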
- name: pickValue type: ["null", PickValueMethod] jsonldPredicate: "cwl:pickValue" doc: | The method to use to choose non-null elements among multiple sources. - type: record name: WorkflowStepInput extends: [Identified, Sink, LoadContents, Labeled] docParent: "#WorkflowStep" doc: | The input of a workflow step connects an upstream parameter (from the workflow inputs, or the outputs of other workflow steps) with the input parameters of the process specified by the `run` field. Only input parameters declared by the target process will be passed through at runtime to the process, though additional parameters may be specified (for use within `valueFrom` expressions for instance) - unconnected or unused parameters do not represent an error condition. # Input object A WorkflowStepInput object must contain an `id` field in the form `#fieldname` or `#prefix/fieldname`. When the `id` field contains a slash `/`, the field name consists of the characters following the final slash (the prefix portion may contain one or more slashes to indicate scope). This defines a field of the workflow step input object with the value of the `source` parameter(s). # Merging multiple inbound data links To merge multiple inbound data links, [MultipleInputFeatureRequirement](#MultipleInputFeatureRequirement) must be specified in the workflow or workflow step requirements. If the sink parameter is an array, or named in a [workflow scatter](#WorkflowStep) operation, there may be multiple inbound data links listed in the `source` field. The values from the input links are merged depending on the method specified in the `linkMerge` field. If not specified, the default method is "merge_nested". * **merge_nested** The input must be an array consisting of exactly one entry for each input link. If "merge_nested" is specified with a single link, the value from the link must be wrapped in a single-item list. * **merge_flattened** 1. The source and sink parameters must be compatible types, or the source type must be compatible with a single element from the "items" type of the destination array parameter. 2. Source parameters which are arrays are concatenated. Source parameters which are single element types are appended as single elements. # Picking non-null values among inbound data links If present, `pickValue` specifies how to pick non-null values among inbound data links. `pickValue` is evaluated 1. Once all source values from upstream steps or parameters are available. 2. After `linkMerge`. 3. Before `scatter` or `valueFrom`. This is specifically intended to be useful in combination with [conditional execution](#WorkflowStep), where several upstream steps may be connected to a single input (`source` is a list), and skipped steps produce null values. Static type checkers should check for type consistency after inferring what the type will be after `pickValue` is applied, just as they do currently for `linkMerge`. * **first_non_null** For the first level of a list input, pick the first non-null element. The result is a scalar. It is an error if there is no non-null element. Examples: * `[null, x, null, y] -> x` * `[null, [null], null, y] -> [null]` * `[null, null, null] -> Runtime Error` *Intended use case*: If-else pattern where the value comes either from a conditional step or from a default or fallback value. The conditional step(s) should be placed first in the list. * **the_only_non_null** For the first level of a list input, pick the single non-null element. The result is a scalar.
It is an error if there is more than one non-null element. Examples: * `[null, x, null] -> x` * `[null, x, null, y] -> Runtime Error` * `[null, [null], null] -> [null]` * `[null, null, null] -> Runtime Error` *Intended use case*: Switch type patterns where the developer considers more than one active code path as a workflow error (possibly indicating an error in writing `when` condition expressions). * **all_non_null** For the first level of a list input, pick all non-null values. The result is a list, which may be empty. Examples: * `[null, x, null] -> [x]` * `[x, null, y] -> [x, y]` * `[null, [x], [null]] -> [[x], [null]]` * `[null, null, null] -> []` *Intended use case*: It is valid to have more than one source, but sources are conditional, so null sources (from skipped steps) should be filtered out. fields: - name: default type: ["null", Any] doc: | The default value for this parameter to use if either there is no `source` field, or the value produced by the `source` is `null`. The default must be applied prior to scattering or evaluating `valueFrom`. jsonldPredicate: _id: "sld:default" noLinkCheck: true - name: valueFrom type: - "null" - string - Expression jsonldPredicate: "cwl:valueFrom" doc: | To use valueFrom, [StepInputExpressionRequirement](#StepInputExpressionRequirement) must be specified in the workflow or workflow step requirements. If `valueFrom` is a constant string value, use this as the value for this input parameter. If `valueFrom` is a parameter reference or expression, it must be evaluated to yield the actual value to be assigned to the input field. The `self` value in the parameter reference or expression must be 1. `null` if there is no `source` field 2. the value of the parameter(s) specified in the `source` field when this workflow input parameter **is not** specified in this workflow step's `scatter` field. 3. an element of the parameter specified in the `source` field when this workflow input parameter **is** specified in this workflow step's `scatter` field. The value of `inputs` in the parameter reference or expression must be the input object to the workflow step after assigning the `source` values, applying `default`, and then scattering. The order of evaluating `valueFrom` among step input parameters is undefined and the result of evaluating `valueFrom` on a parameter must not be visible to evaluation of `valueFrom` on other parameters. - type: record name: WorkflowStepOutput docParent: "#WorkflowStep" extends: Identified doc: | Associate an output parameter of the underlying process with a workflow parameter. The workflow parameter (given in the `id` field) may be used as a `source` to connect with input parameters of other workflow steps, or with an output parameter of the process. A unique identifier for this workflow output parameter. This is the identifier to use in the `source` field of `WorkflowStepInput` to connect the output value to downstream parameters. - name: ScatterMethod type: enum docParent: "#WorkflowStep" doc: The scatter method, as described in [workflow step scatter](#WorkflowStep). symbols: - dotproduct - nested_crossproduct - flat_crossproduct - name: WorkflowStep type: record extends: [Identified, Labeled, sld:Documented] docParent: "#Workflow" doc: | A workflow step is an executable element of a workflow. It specifies the underlying process implementation (such as `CommandLineTool` or another `Workflow`) in the `run` field and connects the input and output parameters of the underlying process to workflow parameters.
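For example, a minimal sketch of a single step (the file and parameter names are illustrative): it runs a hypothetical `CommandLineTool`, connects a workflow input to one of the tool's inputs, and exposes one tool output to downstream steps:

```
steps:
  untar:
    run: tar-extract.cwl         # the underlying process, a hypothetical CommandLineTool
    in:
      archive: input_tarball     # connect the workflow input `input_tarball` to the tool input `archive`
    out: [extracted_file]        # downstream steps may reference this as `untar/extracted_file`
```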
# Scatter/gather To use scatter/gather, [ScatterFeatureRequirement](#ScatterFeatureRequirement) must be specified in the workflow or workflow step requirements. A "scatter" operation specifies that the associated workflow step or subworkflow should execute separately over a list of input elements. Each job making up a scatter operation is independent and may be executed concurrently. The `scatter` field specifies one or more input parameters which will be scattered. An input parameter may be listed more than once. The declared type of each input parameter implicitly becomes an array of items of the input parameter type. If a parameter is listed more than once, it becomes a nested array. As a result, upstream parameters which are connected to scattered parameters must be arrays. All output parameter types are also implicitly wrapped in arrays. Each job in the scatter results in an entry in the output array. If any scattered parameter runtime value is an empty array, all outputs are set to empty arrays and no work is done for the step, according to applicable scattering rules. If `scatter` declares more than one input parameter, `scatterMethod` describes how to decompose the input into a discrete set of jobs. * **dotproduct** specifies that each of the input arrays is aligned and one element is taken from each array to construct each job. It is an error if the input arrays are not all the same length. * **nested_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output must be nested arrays for each level of scattering, in the order that the input arrays are listed in the `scatter` field. * **flat_crossproduct** specifies the Cartesian product of the inputs, producing a job for every combination of the scattered inputs. The output arrays must be flattened to a single level, but otherwise listed in the order that the input arrays are listed in the `scatter` field. # Conditional execution Conditional execution makes execution of a step conditional on an expression. A step that is not executed is "skipped". A skipped step produces `null` for all output parameters. The condition is evaluated after `scatter`, using the input object of each individual scatter job. This means over a set of scatter jobs, some may be executed and some may be skipped. When the results are gathered, skipped steps must be `null` in the output arrays. The `when` field controls conditional execution. This is an expression that must be evaluated with `inputs` bound to the step input object (or individual scatter job), and returns a boolean value. It is an error if this expression returns a value other than `true` or `false`. # Subworkflows To specify a nested workflow as part of a workflow step, [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) must be specified in the workflow or workflow step requirements. It is a fatal error if a workflow directly or indirectly invokes itself as a subworkflow (recursive workflows are not allowed). fields: - name: in type: WorkflowStepInput[] jsonldPredicate: _id: "cwl:in" mapSubject: id mapPredicate: source doc: | Defines the input parameters of the workflow step. The process is ready to run when all required input parameters are associated with concrete values. Input parameters include a schema for each parameter which is used to validate the input object. It may also be used to build a user interface for constructing the input object.
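For example, a minimal sketch of an `in` block (parameter and step names are illustrative) showing the shorthand form, an explicit `source`, and a `default`:

```
in:
  reference: genome_index          # shorthand for `source: genome_index`
  reads:
    source: trim/trimmed_reads     # output of an upstream step named `trim`
  threads:
    default: 4                     # used when there is no `source` or the source value is null
```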
- name: out type: - type: array items: [string, WorkflowStepOutput] jsonldPredicate: _id: "cwl:out" _type: "@id" identity: true doc: | Defines the parameters representing the output of the process. May be used to generate and/or validate the output object. - name: requirements type: ProcessRequirement[]? jsonldPredicate: _id: "cwl:requirements" mapSubject: class doc: | Declares requirements that apply to either the runtime environment or the workflow engine that must be met in order to execute this workflow step. If an implementation cannot satisfy all requirements, or a requirement is listed which is not recognized by the implementation, it is a fatal error and the implementation must not attempt to run the process, unless overridden at user option. - name: hints type: Any[]? jsonldPredicate: _id: "cwl:hints" noLinkCheck: true mapSubject: class doc: | Declares hints applying to either the runtime environment or the workflow engine that may be helpful in executing this workflow step. It is not an error if an implementation cannot satisfy all hints; however, the implementation may report a warning. - name: run type: [string, Process] jsonldPredicate: _id: "cwl:run" _type: "@id" subscope: run doc: | Specifies the process to run. - name: when type: - "null" - Expression jsonldPredicate: "cwl:when" doc: | If defined, only run the step when the expression evaluates to `true`. If `false`, the step is skipped. A skipped step produces a `null` on each output. - name: scatter type: - string? - string[]? jsonldPredicate: "_id": "cwl:scatter" "_type": "@id" "_container": "@list" refScope: 0 - name: scatterMethod doc: | Required if `scatter` is an array of more than one element. type: ScatterMethod? jsonldPredicate: "_id": "cwl:scatterMethod" "_type": "@vocab" - name: Workflow type: record extends: "#Process" documentRoot: true specialize: - specializeFrom: InputParameter specializeTo: WorkflowInputParameter - specializeFrom: OutputParameter specializeTo: WorkflowOutputParameter doc: | A workflow describes a set of **steps** and the **dependencies** between those steps. When a step produces output that will be consumed by a second step, the first step is a dependency of the second step. When there is a dependency, the workflow engine must execute the preceding step and wait for it to successfully produce output before executing the dependent step. If two steps are defined in the workflow graph that are not directly or indirectly dependent, these steps are **independent**, and may execute in any order or execute concurrently. A workflow is complete when all steps have been executed. Dependencies between parameters are expressed using the `source` field on [workflow step input parameters](#WorkflowStepInput) and [workflow output parameters](#WorkflowOutputParameter). The `source` field expresses the dependency of one parameter on another such that when a value is associated with the parameter specified by `source`, that value is propagated to the destination parameter. When all data links inbound to a given step are fulfilled, the step is ready to execute. ## Workflow success and failure A completed step must result in one of `success`, `temporaryFailure` or `permanentFailure` states. An implementation may choose to retry a step execution which resulted in `temporaryFailure`. An implementation may choose to either continue running other steps of a workflow, or terminate immediately upon `permanentFailure`.
* If any step of a workflow execution results in `permanentFailure`, then the workflow status is `permanentFailure`. * If one or more steps result in `temporaryFailure` and all other steps complete with `success` or are not executed, then the workflow status is `temporaryFailure`. * If all workflow steps are executed and complete with `success`, then the workflow status is `success`. # Extensions [ScatterFeatureRequirement](#ScatterFeatureRequirement) and [SubworkflowFeatureRequirement](#SubworkflowFeatureRequirement) are available as standard [extensions](#Extensions_and_Metadata) to core workflow semantics. fields: - name: "class" jsonldPredicate: "_id": "@type" "_type": "@vocab" type: string - name: steps doc: | The individual steps that make up the workflow. Each step is executed when all of its input data links are fulfilled. An implementation may choose to execute the steps in a different order than listed and/or execute steps concurrently, provided that dependencies between steps are met. type: - type: array items: "#WorkflowStep" jsonldPredicate: mapSubject: id - type: record name: SubworkflowFeatureRequirement extends: ProcessRequirement doc: | Indicates that the workflow platform must support nested workflows in the `run` field of [WorkflowStep](#WorkflowStep). fields: - name: "class" type: "string" doc: "Always 'SubworkflowFeatureRequirement'" jsonldPredicate: "_id": "@type" "_type": "@vocab" - name: ScatterFeatureRequirement type: record extends: ProcessRequirement doc: | Indicates that the workflow platform must support the `scatter` and `scatterMethod` fields of [WorkflowStep](#WorkflowStep). fields: - name: "class" type: "string" doc: "Always 'ScatterFeatureRequirement'" jsonldPredicate: "_id": "@type" "_type": "@vocab" - name: MultipleInputFeatureRequirement type: record extends: ProcessRequirement doc: | Indicates that the workflow platform must support multiple inbound data links listed in the `source` field of [WorkflowStepInput](#WorkflowStepInput). fields: - name: "class" type: "string" doc: "Always 'MultipleInputFeatureRequirement'" jsonldPredicate: "_id": "@type" "_type": "@vocab" - type: record name: StepInputExpressionRequirement extends: ProcessRequirement doc: | Indicates that the workflow platform must support the `valueFrom` field of [WorkflowStepInput](#WorkflowStepInput). fields: - name: "class" type: "string" doc: "Always 'StepInputExpressionRequirement'" jsonldPredicate: "_id": "@type" "_type": "@vocab" - {$import: Operation.yml}