Viash Component Config (schema version 0.9.1)
A Viash configuration is a YAML file which contains metadata to describe the behaviour and build target(s) of a component.
We commonly name this file config.vsh.yaml in our examples, but you can name it however you choose.
Name of the component and the filename of the executable when built with viash build.
A clean version of the component's name. This is only used for documentation.
The license of the package.
A list of authors. An author must at least have a name, but can also have a list of roles, an e-mail address, and a map of custom properties.
Suggested values for roles are:
| Role | Abbrev. | Description |
|---|---|---|
| maintainer | mnt | for the maintainer of the code. Ideally, exactly one maintainer is specified. |
| author | aut | for persons who have made substantial contributions to the software. |
| contributor | ctb | for persons who have made smaller contributions (such as code patches). |
| datacontributor | dtc | for persons or organisations that contributed data sets for the software. |
| copyrightholder | cph | for all copyright holders. This is a legal concept so should use the legal name of an institution or corporate body. |
| funder | fnd | for persons or organizations that furnished financial support for the development of the software. |
The full list of roles is extremely comprehensive.
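A hypothetical authors entry illustrating these fields (the name, e-mail address, and custom properties are invented):

```yaml
authors:
  - name: Jane Doe                 # required: the author's full name
    email: jane.doe@example.org    # hypothetical e-mail address
    roles: [ author, maintainer ]  # roles from the table above
    info:                          # free-form map of custom properties
      orcid: "0000-0000-0000-0000"
```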
Allows setting a component to active, deprecated or disabled.
Computational requirements related to running the component.
3 nested properties
The maximum number of (logical) cpus a component is allowed to use.
A list of commands which should be present on the system for the script to function.
The maximum amount of memory a component is allowed to allocate. Unit must be one of B, KB, MB, GB, TB or PB for SI units (1000-base), or KiB, MiB, GiB, TiB or PiB for binary IEC units (1024-base).
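For example, the three properties above could be combined as follows (all values are illustrative):

```yaml
requirements:
  cpus: 4                  # at most 4 logical cpus
  memory: 8GB              # SI unit; 8GiB would be the 1024-base equivalent
  commands: [ bash, awk ]  # commands that must be present on the system
```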
(Pre-)defines repositories that can be used as repository in dependencies. Allows reusing repository definitions in case it is used in multiple dependencies.
Allows listing Viash components required by this Viash component.
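A sketch of how a predefined repository can be reused by a dependency (the repository id, GitHub repository, and component name are hypothetical):

```yaml
repositories:
  - name: my_repo               # hypothetical repository id
    type: github
    repo: my-org/my-components  # hypothetical GitHub repository
    tag: v1.0.0
dependencies:
  - name: utils/my_helper       # hypothetical component in that repository
    repository: my_repo
```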
A one-sentence summary of the component. This is only used for documentation.
The functionality-part of the config file describes the behaviour of the script in terms of arguments and resources. By specifying a few restrictions (e.g. mandatory arguments) and adding some descriptions, Viash will automatically generate a stylish command-line interface for you.
20 nested properties
Name of the component and the filename of the executable when built with viash build.
The organization of the package.
A grouping of the arguments, used to display the help message.
- name: foo, the name of the argument group.
- description: Description of foo, a description of the argument group. Multiline descriptions are supported.
- arguments: [arg1, arg2, ...], a list of the arguments.
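Put together, a minimal argument group might look like this (names are illustrative):

```yaml
argument_groups:
  - name: Inputs
    description: Arguments related to the input data.
    arguments:
      - name: "--input"
        type: file
        required: true
```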
Structured information. Can be any shape: a string, vector, map or even nested map.
The license of the package.
A list of scholarly sources or publications relevant to the tools or analysis defined in the component. This is important for attribution, scientific reproducibility and transparency.
2 nested properties
A list of authors. An author must at least have a name, but can also have a list of roles, an e-mail address, and a map of custom properties.
Suggested values for roles are:
| Role | Abbrev. | Description |
|---|---|---|
| maintainer | mnt | for the maintainer of the code. Ideally, exactly one maintainer is specified. |
| author | aut | for persons who have made substantial contributions to the software. |
| contributor | ctb | for persons who have made smaller contributions (such as code patches). |
| datacontributor | dtc | for persons or organisations that contributed data sets for the software. |
| copyrightholder | cph | for all copyright holders. This is a legal concept so should use the legal name of an institution or corporate body. |
| funder | fnd | for persons or organizations that furnished financial support for the development of the software. |
The full list of roles is extremely comprehensive.
Allows setting a component to active, deprecated or disabled.
Computational requirements related to running the component.
3 nested properties
The maximum number of (logical) cpus a component is allowed to use.
A list of commands which should be present on the system for the script to function.
The maximum amount of memory a component is allowed to allocate. Unit must be one of B, KB, MB, GB, TB or PB for SI units (1000-base), or KiB, MiB, GiB, TiB or PiB for binary IEC units (1024-base).
(Pre-)defines repositories that can be used as repository in dependencies. Allows reusing repository definitions in case it is used in multiple dependencies.
One or more scripts to be used to test the component behaviour when viash test is invoked. Additional files of type file will be made available only during testing. Each test script should expect no command-line inputs, be platform-independent, and return an exit code >0 when unexpected behaviour occurs during testing. See Unit Testing for more info.
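For instance, a test script plus a directory of test data (paths are hypothetical):

```yaml
test_resources:
  - type: bash_script  # executed when `viash test` is invoked
    path: test.sh
  - type: file         # only made available during testing
    path: test_data
```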
Allows listing Viash components required by this Viash component.
A description of the component. This will be displayed with --help.
A description on how to use the component. This will be displayed with --help under the 'Usage:' section.
Version of the component. This field will be used to version the executable and the Docker container.
Links to external resources related to the component.
5 nested properties
Source repository url.
Documentation website url.
Docker registry url.
Homepage website url.
Issue tracker url.
Resources are files that support the component. The first resource should be a script that will be executed when the functionality is run. Additional resources will be copied to the same directory.
Common properties:
- type: file / r_script / python_script / bash_script / javascript_script / scala_script / csharp_script, specifies the type of the resource. The first resource cannot be of type file. When the type is not specified, the default type is simply file.
- dest: filename, the resulting name of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
- path: path/to/file, the path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
- text: ...multiline text..., the content of the resulting file specified as a string. Mutually exclusive with path.
- is_executable: true/false, whether the resulting resource file should be made executable.
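Combining these properties, a resources section could look like this (all filenames are illustrative):

```yaml
resources:
  - type: bash_script    # first resource: the script that is executed
    path: script.sh
  - path: helper.sh      # type defaults to `file`
    dest: lib/helper.sh  # accessible at meta["resources_dir"] + "/lib/helper.sh"
  - type: file
    text: |              # inline content instead of a path
      some static content
    dest: notes.txt
```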
The keywords of the components.
Namespace this component is a part of. See the Namespaces guide for more information on namespaces.
A list of arguments for this component. For each argument, a type and a name must be specified. Depending on the type of argument, different properties can be set. See these reference pages per type for more information:
- string
- file
- integer
- double
- boolean
- boolean_true
- boolean_false
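A short sketch covering a few of the argument types listed above (names and defaults are invented):

```yaml
arguments:
  - name: "--input"
    type: file
    required: true
    description: Path to the input file.
  - name: "--threshold"
    type: double
    default: 0.5
  - name: "--verbose"
    type: boolean_true  # flag; becomes true when present
```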
A list of runners to execute target artifacts.
- ExecutableRunner
- NextflowRunner
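Both runners can be listed side by side:

```yaml
runners:
  - type: executable
  - type: nextflow
```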
Meta information fields filled in by Viash during build.
10 nested properties
Path to the config used during build.
Git tag.
Git remote name.
The Viash version that was used to build the component.
Folder path to the build artifacts.
Git commit hash.
The engine id used during build.
The runner id used during build.
List of dependencies used during build.
Output folder with main executable path.
A grouping of the arguments, used to display the help message.
- name: foo, the name of the argument group.
- description: Description of foo, a description of the argument group. Multiline descriptions are supported.
- arguments: [arg1, arg2, ...], a list of the arguments.
A description of the component. This is only used for documentation. Multiline descriptions are supported.
A description on how to use the component. This will be displayed with --help under the 'Usage:' section.
Structured information. Can be any shape: a string, vector, map or even nested map.
A Viash package configuration file. Its name should be _viash.yaml.
17 nested properties
The organization of the package.
The name of the package.
Which source directory to use for the viash ns commands.
A description of the package. This is only used for documentation. Multiline descriptions are supported.
Structured information. Can be any shape: a string, vector, map or even nested map.
The license of the package.
A list of scholarly sources or publications relevant to the tools or analysis defined in the component. This is important for attribution, scientific reproducibility and transparency.
2 nested properties
The authors of the package.
Common repository definitions for component dependencies.
The keywords of the package.
Which target directory to use for viash ns build.
A one-sentence summary of the package. This is only used for documentation.
Which version of Viash to use.
A clean version of the package name. This is only used for documentation.
The version of the package.
Links to external resources related to the component.
5 nested properties
Source repository url.
Documentation website url.
Docker registry url.
Homepage website url.
Issue tracker url.
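A minimal _viash.yaml sketch using some of these fields (the package name and all values are illustrative):

```yaml
name: my_package  # hypothetical package name
version: 0.1.0
source: src       # source directory for the `viash ns` commands
target: target    # target directory for `viash ns build`
license: MIT
```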
A list of platforms to generate target artifacts for.
- Native
- Docker
- Nextflow
Version of the component. This field will be used to version the executable and the Docker container.
Links to external resources related to the component.
5 nested properties
Source repository url.
Documentation website url.
Docker registry url.
Homepage website url.
Issue tracker url.
A list of scholarly sources or publications relevant to the tools or analysis defined in the component. This is important for attribution, scientific reproducibility and transparency.
2 nested properties
A list of engine environments to execute target artifacts in.
- NativeEngine
- DockerEngine
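Both engine types can be defined in a single config; the docker engine's base image here is a hypothetical example:

```yaml
engines:
  - type: native
  - type: docker
    image: python:3.10-slim  # hypothetical base container
```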
Resources are files that support the component. The first resource should be a script that will be executed when the component is run. Additional resources will be copied to the same directory.
Common properties:
- type: file / r_script / python_script / bash_script / javascript_script / scala_script / csharp_script, specifies the type of the resource. The first resource cannot be of type file. When the type is not specified, the default type is simply file.
- dest: filename, the resulting name of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
- path: path/to/file, the path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
- text: ...multiline text..., the content of the resulting file specified as a string. Mutually exclusive with path.
- is_executable: true/false, whether the resulting resource file should be made executable.
The keywords of the components.
One or more scripts to be used to test the component behaviour when viash test is invoked. Additional files of type file will be made available only during testing. Each test script should expect no command-line inputs, be platform-independent, and return an exit code >0 when unexpected behaviour occurs during testing. See Unit Testing for more info.
Namespace this component is a part of. See the Namespaces guide for more information on namespaces.
A list of arguments for this component. For each argument, a type and a name must be specified. Depending on the type of argument, different properties can be set. See these reference pages per type for more information:
- string
- file
- integer
- double
- boolean
- boolean_true
- boolean_false
A Viash package configuration file. Its name should be _viash.yaml.
The organization of the package.
The name of the package.
Which source directory to use for the viash ns commands.
A description of the package. This is only used for documentation. Multiline descriptions are supported.
Structured information. Can be any shape: a string, vector, map or even nested map.
The license of the package.
A list of scholarly sources or publications relevant to the tools or analysis defined in the component. This is important for attribution, scientific reproducibility and transparency.
2 nested properties
The authors of the package.
Common repository definitions for component dependencies.
The keywords of the package.
Which target directory to use for viash ns build.
A one-sentence summary of the package. This is only used for documentation.
Which version of Viash to use.
A clean version of the package name. This is only used for documentation.
The version of the package.
Links to external resources related to the component.
5 nested properties
Source repository url.
Documentation website url.
Docker registry url.
Homepage website url.
Issue tracker url.
Meta information fields filled in by Viash during build.
Path to the config used during build.
Git tag.
Git remote name.
The Viash version that was used to build the component.
Folder path to the build artifacts.
Git commit hash.
The engine id used during build.
The runner id used during build.
List of dependencies used during build.
Output folder with main executable path.
The functionality-part of the config file describes the behaviour of the script in terms of arguments and resources. By specifying a few restrictions (e.g. mandatory arguments) and adding some descriptions, Viash will automatically generate a stylish command-line interface for you.
Name of the component and the filename of the executable when built with viash build.
The organization of the package.
A grouping of the arguments, used to display the help message.
- name: foo, the name of the argument group.
- description: Description of foo, a description of the argument group. Multiline descriptions are supported.
- arguments: [arg1, arg2, ...], a list of the arguments.
Structured information. Can be any shape: a string, vector, map or even nested map.
The license of the package.
A list of scholarly sources or publications relevant to the tools or analysis defined in the component. This is important for attribution, scientific reproducibility and transparency.
2 nested properties
A list of authors. An author must at least have a name, but can also have a list of roles, an e-mail address, and a map of custom properties.
Suggested values for roles are:
| Role | Abbrev. | Description |
|---|---|---|
| maintainer | mnt | for the maintainer of the code. Ideally, exactly one maintainer is specified. |
| author | aut | for persons who have made substantial contributions to the software. |
| contributor | ctb | for persons who have made smaller contributions (such as code patches). |
| datacontributor | dtc | for persons or organisations that contributed data sets for the software. |
| copyrightholder | cph | for all copyright holders. This is a legal concept so should use the legal name of an institution or corporate body. |
| funder | fnd | for persons or organizations that furnished financial support for the development of the software. |
The full list of roles is extremely comprehensive.
Allows setting a component to active, deprecated or disabled.
Computational requirements related to running the component.
3 nested properties
The maximum number of (logical) cpus a component is allowed to use.
A list of commands which should be present on the system for the script to function.
The maximum amount of memory a component is allowed to allocate. Unit must be one of B, KB, MB, GB, TB or PB for SI units (1000-base), or KiB, MiB, GiB, TiB or PiB for binary IEC units (1024-base).
(Pre-)defines repositories that can be used as repository in dependencies. Allows reusing repository definitions in case it is used in multiple dependencies.
One or more scripts to be used to test the component behaviour when viash test is invoked. Additional files of type file will be made available only during testing. Each test script should expect no command-line inputs, be platform-independent, and return an exit code >0 when unexpected behaviour occurs during testing. See Unit Testing for more info.
Allows listing Viash components required by this Viash component.
A description of the component. This will be displayed with --help.
A description on how to use the component. This will be displayed with --help under the 'Usage:' section.
Version of the component. This field will be used to version the executable and the Docker container.
Links to external resources related to the component.
5 nested properties
Source repository url.
Documentation website url.
Docker registry url.
Homepage website url.
Issue tracker url.
Resources are files that support the component. The first resource should be a script that will be executed when the functionality is run. Additional resources will be copied to the same directory.
Common properties:
- type: file / r_script / python_script / bash_script / javascript_script / scala_script / csharp_script, specifies the type of the resource. The first resource cannot be of type file. When the type is not specified, the default type is simply file.
- dest: filename, the resulting name of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
- path: path/to/file, the path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
- text: ...multiline text..., the content of the resulting file specified as a string. Mutually exclusive with path.
- is_executable: true/false, whether the resulting resource file should be made executable.
The keywords of the components.
Namespace this component is a part of. See the Namespaces guide for more information on namespaces.
A list of arguments for this component. For each argument, a type and a name must be specified. Depending on the type of argument, different properties can be set. See these reference pages per type for more information:
- string
- file
- integer
- double
- boolean
- boolean_true
- boolean_false
Author metadata.
Full name of the author, usually in the form FirstName MiddleName LastName.
E-mail of the author.
Structured information. Can be any shape: a string, vector, map or even nested map.
Computational requirements related to running the component.
The maximum number of (logical) cpus a component is allowed to use.
A list of commands which should be present on the system for the script to function.
The maximum amount of memory a component is allowed to allocate. Unit must be one of B, KB, MB, GB, TB or PB for SI units (1000-base), or KiB, MiB, GiB, TiB or PiB for binary IEC units (1024-base).
A grouping of the arguments, used to display the help message.
The name of the argument group.
A description of the argument group. This is only used for documentation. Multiline descriptions are supported.
A clean version of the argument group's name. This is only used for documentation.
A one-sentence summary of the argument group. This is only used for documentation.
A list of arguments for this component. For each argument, a type and a name must be specified. Depending on the type of argument, different properties can be set. See these reference pages per type for more information:
- string
- file
- integer
- double
- boolean
- boolean_true
- boolean_false
Links to external resources related to the component.
Source repository url.
Documentation website url.
Docker registry url.
Homepage website url.
Issue tracker url.
A list of scholarly sources or publications relevant to the tools or analysis defined in the component. This is important for attribution, scientific reproducibility and transparency.
Defines the scope of the component.
test: only available during testing; components aren't published.
private: only meant for internal use within a workflow or other component.
public: core component or workflow meant for general use.
Run code as an executable.
This runner is the default runner. It will generate a bash script that can be run directly.
This runner is also used for the native engine.
This runner is also used for the docker engine.
The Docker setup strategy to use when building a container.
The working directory when starting the engine. This doesn't change the Dockerfile but gets added as a command-line argument at runtime.
Name of the runner. As with all runners, you can give a runner a different name. By specifying id: foo, you can target this runner (only) by specifying ... in any of the Viash commands.
Run a Viash component on a Nextflow backend engine.
Automated processing flags which can be toggled on or off.
4 nested properties
If true, an input tuple containing only a single File (e.g. ["foo", file("in.h5ad")]) is automatically transformed to a map (i.e. ["foo", [ input: file("in.h5ad") ] ]).
Default: true.
If true, an output tuple containing a map with a File (e.g. ["foo", [ output: file("out.h5ad") ] ]) is automatically simplified to just the File (i.e. ["foo", file("out.h5ad")]).
Default: false.
If true, the module's transcripts from work/ are automatically published to params.transcriptDir.
If not defined, params.publishDir + "/_transcripts" will be used.
Will throw an error if neither are defined.
Default: false.
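These flags can be toggled in the runner definition, e.g.:

```yaml
runners:
  - type: nextflow
    auto:
      simplifyInput: true    # default
      simplifyOutput: false  # default
      transcript: false      # default
```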
Directives are optional settings that affect the execution of the process.
29 nested properties
The beforeScript directive allows you to execute a custom (Bash) snippet before the main process script is run. This may be useful to initialise the underlying cluster environment or for other custom initialisation.
See beforeScript.
The accelerator directive allows you to specify the hardware accelerator requirement for the task execution e.g. GPU processor.
Viash implements this directive as a map with accepted keywords: type, limit, request, and runtime.
See accelerator.
The time directive allows you to define how long a process is allowed to run.
See time.
The afterScript directive allows you to execute a custom (Bash) snippet immediately after the main process has run. This may be useful to clean up your staging area.
See afterScript.
The executor defines the underlying system where processes are executed. By default a process uses the executor defined globally in the nextflow.config file.
The executor directive allows you to configure what executor has to be used by the process, overriding the default configuration. The following values can be used:
| Name | Executor |
|---|---|
| awsbatch | The process is executed using the AWS Batch service. |
| azurebatch | The process is executed using the Azure Batch service. |
| condor | The process is executed using the HTCondor job scheduler. |
| google-lifesciences | The process is executed using the Google Genomics Pipelines service. |
| ignite | The process is executed using the Apache Ignite cluster. |
| k8s | The process is executed using the Kubernetes cluster. |
| local | The process is executed in the computer where Nextflow is launched. |
| lsf | The process is executed using the Platform LSF job scheduler. |
| moab | The process is executed using the Moab job scheduler. |
| nqsii | The process is executed using the NQSII job scheduler. |
| oge | Alias for the sge executor. |
| pbs | The process is executed using the PBS/Torque job scheduler. |
| pbspro | The process is executed using the PBS Pro job scheduler. |
| sge | The process is executed using the Sun Grid Engine / Open Grid Engine. |
| slurm | The process is executed using the SLURM job scheduler. |
| tes | The process is executed using the GA4GH TES service. |
| uge | Alias for the sge executor. |
See executor.
The disk directive allows you to define how much local disk storage the process is allowed to use.
See disk.
The tag directive allows you to associate each process execution with a custom label, so that it will be easier to identify them in the log file or in the trace execution report.
For ease of use, the default tag is set to "$id", which allows tracking the progression of the channel events through the workflow more easily.
See tag.
The machineType can be used to specify a predefined Google Compute Platform machine type when running using the Google Life Sciences executor.
See machineType.
The stageInMode directive defines how input files are staged-in to the process work directory. The following values are allowed:
| Value | Description |
|---|---|
| copy | Input files are staged in the process work directory by creating a copy. |
| link | Input files are staged in the process work directory by creating an (hard) link for each of them. |
| symlink | Input files are staged in the process work directory by creating a symbolic link with an absolute path for each of them (default). |
| rellink | Input files are staged in the process work directory by creating a symbolic link with a relative path for each of them. |
See stageInMode.
The penv directive allows you to define the parallel environment to be used when submitting a parallel task to the SGE resource manager.
See penv.
The storeDir directive allows you to define a directory that is used as a permanent cache for your process results.
See storeDir.
The errorStrategy directive allows you to define how an error condition is managed by the process. By default when an error status is returned by the executed script, the process stops immediately. This in turn forces the entire pipeline to terminate.
Table of available error strategies:
| Name | Description |
|---|---|
| terminate | Terminates the execution as soon as an error condition is reported. Pending jobs are killed (default). |
| finish | Initiates an orderly pipeline shutdown when an error condition is raised, waiting for the completion of any submitted jobs. |
| ignore | Ignores process execution errors. |
| retry | Re-submits for execution a process returning an error condition. |
See errorStrategy.
The memory directive allows you to define how much memory the process is allowed to use.
See memory.
The stageOutMode directive defines how output files are staged-out from the scratch directory to the process work directory. The following values are allowed:
| Value | Description |
|---|---|
| copy | Output files are copied from the scratch directory to the work directory. |
| move | Output files are moved from the scratch directory to the work directory. |
| rsync | Output files are copied from the scratch directory to the work directory by using the rsync utility. |
See stageOutMode.
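A few of these directives combined in a runner definition (all values are illustrative):

```yaml
runners:
  - type: nextflow
    directives:
      tag: "$id"       # the default tag
      memory: 4 GB
      time: 2h
      errorStrategy: retry
```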
Specifies the Docker engine id to be used to run Nextflow.
Allows tweaking how the Nextflow Config file is generated.
2 nested properties
A series of default labels to specify memory and cpu constraints.
The default memory labels are defined as "mem1gb", "mem2gb", "mem4gb", ... up to "mem512tb" and follow powers of 2. The default cpu labels are defined as "cpu1", "cpu2", "cpu5", "cpu10", ... up to "cpu1000" and follow a semi-logarithmic scale (1, 2, 5 per decade).
Conceptually, a Viash config can overwrite the full labels parameter, though it is likely more efficient to add additional labels in the Viash package with a config mod.
Whether or not to print debug messages.
Name of the runner. As with all runners, you can give a runner a different name. By specifying id: foo, you can target this runner (only) by specifying ... in any of the Viash commands.
Running a Viash component on a native engine means that the script will be executed in your current environment. Any dependencies are assumed to have been installed by the user, so the native engine is meant for developers (who know what they're doing) or for simple bash scripts (which have no extra dependencies).
Name of the engine. As with all engines, you can give an engine a different name. By specifying id: foo, you can target this engine (only) by specifying ... in any of the Viash commands.
Run a Viash component on a Docker backend engine. By specifying which dependencies your component needs, users will be able to build a docker container from scratch using the setup flag, or pull it from a docker repository.
The base container to start from. You can also add the tag here if you wish.
Name of a start container's organization.
The URL to a custom Docker registry where the start container is located.
Specify a Docker image based on its tag.
If anything is specified in the setup section, running ---setup will result in an image with the name <target_image>:<version>. If nothing is specified in the setup section, simply image will be used. Advanced usage only.
The tag the resulting image gets. Advanced usage only.
The separator between the namespace and the name of the component, used for determining the image name. Default: "/".
The package name set in the resulting image. Advanced usage only.
Name of the engine. As with all engines, you can give an engine a different name. By specifying id: foo, you can target this engine (only) by specifying ... in any of the Viash commands.
The URL where the resulting image will be pushed to. Advanced usage only.
The organization set in the resulting image. Advanced usage only.
A list of requirements for installing the following types of packages:
- apt
- apk
- Docker setup instructions
- JavaScript
- Python
- R
- Ruby
- yum
The order in which these dependencies are specified determines the order in which they will be installed.
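For example, mixing apt and Python requirements (the base image and package names are illustrative):

```yaml
engines:
  - type: docker
    image: python:3.10-slim  # hypothetical base image
    setup:
      - type: apt
        packages: [ procps ]
      - type: python
        packages: [ numpy ]
```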
The source of the target image. This is used for defining labels in the dockerfile.
Additional requirements specific for running unit tests.
Running a Viash component on a native platform means that the script will be executed in your current environment. Any dependencies are assumed to have been installed by the user, so the native platform is meant for developers (who know what they're doing) or for simple bash scripts (which have no extra dependencies).
As with all platforms, you can give a platform a different name. By specifying id: foo, you can target this platform (only) by specifying -p foo in any of the Viash commands.
Run a Viash component on a Docker backend platform. By specifying which dependencies your component needs, users will be able to build a docker container from scratch using the setup flag, or pull it from a docker repository.
The base container to start from. You can also add the tag here if you wish.
Name of a container's organization.
The URL to a custom Docker registry.
Specify a Docker image based on its tag.
The tag the resulting image gets. Advanced usage only.
The separator between the namespace and the name of the component, used for determining the image name. Default: "/".
Enables or disables automatic volume mapping. Enabled when set to Automatic or disabled when set to Manual. Default: Automatic
As with all platforms, you can give a platform a different name. By specifying id: foo, you can target this platform (only) by specifying -p foo in any of the Viash commands.
The URL where the resulting image will be pushed to. Advanced usage only.
A list of requirements for installing the following types of packages:
- apt
- apk
- Docker setup instructions
- JavaScript
- Python
- R
- Ruby
- yum
The order in which these dependencies are specified determines the order in which they will be installed.
The working directory when starting the container. This doesn't change the Dockerfile but gets added as a command-line argument at runtime.
If anything is specified in the setup section, running ---setup will result in an image with the name <target_image>:<version>. If nothing is specified in the setup section, the image specified in image will be used directly. Advanced usage only.
The source of the target image. This is used for defining labels in the dockerfile.
Additional requirements specific for running unit tests.
The Docker setup strategy to use when building a container.
The organization set in the resulting image. Advanced usage only.
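A minimal sketch of a Docker platform entry combining several of the properties above (image, registry and organization values are illustrative):

```yaml
platforms:
  - type: docker
    id: docker
    image: "python:3.10-slim"
    target_registry: my-registry.example.com
    target_organization: my-org
    setup:
      - type: python
        packages: [ pandas ]
```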
Platform for generating Nextflow VDSL3 modules.
Automated processing flags which can be toggled on or off.
4 nested properties
If true, an input tuple containing only a single File (e.g. ["foo", file("in.h5ad")]) is automatically transformed to a map (i.e. ["foo", [ input: file("in.h5ad") ] ]).
Default: true.
If true, an output tuple containing a map with a single File (e.g. ["foo", [ output: file("out.h5ad") ] ]) is automatically simplified to a tuple (i.e. ["foo", file("out.h5ad")]).
Default: false.
If true, the module's transcripts from work/ are automatically published to params.transcriptDir.
If not defined, params.publishDir + "/_transcripts" will be used.
An error is thrown if neither is defined.
Default: false.
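As a sketch, these automated processing flags might be toggled like this in a component config (assuming the nested keys are named simplifyInput, simplifyOutput and transcript):

```yaml
platforms:
  - type: nextflow
    auto:
      simplifyInput: true   # wrap a lone File into a map
      simplifyOutput: false # keep output maps as-is
      transcript: false     # do not publish work/ transcripts
```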
Directives are optional settings that affect the execution of the process.
29 nested properties
The beforeScript directive allows you to execute a custom (Bash) snippet before the main process script is run. This may be useful to initialise the underlying cluster environment or for other custom initialisation.
See beforeScript.
The accelerator directive allows you to specify the hardware accelerator requirement for the task execution e.g. GPU processor.
Viash implements this directive as a map with accepted keywords: type, limit, request, and runtime.
See accelerator.
The time directive allows you to define how long a process is allowed to run.
See time.
The afterScript directive allows you to execute a custom (Bash) snippet immediately after the main process has run. This may be useful to clean up your staging area.
See afterScript.
The executor defines the underlying system where processes are executed. By default a process uses the executor defined globally in the nextflow.config file.
The executor directive allows you to configure what executor has to be used by the process, overriding the default configuration. The following values can be used:
| Name | Executor |
|---|---|
| awsbatch | The process is executed using the AWS Batch service. |
| azurebatch | The process is executed using the Azure Batch service. |
| condor | The process is executed using the HTCondor job scheduler. |
| google-lifesciences | The process is executed using the Google Genomics Pipelines service. |
| ignite | The process is executed using the Apache Ignite cluster. |
| k8s | The process is executed using the Kubernetes cluster. |
| local | The process is executed in the computer where Nextflow is launched. |
| lsf | The process is executed using the Platform LSF job scheduler. |
| moab | The process is executed using the Moab job scheduler. |
| nqsii | The process is executed using the NQSII job scheduler. |
| oge | Alias for the sge executor. |
| pbs | The process is executed using the PBS/Torque job scheduler. |
| pbspro | The process is executed using the PBS Pro job scheduler. |
| sge | The process is executed using the Sun Grid Engine / Open Grid Engine. |
| slurm | The process is executed using the SLURM job scheduler. |
| tes | The process is executed using the GA4GH TES service. |
| uge | Alias for the sge executor. |
See executor.
The disk directive allows you to define how much local disk storage the process is allowed to use.
See disk.
The tag directive allows you to associate each process execution with a custom label, so that it will be easier to identify them in the log file or in the trace execution report.
For ease of use, the default tag is set to "$id", which allows tracking the progression of the channel events through the workflow more easily.
See tag.
The machineType can be used to specify a predefined Google Compute Platform machine type when running using the Google Life Sciences executor.
See machineType.
The stageInMode directive defines how input files are staged-in to the process work directory. The following values are allowed:
| Value | Description |
|---|---|
| copy | Input files are staged in the process work directory by creating a copy. |
| link | Input files are staged in the process work directory by creating a (hard) link for each of them. |
| symlink | Input files are staged in the process work directory by creating a symbolic link with an absolute path for each of them (default). |
| rellink | Input files are staged in the process work directory by creating a symbolic link with a relative path for each of them. |
See stageInMode.
The penv directive allows you to define the parallel environment to be used when submitting a parallel task to the SGE resource manager.
See penv.
The storeDir directive allows you to define a directory that is used as a permanent cache for your process results.
See storeDir.
The errorStrategy directive allows you to define how an error condition is managed by the process. By default when an error status is returned by the executed script, the process stops immediately. This in turn forces the entire pipeline to terminate.
Table of available error strategies:
| Name | Description |
|---|---|
| terminate | Terminates the execution as soon as an error condition is reported. Pending jobs are killed (default). |
| finish | Initiates an orderly pipeline shutdown when an error condition is raised, waiting for the completion of any submitted job. |
| ignore | Ignores process execution errors. |
| retry | Re-submits a process returning an error condition for execution. |
See errorStrategy.
The memory directive allows you to define how much memory the process is allowed to use.
See memory.
The stageOutMode directive defines how output files are staged-out from the scratch directory to the process work directory. The following values are allowed:
| Value | Description |
|---|---|
| copy | Output files are copied from the scratch directory to the work directory. |
| move | Output files are moved from the scratch directory to the work directory. |
| rsync | Output files are copied from the scratch directory to the work directory by using the rsync utility. |
See stageOutMode.
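A sketch of a few of these directives set from a Viash config (values illustrative; the value syntax follows Nextflow conventions):

```yaml
platforms:
  - type: nextflow
    directives:
      tag: "$id"            # default tag, tracks channel events
      memory: "4 GB"
      errorStrategy: retry
      stageInMode: symlink
```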
Specifies the Docker platform id to be used to run Nextflow.
Allows tweaking how the Nextflow Config file is generated.
2 nested properties
A series of default labels to specify memory and cpu constraints.
The default memory labels are defined as "mem1gb", "mem2gb", "mem4gb", ... up to "mem512tb" and follow powers of 2. The default cpu labels are defined as "cpu1", "cpu2", "cpu5", "cpu10", ... up to "cpu1000" and follow a semi-logarithmic scale (1, 2, 5 per decade).
Conceptually it is possible for a Viash config to overwrite the full labels parameter, but it is likely more efficient to add additional labels in the Viash package with a config mod.
Whether or not to print debug messages.
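To add extra labels on top of the defaults, a sketch along these lines could be used (assuming the labels map goes from label name to a directive string):

```yaml
platforms:
  - type: nextflow
    config:
      labels:
        lowmem: "memory = 4.GB"
        lowcpu: "cpus = 4"
```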
Every platform can be given a specific id that can later be referred to explicitly when running or building the Viash component.
Specify which apk packages should be available in order to run the component.
Specify which apt packages should be available in order to run the component.
If false, the Debian frontend is set to non-interactive (recommended). Default: false.
Specify which Docker commands should be run during setup.
Specify which JavaScript packages should be available in order to run the component.
Specify which Python packages should be available in order to run the component.
Sets the --upgrade flag when set to true. Default: true.
Sets the --user flag when set to true. Default: false.
Specify which R packages should be available in order to run the component.
Forces packages specified in bioc to be reinstalled, even if they are already present in the container. Default: false.
Specifies whether to treat warnings as errors. Default: true.
Specify which Ruby packages should be available in order to run the component.
Specify which yum packages should be available in order to run the component.
A boolean type argument has two possible values: true or false.
The name of the argument. Can be in the formats --trim, -t or trim. The number of dashes determines how values can be passed:
- --trim is a long option, which can be passed with executable_name --trim
- -t is a short option, which can be passed with executable_name -t
- trim is an argument, which can be passed with executable_name trim
A clean version of the argument's name. This is only used for documentation.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Structured information. Can be any shape: a string, vector, map or even nested map.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
The delimiter character for providing multiple values. : by default.
Treat the argument value as an array. Arrays can be passed using the delimiter --foo=1:2:3 or by providing the same argument multiple times --foo 1 --foo 2. You can use a custom delimiter by using the multiple_sep property. false by default.
Make the value for this argument required. If set to true, an error will be produced if no value was provided. false by default.
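A sketch of a boolean argument in a component config (name and description are illustrative):

```yaml
arguments:
  - name: "--trim"
    type: boolean
    default: true
    description: Whether to trim the input.
```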
An argument of the boolean_true type acts like a boolean flag with a default value of false. When called as an argument it sets the boolean to true.
The name of the argument. Can be in the formats --silent, -s or silent. The number of dashes determines how values can be passed:
- --silent is a long option, which can be passed with executable_name --silent
- -s is a short option, which can be passed with executable_name -s
- silent is an argument, which can be passed with executable_name silent
A clean version of the argument's name. This is only used for documentation.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Structured information. Can be any shape: a string, vector, map or even nested map.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
An argument of the boolean_false type acts like an inverted boolean flag with a default value of true. When called as an argument it sets the boolean to false.
The name of the argument. Can be in the formats --no-log, -n or no-log. The number of dashes determines how values can be passed:
- --no-log is a long option, which can be passed with executable_name --no-log
- -n is a short option, which can be passed with executable_name -n
- no-log is an argument, which can be passed with executable_name no-log
A clean version of the argument's name. This is only used for documentation.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Structured information. Can be any shape: a string, vector, map or even nested map.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
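A sketch of the two flag types side by side (names illustrative):

```yaml
arguments:
  - name: "--silent"
    type: boolean_true   # defaults to false; passing --silent sets it to true
  - name: "--no-log"
    type: boolean_false  # defaults to true; passing --no-log sets it to false
```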
A double type argument has a numeric value with decimal points.
The name of the argument. Can be in the formats --foo, -f or foo. The number of dashes determines how values can be passed:
- --foo is a long option, which can be passed with executable_name --foo=value or executable_name --foo value
- -f is a short option, which can be passed with executable_name -f value
- foo is an argument, which can be passed with executable_name value
A clean version of the argument's name. This is only used for documentation.
Structured information. Can be any shape: a string, vector, map or even nested map.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
The delimiter character for providing multiple values. : by default.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Treat the argument value as an array. Arrays can be passed using the delimiter --foo=1:2:3 or by providing the same argument multiple times --foo 1 --foo 2. You can use a custom delimiter by using the multiple_sep property. false by default.
Make the value for this argument required. If set to true, an error will be produced if no value was provided. false by default.
A file type argument has a string value that points to a file or folder path.
The name of the argument. Can be in the formats --foo, -f or foo. The number of dashes determines how values can be passed:
- --foo is a long option, which can be passed with executable_name --foo=value or executable_name --foo value
- -f is a short option, which can be passed with executable_name -f value
- foo is an argument, which can be passed with executable_name value
If the output filename is a path and it does not exist, create it before executing the script (only for direction: output).
A clean version of the argument's name. This is only used for documentation.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Structured information. Can be any shape: a string, vector, map or even nested map.
Checks whether the file or folder exists. For input files, this check will happen before the execution of the script, while for output files the check will happen afterwards.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
The delimiter character for providing multiple values. : by default.
Allow for multiple values (false by default).
For input arguments, this will be treated as a list of values. For example, values
can be passed using the delimiter --foo=1:2:3 or by providing the same argument
multiple times --foo 1 --foo 2. You can use a custom delimiter by using the
multiple_sep property.
For output file arguments, the passed value needs to contain a wildcard. For example,
--foo 'foo_*.txt' will be treated as a list of files that match the pattern. Note that in Bash,
the wildcard will need to be in quotes ("foo_*.txt" or 'foo_*.txt') or else Bash will
automatically attempt to expand the expression.
Other output arguments (e.g. integer, double, ...) are not supported yet.
Make the value for this argument required. If set to true, an error will be produced if no value was provided. false by default.
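A sketch of input and output file arguments combining several of the properties above (names illustrative):

```yaml
arguments:
  - name: "--input"
    type: file
    must_exist: true     # checked before the script runs
    required: true
  - name: "--output"
    type: file
    direction: output
    create_parent: true  # create the parent directory if needed
```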
An integer type argument has a numeric value without decimal points.
The name of the argument. Can be in the formats --foo, -f or foo. The number of dashes determines how values can be passed:
- --foo is a long option, which can be passed with executable_name --foo=value or executable_name --foo value
- -f is a short option, which can be passed with executable_name -f value
- foo is an argument, which can be passed with executable_name value
Limit the amount of valid values for this argument to those set in this list. When set and a value not present in the list is provided, an error will be produced.
A clean version of the argument's name. This is only used for documentation.
Structured information. Can be any shape: a string, vector, map or even nested map.
Maximum allowed value for this argument. If set and the provided value is higher than the maximum, an error will be produced. Can be combined with min to clamp values.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
The delimiter character for providing multiple values. : by default.
Minimum allowed value for this argument. If set and the provided value is lower than the minimum, an error will be produced. Can be combined with max to clamp values.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Treat the argument value as an array. Arrays can be passed using the delimiter --foo=1:2:3 or by providing the same argument multiple times --foo 1 --foo 2. You can use a custom delimiter by using the multiple_sep property. false by default.
Make the value for this argument required. If set to true, an error will be produced if no value was provided. false by default.
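A sketch of an integer argument clamped with min and max (name and bounds illustrative):

```yaml
arguments:
  - name: "--threads"
    type: integer
    min: 1
    max: 64
    default: 4
```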
A long type argument has a numeric value without decimal points.
The name of the argument. Can be in the formats --foo, -f or foo. The number of dashes determines how values can be passed:
- --foo is a long option, which can be passed with executable_name --foo=value or executable_name --foo value
- -f is a short option, which can be passed with executable_name -f value
- foo is an argument, which can be passed with executable_name value
Limit the amount of valid values for this argument to those set in this list. When set and a value not present in the list is provided, an error will be produced.
A clean version of the argument's name. This is only used for documentation.
Structured information. Can be any shape: a string, vector, map or even nested map.
Maximum allowed value for this argument. If set and the provided value is higher than the maximum, an error will be produced. Can be combined with min to clamp values.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
The delimiter character for providing multiple values. : by default.
Minimum allowed value for this argument. If set and the provided value is lower than the minimum, an error will be produced. Can be combined with max to clamp values.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Treat the argument value as an array. Arrays can be passed using the delimiter --foo=1:2:3 or by providing the same argument multiple times --foo 1 --foo 2. You can use a custom delimiter by using the multiple_sep property. false by default.
Make the value for this argument required. If set to true, an error will be produced if no value was provided. false by default.
A string type argument has a value made up of an ordered sequence of characters, like "Hello" or "I'm a string".
The name of the argument. Can be in the formats --foo, -f or foo. The number of dashes determines how values can be passed:
- --foo is a long option, which can be passed with executable_name --foo=value or executable_name --foo value
- -f is a short option, which can be passed with executable_name -f value
- foo is an argument, which can be passed with executable_name value
Limit the amount of valid values for this argument to those set in this list. When set and a value not present in the list is provided, an error will be produced.
A clean version of the argument's name. This is only used for documentation.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Structured information. Can be any shape: a string, vector, map or even nested map.
A one-sentence summary of the argument. This is only used for documentation.
A description of the argument. This is only used for documentation. Multiline descriptions are supported.
The delimiter character for providing multiple values. : by default.
Treat the argument value as an array. Arrays can be passed using the delimiter --foo=1:2:3 or by providing the same argument multiple times --foo 1 --foo 2. You can use a custom delimiter by using the multiple_sep property. false by default.
Make the value for this argument required. If set to true, an error will be produced if no value was provided. false by default.
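A sketch of a string argument restricted to a fixed set of choices (names and values illustrative):

```yaml
arguments:
  - name: "--mode"
    type: string
    choices: [ fast, accurate ]
    default: fast
```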
An executable Bash script.
When defined in resources, only the first entry will be executed when running the built component or when running viash run.
When defined in test_resources, all entries will be executed during viash test.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
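A sketch combining a main script with a supporting file resource (paths hypothetical):

```yaml
resources:
  - type: bash_script
    path: script.sh          # first entry; executed on viash run
  - type: file
    path: data/reference.txt
    dest: reference.txt      # accessible at meta["resources_dir"] + "/reference.txt"
```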
An executable C# script.
When defined in resources, only the first entry will be executed when running the built component or when running viash run.
When defined in test_resources, all entries will be executed during viash test.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
An executable file.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
An executable JavaScript script.
When defined in resources, only the first entry will be executed when running the built component or when running viash run.
When defined in test_resources, all entries will be executed during viash test.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
A Nextflow script. Work in progress; added mainly for annotation at the moment.
The name of the workflow to be wrapped.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
A plain file. This can only be used as a supporting resource for the main script or unit tests.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
An executable Python script.
When defined in resources, only the first entry will be executed when running the built component or when running viash run.
When defined in test_resources, all entries will be executed during viash test.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
An executable R script.
When defined in resources, only the first entry will be executed when running the built component or when running viash run.
When defined in test_resources, all entries will be executed during viash test.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
An executable Scala script.
When defined in resources, only the first entry will be executed when running the built component or when running viash run.
When defined in test_resources, all entries will be executed during viash test.
The path of the input file. Can be a relative or an absolute path, or a URI. Mutually exclusive with text.
The content of the resulting file specified as a string. Mutually exclusive with path.
Whether the resulting resource file should be made executable.
Resulting filename of the resource. From within a script, the file can be accessed at meta["resources_dir"] + "/" + dest. If unspecified, dest will be set to the basename of the path parameter.
Directives are optional settings that affect the execution of the process.
The beforeScript directive allows you to execute a custom (Bash) snippet before the main process script is run. This may be useful to initialise the underlying cluster environment or for other custom initialisation.
See beforeScript.
The accelerator directive allows you to specify the hardware accelerator requirement for the task execution e.g. GPU processor.
Viash implements this directive as a map with accepted keywords: type, limit, request, and runtime.
See accelerator.
The time directive allows you to define how long a process is allowed to run.
See time.
The afterScript directive allows you to execute a custom (Bash) snippet immediately after the main process has run. This may be useful to clean up your staging area.
See afterScript.
The executor defines the underlying system where processes are executed. By default a process uses the executor defined globally in the nextflow.config file.
The executor directive allows you to configure what executor has to be used by the process, overriding the default configuration. The following values can be used:
| Name | Executor |
|---|---|
| awsbatch | The process is executed using the AWS Batch service. |
| azurebatch | The process is executed using the Azure Batch service. |
| condor | The process is executed using the HTCondor job scheduler. |
| google-lifesciences | The process is executed using the Google Genomics Pipelines service. |
| ignite | The process is executed using the Apache Ignite cluster. |
| k8s | The process is executed using the Kubernetes cluster. |
| local | The process is executed in the computer where Nextflow is launched. |
| lsf | The process is executed using the Platform LSF job scheduler. |
| moab | The process is executed using the Moab job scheduler. |
| nqsii | The process is executed using the NQSII job scheduler. |
| oge | Alias for the sge executor. |
| pbs | The process is executed using the PBS/Torque job scheduler. |
| pbspro | The process is executed using the PBS Pro job scheduler. |
| sge | The process is executed using the Sun Grid Engine / Open Grid Engine. |
| slurm | The process is executed using the SLURM job scheduler. |
| tes | The process is executed using the GA4GH TES service. |
| uge | Alias for the sge executor. |
See executor.
The disk directive allows you to define how much local disk storage the process is allowed to use.
See disk.
The tag directive allows you to associate each process execution with a custom label, so that it will be easier to identify them in the log file or in the trace execution report.
For ease of use, the default tag is set to "$id", which allows tracking the progression of the channel events through the workflow more easily.
See tag.
The machineType directive can be used to specify a predefined Google Compute Platform machine type when running using the Google Life Sciences executor.
See machineType.
The stageInMode directive defines how input files are staged-in to the process work directory. The following values are allowed:
| Value | Description |
|---|---|
| copy | Input files are staged in the process work directory by creating a copy. |
| link | Input files are staged in the process work directory by creating a (hard) link for each of them. |
| symlink | Input files are staged in the process work directory by creating a symbolic link with an absolute path for each of them (default). |
| rellink | Input files are staged in the process work directory by creating a symbolic link with a relative path for each of them. |
See stageInMode.
The penv directive allows you to define the parallel environment to be used when submitting a parallel task to the SGE resource manager.
See penv.
The storeDir directive allows you to define a directory that is used as a permanent cache for your process results.
See storeDir.
The errorStrategy directive allows you to define how an error condition is managed by the process. By default when an error status is returned by the executed script, the process stops immediately. This in turn forces the entire pipeline to terminate.
Table of available error strategies:
| Name | Description |
|---|---|
| terminate | Terminates the execution as soon as an error condition is reported. Pending jobs are killed (default). |
| finish | Initiates an orderly pipeline shutdown when an error condition is raised, waiting for the completion of any submitted job. |
| ignore | Ignores process execution errors. |
| retry | Re-submits a process returning an error condition for execution. |
See errorStrategy.
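For example, a component can be made resilient to transient failures by combining errorStrategy with the related Nextflow maxRetries directive. This is a sketch based on the Viash 0.9 schema; the retry count is arbitrary:

```yaml
runners:
  - type: nextflow
    directives:
      # Re-submit failed tasks instead of terminating the pipeline
      errorStrategy: retry
      # Give up after three attempts (assumes maxRetries is exposed as a directive)
      maxRetries: 3
```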
The memory directive allows you to define how much memory the process is allowed to use.
See memory.
The stageOutMode directive defines how output files are staged-out from the scratch directory to the process work directory. The following values are allowed:
| Value | Description |
|---|---|
| copy | Output files are copied from the scratch directory to the work directory. |
| move | Output files are moved from the scratch directory to the work directory. |
| rsync | Output files are copied from the scratch directory to the work directory by using the rsync utility. |
See stageOutMode.
Automated processing flags which can be toggled on or off.
If true, an input tuple containing only a single File (e.g. ["foo", file("in.h5ad")]) is automatically transformed to a map (i.e. ["foo", [ input: file("in.h5ad") ] ]).
Default: true.
If true, an output tuple containing a map with a File (e.g. ["foo", [ output: file("out.h5ad") ] ]) is automatically transformed back to a simple tuple (i.e. ["foo", file("out.h5ad")]).
Default: false.
If true, the module's transcripts from work/ are automatically published to params.transcriptDir.
If not defined, params.publishDir + "/_transcripts" will be used.
An error is thrown if neither is defined.
Default: false.
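The automation flags above can be toggled under the Nextflow runner's auto setting. The property names below (simplifyInput, simplifyOutput, transcript) are taken from the Viash schema and shown here with their documented defaults:

```yaml
runners:
  - type: nextflow
    auto:
      # Wrap a single-File input tuple into a map (default: true)
      simplifyInput: true
      # Unwrap a single-File output map into a tuple (default: false)
      simplifyOutput: false
      # Publish work/ transcripts to params.transcriptDir (default: false)
      transcript: false
```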
Allows tweaking how the Nextflow Config file is generated.
A series of default labels to specify memory and cpu constraints.
The default memory labels are defined as "mem1gb", "mem2gb", "mem4gb", ... up to "mem512tb" and follow powers of 2. The default cpu labels are defined as "cpu1", "cpu2", "cpu5", "cpu10", ... up to "cpu1000" and follow a semi-logarithmic scale (1, 2, 5 per decade).
Conceptually it is possible for a Viash config to overwrite the full labels parameter; however, it is likely more efficient to add additional labels in the Viash package with a config mod.
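Custom labels can be added alongside the defaults via the Nextflow runner's config setting. In this sketch (label name and directive string are illustrative, not part of the default set), each label maps to a Nextflow directive assignment:

```yaml
runners:
  - type: nextflow
    config:
      labels:
        # Hypothetical extra label mapping to a memory directive
        highmem: "memory = 100.GB"
```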
Specifies a Viash component (script or executable) that should be made available for the code defined in the component. The dependency components are collected and copied to the output folder during the Viash build step.
The full name of the dependency component. This should include the namespace.
An alternative name for the dependency component. This can include a namespace if so needed.
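Taken together, a dependency entry pairs a fully qualified name with an optional alias and a repository reference. The component and repository names below are hypothetical placeholders:

```yaml
dependencies:
  - name: utils/checksum      # namespace/name of the dependency component
    alias: md5sum             # optional alternative name used in the code
    repository: my_repo       # refers to a repository defined under `repositories`
```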
Defines a locally present and available repository. This can be used to define components from the same code base as the current component. Alternatively, this can be used to refer to a code repository present on the local hard-drive instead of fetchable remotely, for example during development.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
A Git repository where remote dependency components can be found.
The URI of the Git repository.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
A GitHub repository where remote dependency components can be found.
The name of the GitHub repository.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
A Viash-Hub repository where remote dependency components can be found.
The name of the Viash-Hub repository.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
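A repository can also be defined inline on the dependency itself rather than referenced by name. The sketch below shows a GitHub repository with the subfolder and version fields described above (component name, repo, tag and path are illustrative; the field names follow the Viash schema):

```yaml
dependencies:
  - name: demo/my_component
    repository:
      type: github
      repo: viash-io/viash    # GitHub repository name
      tag: v0.9.0             # tag, branch or commit hash
      path: src               # subfolder to look for components in
```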
Defines a locally present and available repository. This can be used to define components from the same code base as the current component. Alternatively, this can be used to refer to a code repository present on the local hard-drive instead of fetchable remotely, for example during development.
The identifier used to refer to this repository from dependencies.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
A Git repository where remote dependency components can be found.
The identifier used to refer to this repository from dependencies.
The URI of the Git repository.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
A GitHub repository where remote dependency components can be found.
The identifier used to refer to this repository from dependencies.
The name of the GitHub repository.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
A Viash-Hub repository where remote dependency components can be found.
The identifier used to refer to this repository from dependencies.
The name of the Viash-Hub repository.
Defines a subfolder of the repository to use as base to look for the dependency components.
Defines which version of the dependency component to use. Typically this can be a specific tag, branch or commit hash.
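Named repositories are declared once under repositories and then referenced by their identifier from one or more dependencies, avoiding repetition. The repository and component names below are placeholders for illustration:

```yaml
repositories:
  - name: my_repo             # identifier referenced from dependencies
    type: github
    repo: some-org/some-repo  # hypothetical GitHub repository
    tag: 1.0.0
dependencies:
  - name: utils/my_component
    repository: my_repo       # resolves to the entry above
```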
The Docker setup strategy to use when building a container.
Makes this argument an input or an output, i.e. whether the file/folder needs to be read or written. input by default.
Allows setting a component to active, deprecated or disabled.
Enables or disables automatic volume mapping. Enabled when set to Automatic or disabled when set to Manual. Default: Automatic
The scope of the component. public by default.