diff --git a/docs/aws.rst b/docs/aws.rst
index b73266a998..510f7bbccc 100644
--- a/docs/aws.rst
+++ b/docs/aws.rst
@@ -405,7 +405,7 @@ of a specific job e.g. to define custom mount paths or other Batch Job special s
 To do that first create a *Job Definition* in the AWS Console (or with other means). Note the name of the
 *Job Definition* you created. You can then associate a process execution with this *Job definition* by using
 the :ref:`process-container`
-directive and specifing, in place of the container image name, the Job definition name prefixed by the
+directive and specifying, in place of the container image name, the Job definition name prefixed by the
 ``job-definition://`` string, as shown below::

     process.container = 'job-definition://your-job-definition-name'
diff --git a/docs/cli.rst b/docs/cli.rst
index 185a20da18..7e63517dea 100644
--- a/docs/cli.rst
+++ b/docs/cli.rst
@@ -934,7 +934,7 @@ execution metadata.
 +---------------------------+------------+--------------------------------------------------------------------------------+
 | -fields, -f               |            | Comma separated list of fields to include in the printed log.                   |
 +---------------------------+------------+--------------------------------------------------------------------------------+
-| -filter, -F               |            | Filter log entires by a custom expression                                       |
+| -filter, -F               |            | Filter log entries by a custom expression                                       |
 |                           |            | e.g. ``process =~ /foo.*/ && status == 'COMPLETED'``                            |
 +---------------------------+------------+--------------------------------------------------------------------------------+
 | -help, -h                 | false      | Print the command usage.                                                        |
diff --git a/docs/config.rst b/docs/config.rst
index b594b6d770..3be4e6b518 100644
--- a/docs/config.rst
+++ b/docs/config.rst
@@ -167,7 +167,7 @@ Name                    Description
 cliPath                 The path where the AWS command line tool is installed in the host AMI.
 jobRole                 The AWS Job Role ARN that needs to be used to execute the Batch Job.
 logsGroup               The name of the logs group used by Batch Jobs (default: ``/aws/batch``, requires ``22.09.0-edge`` or later).
-volumes                 One or more container mounts. Mounts can be specified as simple e.g. `/some/path` or canonical format e.g. ``/host/path:/mount/path[:ro|rw]``. Multiple mounts can be specifid separating them with a comma or using a list object.
+volumes                 One or more container mounts. Mounts can be specified in simple format e.g. ``/some/path`` or canonical format e.g. ``/host/path:/mount/path[:ro|rw]``. Multiple mounts can be specified by separating them with a comma or using a list object.
 delayBetweenAttempts    Delay between download attempts from S3 (default `10 sec`).
 maxParallelTransfers    Max parallel upload/download transfer operations *per job* (default: ``4``).
 maxTransferAttempts     Max number of downloads attempts from S3 (default: `1`).
diff --git a/docs/container.rst b/docs/container.rst
index fd1ded52dd..a901ce06ca 100644
--- a/docs/container.rst
+++ b/docs/container.rst
@@ -455,7 +455,7 @@ Multiple containers
 It is possible to specify a different Singularity image for each process definition in your pipeline script.
 For example, let's suppose you have two processes named ``foo`` and ``bar``.
 You can specify two different Singularity images
-specifing them in the ``nextflow.config`` file as shown below::
+by specifying them in the ``nextflow.config`` file as shown below::

     process {
         withName:foo {
diff --git a/docs/dsl2.rst b/docs/dsl2.rst
index b8e4cf842b..f2ebd41bae 100644
--- a/docs/dsl2.rst
+++ b/docs/dsl2.rst
@@ -510,7 +510,7 @@ Finally, we have a third project B with a workflow that includes again P1 and P2
     └-main.nf

 With the possibility to keep the template files inside the project L, A and B can use the modules defined in L without any changes.
-A future prject C would do the same, just cloning L (if not available on the system) and including its module script.
+A future project C would do the same, just cloning L (if not available on the system) and including its module script.

 Beside promoting sharing modules across pipelines, there are several advantages in keeping the module template
 under the script path:
diff --git a/docs/faq.rst b/docs/faq.rst
index 007a1fa7d2..83149a90f0 100644
--- a/docs/faq.rst
+++ b/docs/faq.rst
@@ -94,7 +94,12 @@ and ``datasetFile``):
 In our example above would now have the folder ``broccoli`` in the results directory which would contain
 the file ``broccoli.aln``.

-If the input file has multiple extensions (e.g. ``brocolli.tar.gz``), you will want to use
+If the input file has multiple extensions (e.g. ``broccoli.tar.gz``), you will want to use
 ``file.simpleName`` instead, to strip all of them.
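+
+For example, with an input file named ``broccoli.tar.gz``::
+
+    datasetFile.baseName    // broccoli.tar
+    datasetFile.simpleName  // broccoli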
diff --git a/docs/flux.rst b/docs/flux.rst
index 544807f985..c55cbd44f4 100644
--- a/docs/flux.rst
+++ b/docs/flux.rst
@@ -68,7 +68,7 @@ Here is an a demo workflow ``demo.nf`` of a job we want to run!
 }

 We will be using these files to run our test workflow. Next, assuming you don't have one handy,
-let's set up an envrionment with Flux.
+let's set up an environment with Flux.

 Container Environment
 ---------------------
diff --git a/docs/operator.rst b/docs/operator.rst
index 828881bb7a..dffcbc9dab 100644
--- a/docs/operator.rst
+++ b/docs/operator.rst
@@ -1535,7 +1535,7 @@ It prints the following output::
     result = 15

 .. tip::
-    A common use case for this operator is to use the first paramter as an `accumulator`
+    A common use case for this operator is to use the first parameter as an `accumulator` and
     the second parameter as the `i-th` item to be processed.

 Optionally you can specify a `seed` value in order to initialise the accumulator parameter
@@ -1812,7 +1812,7 @@ the required fields, or just specify ``record: true`` as in the example shown be
         .view { record -> record.readHeader }

 Finally the ``splitFastq`` operator is able to split paired-end read pair FASTQ files. It must be applied to a channel
-which emits tuples containing at least two elements that are the files to be splitted. For example::
+which emits tuples containing at least two elements that are the files to be split. For example::

     Channel
         .fromFilePairs('/my/data/SRR*_{1,2}.fastq', flat: true)
@@ -1833,7 +1833,7 @@ Available parameters:
 Field       Description
 =========== ============================
 by          Defines the number of *reads* in each `chunk` (default: ``1``)
-pe          When ``true`` splits paired-end read files, therefore items emitted by the source channel must be tuples in which at least two elements are the read-pair files to be splitted.
+pe          When ``true``, splits paired-end read files; items emitted by the source channel must therefore be tuples in which at least two elements are the read-pair files to be split.
 limit       Limits the number of retrieved *reads* for each file to the specified value.
 record      Parse each entry in the FASTQ file as record objects (see following table for accepted values)
 charset     Parse the content by using the specified charset e.g. ``UTF-8``
diff --git a/docs/plugins.rst b/docs/plugins.rst
index fa5b5cb7bb..ff8ba2c0bd 100644
--- a/docs/plugins.rst
+++ b/docs/plugins.rst
@@ -41,7 +41,14 @@ Alternatively, plugins can be required using the ``-plugins`` command line optio

     nextflow run -plugins nf-hello@0.1.0

 Multiple plugins can be specified by separating them with a comma.
-When specifiying plugins via the command line, any plugin declarations in the Nextflow config file are ignored.
+When specifying plugins via the command line, any plugin declarations in the Nextflow config file are ignored.
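+
+For example, a declaration like the following in the ``nextflow.config`` file would be
+ignored when the ``-plugins`` command line option is provided::
+
+    plugins {
+        id 'nf-hello@0.1.0'
+    }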

 Index
@@ -95,7 +95,7 @@ And this function can be used by the pipeline::

     channel.of( reverseString('hi') )

-The above snipped includes a function from the plugin and allows the channel to call it directly.
+The above snippet includes a function from the plugin and allows the channel to call it directly.

 In the same way as operators, functions can be aliased::
diff --git a/docs/script.rst b/docs/script.rst
index cc88c79aa3..6e029d5ce6 100644
--- a/docs/script.rst
+++ b/docs/script.rst
@@ -221,7 +221,7 @@ Name              Description
 ``launchDir``     The directory where the workflow is run (requires version ``20.04.0`` or later).
 ``moduleDir``     The directory where a module script is located for DSL2 modules or the same as ``projectDir`` for a non-module script (requires version ``20.04.0`` or later).
 ``nextflow``      Dictionary like object representing nextflow runtime information (see :ref:`metadata-nextflow`).
-``params``        Dictionary like object holding workflow parameters specifing in the config file or as command line options.
+``params``        Dictionary like object holding workflow parameters specified in the config file or as command line options.
 ``projectDir``    The directory where the main script is located (requires version ``20.04.0`` or later).
 ``workDir``       The directory where tasks temporary files are created.
 ``workflow``      Dictionary like object representing workflow runtime information (see :ref:`metadata-workflow`).