nf-core/pathogensurveillance
Surveillance of pathogens using population genomics and sequencing
Define where the pipeline should find input data and save output data.
Path to comma-separated file containing information about samples.
string
^\S+\.[ct]sv$
This CSV has one row per sample and contains information such as the location of input files, sample ID, labels, etc. Use this parameter to specify its location. See the documentation for details on formatting this file.
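As an illustrative sketch only (the column names and paths here are assumptions; see the pipeline documentation for the authoritative format), a sample sheet might look like:

```csv
sample,path,path_2
sample1,data/sample1_R1.fastq.gz,data/sample1_R2.fastq.gz
sample2,data/sample2_R1.fastq.gz,data/sample2_R2.fastq.gz
```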
Path to comma-separated file containing information about references.
string
^\S+\.[ct]sv$
This CSV has one row per reference and contains information such as the location of input files, reference ID, labels, etc. Use this parameter to specify its location. See the documentation for details on formatting this file.
The output directory where the results will be saved. You have to use absolute paths to storage if running on Cloud infrastructure.
string
The location to save temporary files for processes. This is only used for some processes that produce large temporary files such as PICARD_SORTSAM.
string
The location to save downloaded files for later use. This is separate from the cached data (usually stored in the 'work' directory), so that the cache can be cleared without having to repeat many large downloads.
string
path_surveil_data
Email address for completion summary.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (~/.nextflow/config) then you don't need to specify this on the command line for every run.
MultiQC report title. Printed as page header, used for filename if not otherwise specified.
string
The path to the Bakta database folder. This or --download_bakta_db must be included.
string
Download the database required for running Bakta. This or --bakta_db must be included. Note that this will download gigabytes of data, so if you are planning to do repeated runs without --resume it would be better to download the database manually according to the Bakta documentation and specify it with --bakta_db.
boolean
Which type of Bakta database to download. Must be 'light' (~2 GB) or 'full' (~40 GB).
string
light
light|full
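The two Bakta options above could be combined in a custom config, for example (a sketch; the parameter names are assumptions based on the descriptions above and should be checked against the pipeline schema):

```groovy
// Hypothetical custom config: download the light Bakta database at runtime.
// Parameter names are assumptions inferred from the descriptions above.
params {
    download_bakta_db = true
    bakta_db_type     = 'light'  // 'light' (~2 GB) or 'full' (~40 GB)
}
```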
Which type of caching to perform. Possible values include 'lenient', 'deep', 'true', and 'false'. 'lenient' caching does not take file modification times into account and 'deep' takes file content into account. See https://www.nextflow.io/docs/latest/process.html#process-cache for more information.
string
true
lenient|deep|false|true
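As a sketch (the parameter name is an assumption based on the description above), lenient caching could be selected in a custom config:

```groovy
// Hypothetical: use lenient caching so file modification times are ignored
// when deciding whether a cached task result can be reused.
params {
    cache_type = 'lenient'
}
```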
Parameters that modify the analysis done by the pipeline.
Maximum depth of reads to be used for all analyses. Samples with more reads are subsampled to this depth.
number
100
When selecting references automatically, only consider references with names that appear to be standard Latin binomials (i.e. no numbers or symbols in the first two words).
boolean
The maximum number/percentage of references representing unique subspecies to download from RefSeq for each sample. Samples with similar initial identifications will usually use the same references, so the total number of references downloaded for a group of samples will depend on the taxonomic diversity of the samples.
number
30
The maximum number/percentage of references representing unique species to download from RefSeq for each sample. Samples with similar initial identifications will usually use the same references, so the total number of references downloaded for a group of samples will depend on the taxonomic diversity of the samples.
number
20
The maximum number/percentage of references representing unique genera to download from RefSeq for each sample. Samples with similar initial identifications will usually use the same references, so the total number of references downloaded for a group of samples will depend on the taxonomic diversity of the samples.
number
10
The number of references most similar to each sample, based on estimated ANI, to include in phylogenetic analyses.
number
3
Same as the 'n_ref_closest' option except that it only applies to references with what appear to be standard Latin binomial names (i.e. two words with no numbers or symbols). This is intended to ensure that a reference with an informative name is present even if it is not the most similar.
number
2
The number of references representing the entire range of ANI relative to each sample. These are meant to provide context for more similar references. For a group of samples, the fewest total references will be selected that satisfy this count for each sample.
number
7
The minimum number of genes needed to conduct a core gene phylogeny. Samples and references will be removed (as allowed by the min_core_samps and min_core_refs options) until this minimum is met.
number
10
The maximum number of genes used to conduct a core gene phylogeny.
number
200
The minimum ANI between a sample and potential reference for that reference to be used for mapping reads from that sample. To force all the samples in a report group to use the same reference, set this value very low.
number
0.85
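The reference-selection parameters above can be tuned together in a custom config. A hedged sketch follows; only n_ref_closest, min_core_samps, and min_core_refs are named in the descriptions above, and the other parameter names are assumptions to be checked against the pipeline schema:

```groovy
// Sketch: keep more close references per sample and allow more distant
// references for read mapping. Parameter names other than n_ref_closest
// are assumptions.
params {
    n_ref_closest = 5     // 5 most ANI-similar references per sample
    min_ref_ani   = 0.7   // lower ANI threshold for mapping references
}
```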
Parameters used to describe centralised config profiles. These should not be edited.
Git commit id for Institutional configs.
string
master
Base directory for Institutional configs.
string
https://raw.githubusercontent.com/nf-core/configs/master
If you're running offline, Nextflow will not be able to fetch the institutional config files from the internet. If you don't need them, then this is not a problem. If you do need them, you should download the files from the repo and tell Nextflow where to find them with this parameter.
Institutional config name.
string
Institutional config description.
string
Institutional config contact information.
string
Institutional config URL link.
string
Set the top limit for requested resources for any single job.
Maximum number of CPUs that can be requested for any single job.
integer
16
Use to set an upper-limit for the CPU requirement for each process. Should be an integer e.g. --max_cpus 1
Maximum amount of memory that can be requested for any single job.
string
64.GB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Use to set an upper-limit for the memory requirement for each process. Should be a string in the format integer-unit e.g. --max_memory '8.GB'
Maximum amount of time that can be requested for any single job.
string
240.h
^(\d+\.?\s*(s|m|h|day)\s*)+$
Use to set an upper-limit for the time requirement for each process. Should be a string in the format integer-unit e.g. --max_time '2.h'
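The per-job limits above follow the standard nf-core convention (max_cpus, max_memory, max_time) and can be set together in a custom config, for example:

```groovy
// Cap per-job resource requests (standard nf-core parameters).
params {
    max_cpus   = 8
    max_memory = '32.GB'
    max_time   = '48.h'
}
```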
Maximum number of CPUs that can be requested for all jobs combined. Should be an integer, e.g. --max_total_cpus 1. Only applies if running the pipeline on a personal computer.
integer
Use to set an upper-limit for the CPU requirement for all jobs combined. Should be an integer e.g. --max_total_cpus 1
Maximum amount of memory that can be requested for all jobs combined. Should be a string in the format integer-unit, e.g. --max_total_memory '8.GB'. Only applies if running the pipeline on a personal computer.
string
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Use to set an upper-limit for the memory requirement for all jobs combined. Should be a string in the format integer-unit e.g. --max_total_memory '8.GB'
Maximum number of jobs that can run at once. Should be an integer e.g. --max_total_jobs 1
integer
Use to set an upper-limit for the jobs to schedule at once. Should be an integer e.g. --max_total_jobs 1
Less common options for the pipeline, typically set in a config file.
Display version and exit.
boolean
Method used to save pipeline results to output directory.
string
The Nextflow publishDir option specifies which intermediate files should be saved to the output directory. This option tells the pipeline what method should be used to move these files. See the Nextflow docs for details.
Designates which files are copied from the work/ directory.
string
Sets the publishDir mode for individual files. The storage footprint of the pipeline can be quite large, and files can be saved twice: both within the work/ directory and within the published output directory. By default, this parameter is set so that intermediate files will be linked from the published directory to their location in the work/ directory instead of being stored twice.
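For example, to store full copies of published files rather than links (publish_dir_mode is the standard nf-core parameter name; 'copy' is one of the modes supported by Nextflow's publishDir directive):

```groovy
// Store full copies of published files instead of linking them to work/.
params {
    publish_dir_mode = 'copy'
}
```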
Email address for completion summary, only when pipeline fails.
string
^([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,5})$
An email address to send a summary email to when the pipeline is completed - ONLY sent if the pipeline does not exit successfully.
Send plain-text email instead of HTML.
boolean
File size limit when attaching MultiQC reports to summary emails.
string
25.MB
^\d+(\.\d+)?\.?\s*(K|M|G|T)?B$
Do not use coloured log outputs.
boolean
Incoming hook URL for messaging service
string
Incoming hook URL for messaging service. Currently, MS Teams and Slack are supported.
Custom config file to supply to MultiQC.
string
Custom logo file to supply to MultiQC. File name must also be set in the MultiQC config file.
string
Custom MultiQC yaml file containing HTML including a methods description.
string
Directory to keep pipeline Nextflow logs and reports.
string
${params.outdir}/pipeline_info
Boolean whether to validate parameters against the schema at runtime
boolean
true
Show all params when using --help
boolean
Run this workflow with Conda. You can also use '-profile conda' instead of providing this parameter.
boolean
Name of queue in HPC environment to run jobs.
string
Base URL or local path to location of pipeline test dataset files
string
https://raw.githubusercontent.com/nf-core/test-datasets/