I am building a snakemake pipeline for some bioinformatics analyses, and I'm a beginner with the tool. The end users will mainly be biologists with little to no IT training, so I'm trying to make it quite user-friendly, in particular by requiring as little information as possible in the config file. (A previous bioinformatician at the institute had built a more robust pipeline, but it required a lot of information in the config file and fell into disuse.)
One rule that I would like to implement is to autodetect which .fastq (raw data) files are present in their dedicated directory, align them all, and run some QC steps. In particular, deepTools has a plotFingerprint tool that compares the distribution of reads in a control data file to the distribution in the treatment data files. For this, I would also like to autodetect which batches of data files go together.
My file architecture is set up like so: DATA/<FILE TYPE>/<EXP NAME>/<data files>, so for example DATA/FASTQ/CTCF_H3K9ac/ contains:
CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
CTCF_T7_neg_2.fq.gz
CTCF_T7_neg_3.fq.gz
CTCF_T7_pos_2.fq.gz
CTCF_T7_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
H3K9ac_T7_neg_2.fq.gz
H3K9ac_T7_neg_3.fq.gz
H3K9ac_T7_pos_2.fq.gz
H3K9ac_T7_pos_3.fq.gz
Input_T1_pos.fq.gz
Input_T7_neg.fq.gz
Input_T7_pos.fq.gz
For those not familiar with ChIP-seq: each Input file is a control data file used for normalisation, and the CTCF and H3K9ac files are experimental data to be normalised. So one batch of files I would like to process and then send to plotFingerprint would be:
Input_T1_pos.fq.gz
CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
With that in mind, I would need to give my bamFingerprint snakemake rule the paths to the aligned versions of those files, i.e.
DATA/BAM/CTCF_H3K9ac/Input_T1_pos.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_3.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_3.bam
(I would also need each of those files indexed, i.e. all of those paths again with a .bai suffix among the snakemake inputs, but that's trivial once I've managed to get all the .bam paths. The snakemake rules I have to get up to that point all work; I've tested them independently.)
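For context, the body of that rule is simple once the inputs resolve; here is a minimal sketch of what I'm aiming for, with the output path and options illustrative only (as far as I understand, deepTools picks up each .bai sitting next to its .bam, so the bai inputs are only there to make snakemake build the indexes first):

rule bamFingerprint:
    input:
        bam=...,  # the five .bam paths above
        bai=...   # the matching .bai indexes
    output:
        "DATA/QC/{expdir}/fingerprint_{expconds}.png"  # hypothetical location
    shell:
        "plotFingerprint --bamfiles {input.bam} --plotFile {output}"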
Originally, I tried using list comprehensions to create the list of input files, using this:
import glob
import re

def exps_from_inp(ifile): # not needed?
    # from an Input file path, glob every file in the same directory
    # that shares the same conditions and extension (i.e. the batch)
    path, fname = ifile.split("Input")
    conds, ftype = fname.split(".", 1)
    return [f for f in glob.glob(path + "*" + conds + "*." + ftype)]

def bam_name_from_fq_name(fqpath, suffix=""):
    if re.search("filtered", fqpath):
        # need to skip files that were already filtered and could be in the same dir
        return None
    else:
        return fqpath.replace("FASTQ", "BAM").replace(".fq.gz", ".bam") + suffix

rule bamFingerprint:
    input:
        bam=[bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")],
        bai=[bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")]
    ...
Those list comprehensions generated the correct list of files when I tried them in Python, using the values that expdir and expconds take when I dry-run the pipeline. However, during that dry run, the {input.bam} wildcard in the shell command never gets assigned a value.
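For example, with the concrete values filled in, the helper returns exactly the batch listed above (glob order depends on the filesystem):

>>> exps_from_inp("DATA/FASTQ/CTCF_H3K9ac/Input_T1_pos.fq.gz")
['DATA/FASTQ/CTCF_H3K9ac/CTCF_T1_pos_2.fq.gz',
 'DATA/FASTQ/CTCF_H3K9ac/CTCF_T1_pos_3.fq.gz',
 'DATA/FASTQ/CTCF_H3K9ac/H3K9ac_T1_pos_2.fq.gz',
 'DATA/FASTQ/CTCF_H3K9ac/H3K9ac_T1_pos_3.fq.gz',
 'DATA/FASTQ/CTCF_H3K9ac/Input_T1_pos.fq.gz']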
I went digging in the docs and found this page, which implies that snakemake does not handle list comprehensions and that the expand function is its replacement. In my case the experiment numbers (the _2 and _3 in the file names) are pretty variable: they're sometimes just random numbers, some experiments have 2 reps and some have 3, and so on. All of this means that using expand without a lot of additional bookkeeping would be tricky (the rep numbers are the sticking point; finding the experiment names would be fairly easy).
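To illustrate the problem (values here taken from the example above): expand builds the full cross product of the values it is given, so one rep list applies to every target, and the rep-less Input file doesn't fit the pattern at all:

# expand() yields every combination of the given values
expand("DATA/BAM/{expdir}/{target}_{conds}_{rep}.bam",
       expdir="CTCF_H3K9ac", target=["CTCF", "H3K9ac"],
       conds="T1_pos", rep=[2, 3])
# -> the four CTCF/H3K9ac .bam paths above, but this cannot express
#    per-experiment rep sets (e.g. one target with reps 2 and 3,
#    another with 1, 2 and 5) without extra bookkeeping, and it
#    misses DATA/BAM/CTCF_H3K9ac/Input_T1_pos.bam entirely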
I then tried wrapping the list comprehensions in functions and using those in the input of my rule, but that failed, as did wrapping those functions in one big one and using unpack (although I could be using it wrong; I'm not entirely sure I understood how unpack works).
def get_fingerprint_bam_inputfiles(wildcards):
    # one dict with both named input lists, for use with unpack()
    return {"bams": get_fingerprint_bam_bams(wildcards),
            "bais": get_fingerprint_bam_bais(wildcards)}

def get_fingerprint_bam_bams(wildcards):
    # fill in the concrete wildcard values before globbing
    return [bam_name_from_fq_name(f)
            for f in exps_from_inp(f"DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]

def get_fingerprint_bam_bais(wildcards):
    return [bam_name_from_fq_name(f, suffix=".bai")
            for f in exps_from_inp(f"DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]

rule bamFingerprint:
    input:
        bams=get_fingerprint_bam_bams,
        bais=get_fingerprint_bam_bais
    ...

rule bamFingerprint_unpack:
    input:
        unpack(get_fingerprint_bam_inputfiles)
    ...
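(For reference, my understanding of unpack from the docs is that it spreads a dict returned by an input function into named inputs, as in this minimal sketch with made-up file names:

def my_inputs(wildcards):
    return {"treat": f"{wildcards.sample}.treat.bam",
            "ctrl": f"{wildcards.sample}.ctrl.bam"}

rule example:
    input:
        unpack(my_inputs)  # exposes {input.treat} and {input.ctrl}
    output:
        "{sample}.txt"
    shell:
        "some_tool {input.treat} {input.ctrl} > {output}"

so I tried to mirror that shape with get_fingerprint_bam_inputfiles.)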
So now I'm feeling pretty stuck with this approach. How can I autodetect these experiment batches and give the correct bam file paths to my bamFingerprint rule?