I have a directory of approx. 100 fastq files; each file name is of the form LongString.fastq.
Each fastq file is organized into separate lines of information. This is long-read data, so there will be hundreds to thousands of reads in each fastq file. I am working in Bash.
I need to count the character length of the sequence on every 4th line, starting from and including line 2. Each file looks like this (shortened for the example):
$ head -8 FAX08345_abafd786_a8df7914_1131.fastq
>@5741b240-bbb7-463a-a41c-fab01677d2cd
>GTGCCCTCTACTCGTTCAGTTCAGGTCCTTGCTGATGCGTCCGGCGTAGAGGATCGAGATCGATCTCGA
>+
>&&&%%%%&()*)&%%()++))(()(((((('(23///=;:::9;;<<<<<77788;;;<=A>???
>@5b0dc526-2324-400a-b89b-e4056ea82f75
>CTGTGTGACGCCTTCGTTCATTTTCGTGTGCTTGTTAGCAGCCGGATCTCAGTGGTGGTGGTGGTGGT
>+
>#$#$$$$$$$$$%'$%&&&''&&'&&&%%%%'32237@CDEGJJIKKOGIFHSSJSSSILMLSJON
I am using this command to count the read length and output into a new file.
sed -n '2~4p' FAX08345_abafd786_a8df7914_1131.fastq | awk '{ print length }' > len.text
Output file name would be:
FAX08345_abafd786_a8df7914_1131.fastq.len
Output file content would be:
69
68
Name for the second input file:
FAX08345_abafd786_a8df7914_500.fastq
Name for the second output file: FAX08345_abafd786_a8df7914_500.fastq.len
Content of the second input file:
>@5741b240-bbb7-463a-a41c-fab01677d2cd
>GTGCCCTCTACTCGTTCAGTTCAGGTCCTTGCTGATGCGTCCGGCGTAGAGGATCGAGATCGATCTCGAGATCGATCTCGA
>+
>&&&%%%%&()*)&%%()++))(()(((((('(23///=;:::9;;<<<<<77788;;;<=A>???
>@5b0dc526-2324-400a-b89b-e4056ea82f75
>CTGTGTGACGCCTTCGTTCATTTTCGTGTGCTTGTTAGCAGCCGGATCTCAGTGGTGGTGGTGGTGGTGTGGT
>+
>#$#$$$$$$$$$%'$%&&&''&&'&&&%%%%'32237@CDEGJJIKKOGIFHSSJSSSILMLSJON
Content of the second output file:
81
73
I need to automate this for all of the fastq files that I have in my directory. Note, in this example, I am only showing the first 8 lines of the input file, but each file will have hundreds of lines, and I need to return the number of characters in the string on every 4th line, starting from and including line 2.
It sounds like this might be what you’re trying to do:
awk '
FNR == 1 { close(out); out="len."FILENAME }
((FNR-2) % 4) == 0 { print length() > out }
' *.fastq
The above would not produce an output file for an input file that is empty or contains fewer than 2 lines. If that’s an issue then update your question to include such cases in your sample input/output, state how they should be handled, and say which awk variant you use (as that’d be easier to handle with GNU awk’s BEGINFILE/ENDFILE).
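For reference, here’s a minimal sketch (my own addition, assuming GNU awk) of the BEGINFILE/ENDFILE variant mentioned above; it creates the output file up front so even an empty input file produces an (empty) len file:
awk '
BEGINFILE { out="len."FILENAME; printf "" > out }   # create/truncate the output file even if the input is empty
((FNR-2) % 4) == 0 { print length() > out }
ENDFILE { close(out) }
' *.fastq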
I’m assuming when you say “I have a directory of many fastq files” the “many” isn’t enough to exceed your system’s ARG_MAX limit. If it can be, then you’d get the list of files to process by doing printf '%s\n' *.fastq | awk 'BEGIN{ populate ARGV from getline calls } {...}' (google it). If your file names can contain newlines then you’d need GNU awk or similar to handle NUL-terminated input, and then printf '%s\0' *.fastq | awk -v RS='\0' 'BEGIN{ populate ARGV from getline calls } { as above }'.
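As a rough sketch of that ARGV-population idea (an assumption on my part about the wiring; here the file names are read from stdin as awk’s first input, "-", rather than via getline in BEGIN, and printf is a bash builtin so it isn’t subject to ARG_MAX):
printf '%s\n' *.fastq |
awk '
NR == FNR { ARGV[ARGC++]=$0; next }            # first input ("-" = stdin): collect file names into ARGV
FNR == 1 { close(out); out="len."FILENAME }    # subsequent inputs: the fastq files themselves
((FNR-2) % 4) == 0 { print length() > out }
' -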
I’d recommend that, given an input file foo.fastq, instead of creating an output file named len.foo.fastq you create one named foo.fastq.len or foo.len, so that if you have to run the tool again, or run any other tool that operates on .fastq files, the *.fastq shell globbing doesn’t pick up your len files. To do that, instead of out="len."FILENAME use either of:
out=FILENAME".len"
out=FILENAME; sub(/\.[^.]+$/,".len",out)
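Putting that together with the script above (same logic, just the foo.fastq.len naming):
awk '
FNR == 1 { close(out); out=FILENAME".len" }
((FNR-2) % 4) == 0 { print length() > out }
' *.fastq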
Assumptions:
- all input files contain at least 1 row
- all filenames are of the format some_long_string_with_no_periods.fastq
One awk approach:
awk -v start=2 -v inc=4 ' # provide awk with input parameters
FNR==1 { close(outf) # 1st record: close previous output file
outf = FILENAME ".len" # define output file name
i = start # reset line counter
}
FNR==i { print length($0) > outf # ith record: print length to output file
i += inc # increment line counter
}
' *.fastq
Running against OP’s 2x sample input files we get:
$ head *fastq.len
==> FAX08345_abafd786_a8df7914_1131.fastq.len <==
70
69
==> FAX08345_abafd786_a8df7914_500.fastq.len <==
82
74
NOTE: these counts are +1 compared with what OP has shown as the expected output; I’m assuming OP’s data files are slightly different from what’s shown in the question.
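If the leading > really is present in the files but shouldn’t be counted (which would explain the off-by-one), one possible tweak, purely an assumption on my part, is to strip it before measuring, i.e. replace the FNR==i block with:
FNR==i { line = $0
         sub(/^>/,"",line)              # drop a leading ">" if present (assumed artifact)
         print length(line) > outf      # print length of the amended line to output file
         i += inc                       # increment line counter
       }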
This might work for you (GNU sed, wc & parallel):
parallel sed \''2~4!d;s/>//;s/.*/echo -n "&"|wc -m/e'\' {} \>{}.len ::: *.fastq
For each fastq file in the current directory, run a sed program that writes its result to a file named after the input file plus .len.
The sed program processes every fourth line of the file, starting at line 2.
Each line processed has its first > character removed, and the result is passed to another substitution command that invokes the evaluated pipeline (GNU sed’s e flag).
This echoes the line via a pipe to the wc command, which counts the number of characters in the amended line.
N.B. The echo command uses the -n option so that no trailing newline is added (and counted by wc).
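To sanity-check the sed program on its own (outside parallel) you can run it against a single file; with the first sample file exactly as shown in the question it should print 69 and 68, i.e. the expected output:
sed '2~4!d;s/>//;s/.*/echo -n "&"|wc -m/e' FAX08345_abafd786_a8df7914_1131.fastq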
This was the correct answer:
for file in *.fastq; do
output_file="${file}.len"
sed -n '2~4p' "$file" | awk '{ print length }' > "$output_file"
done
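For what it’s worth, the sed | awk pipeline inside the loop can also be collapsed into a single awk call (a minor variation on the same idea; FNR % 4 == 2 selects the same lines as sed -n '2~4p'):
for file in *.fastq; do
    awk 'FNR % 4 == 2 { print length($0) }' "$file" > "${file}.len"
done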