Using Snakemake 8.11.6 on a Slurm cluster (via the Slurm executor plugin), I’m running a workflow that builds on the following input:
Input
- a Git repository that is being cloned,
- two different `tar.gz` files retrieved from Zenodo using the Zenodo Storage plugin.
The Git repo contains two directories, say `a` and `b`, with some input files. One tarball (say `b`) contains files with the same names as in the Git directory `b`, but the other tarball (say `a`) contains differently named files.
Issue only when running remotely
Running this on a Slurm cluster, everything works fine for the Git repo and one of the two Zenodo tarballs. However, I’m having issues with expanding files extracted from the second tarball: each wildcard rule apparently tries to retrieve the tarball from Zenodo again, which fails because I run into HTTP 429 (Too Many Requests) errors.
Weirdly, all works fine and the workflow completes successfully when running locally on my laptop!
Details
Here’s my simplified workflow (with unnecessary details redacted):
```python
checkpoint retrieve_data:
    input:
        a=storage.zenodo("zenodo://record/12345/a.tar.gz"),
        b=storage.zenodo("zenodo://record/678910/b.tar.gz"),
    output:
        a=directory("resources/a"),
        b=directory("resources/b"),
        git=directory("resources/git"),
    shell:
        "mkdir -p {output.a} && tar -xzvf {input.a} -C {output.a} && "
        "mkdir -p {output.b} && tar -xzvf {input.b} -C {output.b} && "
        "git clone --depth 1 -b my-branch --single-branch <git-url> {output.git}"


rule work_a:
    input:
        input_json='resources/git/a/{file_name}.json',
        zenodo_files=[f"resources/a/{i}.json" for i in range(1, 10)]
    output:
        <output-files>
    script:
        'scripts/work_a.py'


rule work_b:
    input:
        input_json='resources/git/b/{file_name}.json',
        zenodo_files=["resources/b/{file_name}.json"]
    output:
        <output-files>
    script:
        'scripts/work_b.py'


def _expand(wildcards) -> list[str]:
    git_dir = checkpoints.retrieve_data.get(**wildcards).output['git']
    file_names_a = glob_wildcards(os.path.join(git_dir, 'a/{file_name}.json')).file_name
    file_names_b = glob_wildcards(os.path.join(git_dir, 'b/{file_name}.json')).file_name
    return (
        expand(f'{RESOURCES_DIR}a/{{file_name}}.json', file_name=file_names_a)
        + expand(f'{RESOURCES_DIR}b/{{file_name}}.json', file_name=file_names_b)
    )


rule all:
    input:
        _expand
    default_target: True
```
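For reference, here is a minimal sketch of what `_expand` returns, assuming `RESOURCES_DIR = "resources/"` (its actual value is redacted above) and that the Git checkout contains just `a/102938.json` and `b/293847.json` (the file names appearing in the logs below):

```python
# Hypothetical stand-in for the checkpoint-driven _expand above.
# file_names_a / file_names_b mimic glob_wildcards(...).file_name,
# which yields one list entry per matched file.
RESOURCES_DIR = "resources/"   # assumed value (redacted in the post)
file_names_a = ["102938"]      # names found in resources/git/a/
file_names_b = ["293847"]      # names found in resources/git/b/

targets = (
    [f"{RESOURCES_DIR}a/{name}.json" for name in file_names_a]
    + [f"{RESOURCES_DIR}b/{name}.json" for name in file_names_b]
)
print(targets)  # ['resources/a/102938.json', 'resources/b/293847.json']
```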
An example log showing the error looks like this:
```
Injecting conda environment workflow/envs/global.yaml.
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided remote nodes: 1
Provided resources: ...
Select jobs to execute...
Execute 1 jobs...
[Mon Jun 10 14:07:00 2024]
rule work_a:
    input: git/a/102938.json, resources/a/2.json, resources/a/8.json, resources/a/3.json, resources/a/4.json, resources/a/7.json, resources/a/6.json, resources/a/5.json, resources/a/1.json, resources/a/9.json
    output: <output-files>
    jobid: 0
    reason: Forced execution
    wildcards: file_name=102938
    resources: ...
Injecting conda environment workflow/envs/global.yaml.
Building DAG of jobs...
429 Client Error: TOO MANY REQUESTS for url: https://zenodo.org/record/12345?token=<TOKEN>, attempt 1/3 failed - retrying in 3 seconds...
429 Client Error: TOO MANY REQUESTS for url: https://zenodo.org/record/12345?token=<TOKEN>, attempt 2/3 failed - retrying in 6 seconds...
429 Client Error: TOO MANY REQUESTS for url: https://zenodo.org/record/12345?token=<TOKEN>, attempt 3/3 failed - giving up!
WorkflowError:
Failed to check existence of zenodo://record/12345/a.tar.gz
HTTPError: 429 Client Error: TOO MANY REQUESTS for url: https://zenodo.org/record/12345?token=<TOKEN>
srun: error: <NODE>: task 0: Exited with exit code 1
Error in rule work_a:
    jobid: 0
    input: git/a/102938.json, resources/a/2.json, resources/a/8.json, resources/a/3.json, resources/a/4.json, resources/a/7.json, resources/a/6.json, resources/a/5.json, resources/a/1.json, resources/a/9.json
    output: <output-files>
    log: logs/work_a/work_a-102938.log (check log file(s) for error details)
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Storing output in storage.
WorkflowError:
At least one job did not complete successfully.
```
In contrast, an example log file for the `work_b` rule looks like this:
```
Injecting conda environment workflow/envs/global.yaml.
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided remote nodes: 1
Provided resources: ...
Select jobs to execute...
Execute 1 jobs...
[Mon Jun 10 14:06:43 2024]
rule work_b:
    input: git/b/293847.json, resources/b/293847.json
    output: <output-files>
    jobid: 0
    reason: Forced execution
    wildcards: file_name=293847
    resources: ...
Injecting conda environment workflow/envs/global.yaml.
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Provided resources: ...
Select jobs to execute...
Execute 1 jobs...
localrule work_b:
    input: git/b/293847.json, resources/b/293847.json
    output: <output-files>
    jobid: 0
    reason: Forced execution
    wildcards: file_name=293847
    resources: ...
Finished job 0.
1 of 1 steps (100%) done
Storing output in storage.
Finished job 0.
1 of 1 steps (100%) done
Storing output in storage.
```
I’m at a loss to understand why `work_a` would try to retrieve the Zenodo file over and over again (there’s a long and growing list of `FAILED` messages for this rule), while `work_b` works just fine.
As I said, the workflow runs absolutely smoothly locally (`snakemake -c 8 --keep-storage-local-copies --sdm conda --configfile config/config.yml --directory ~/tmp/something`), but fails with the above errors when running on the Slurm cluster (`nohup snakemake --profile <profile> --keep-storage-local-copies --sdm conda --configfile config/config.yml --directory /scratch/user/something &`).