PROBLEM
When I run Snakemake with a cluster profile and leave it running overnight, whether detached in a screen session or launched with nohup snakemake ... &, I run into the error below for each currently running rule/job:
The job status query failed with command: sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime 2024-06-06T03:00 --endtime now --name jobname1
Error message: sacct: error: slurm_persist_conn_open_without_init: failed to open persistent connection to host:hostname: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused
The job status query failed with command: sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime 2024-06-06T03:00 --endtime now --name jobname2
Error message: sacct: error: slurm_persist_conn_open_without_init: failed to open persistent connection to host:hostname: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused
The job status query failed with command: sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime 2024-06-06T03:00 --endtime now --name jobname3
Error message: sacct: error: slurm_persist_conn_open_without_init: failed to open persistent connection to host:hostname: Connection refused
sacct: error: Sending PersistInit msg: Connection refused
sacct: error: Problem talking to the database: Connection refused
Snakemake version: 8.11.0
Snakemake Slurm Executor Plugin version: 0.5.0
Below is the configuration profile being used to run Snakemake with the Slurm plugin:
executor: slurm
jobs: 20
retries: 3
rerun-incomplete: true
rerun-triggers:
- mtime
resources:
- threads=150
- mem_mb=350000
default-resources:
- slurm_account=my-acct
- slurm_partition=my-partition
- mem_mb=8000*attempt
- tmpdir="/path/to/my/tmpdir"
set-resources:
  big_rule: &id001
    mem_mb: 64000*attempt
  another_big_rule: *id001
  more_big_rule: *id001
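For reference, the workflow is launched detached roughly as follows (a minimal sketch; the profile directory and log file name are placeholders rather than my exact paths):
# Point Snakemake at the profile directory containing the config above and detach it.
nohup snakemake --profile /path/to/profile_dir > snakemake_overnight.log 2>&1 &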
Noticeably, this error has occurred multiple times in the past, and the jobs always fail at 3:00 AM the following morning. Note the start time in this line of the error output:
sacct -X --parsable2 --noheader --format=JobIdRaw,State --starttime **2024-06-06T03:00** --endtime now --name jobname1
Also of note, an IT representative from our HPC team mentioned that they have had success running an overnight Nextflow workflow inside screen. I have since tried their recommendation of using screen, but again encountered the error above.
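To narrow down whether the 3:00 AM failure is specific to Snakemake or reflects slurmdbd itself becoming unreachable at that time (for example during a nightly database maintenance or backup window), I am considering leaving a small probe running overnight. This is only a sketch; the log path and five-minute polling interval are arbitrary choices of mine:
#!/usr/bin/env bash
# Poll sacct periodically and record whether the accounting database answers.
# If this probe also logs failures around 03:00, the outage is on the Slurm side
# and independent of Snakemake.
LOG="$HOME/sacct_probe.log"
while true; do
    ts=$(date '+%F %T')
    if sacct -X --parsable2 --noheader --format=JobIdRaw,State \
            --starttime "$(date +%F)" --endtime now > /dev/null 2>&1; then
        echo "$ts OK" >> "$LOG"
    else
        echo "$ts FAIL" >> "$LOG"
    fi
    sleep 300
done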
ATTEMPTED SOLUTIONS
I have run the same workflow with a “local” profile on a high-resource interactive node of the same HPC, confirming that the workflow completes normally when the Slurm executor is not involved.
The following GitHub pull request indicates that this problem was addressed in release 0.1.3 of the Snakemake Slurm Executor Plugin: https://github.com/snakemake/snakemake-executor-plugin-slurm/pull/5 . Yet the issue persists with my later version (0.5.0).
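One workaround I am considering, but have not tested (purely a sketch; the wrapper idea, the real sacct path, and the retry/delay values are my own assumptions, not anything the plugin documents), is to place a small sacct shim earlier on the PATH of the Snakemake process so that a brief accounting-database outage is retried instead of immediately surfacing as a failed status query:
#!/usr/bin/env bash
# Hypothetical "sacct" shim: forward all arguments to the real sacct and retry a few
# times if it fails, e.g. while slurmdbd is briefly unavailable during maintenance.
REAL_SACCT=/usr/bin/sacct   # adjust to the real sacct location on the cluster
for attempt in 1 2 3 4 5; do
    if "$REAL_SACCT" "$@"; then
        exit 0
    fi
    sleep 60   # wait a minute before retrying the query
done
exit 1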
QUESTION
I would like to approach my HPC IT team well informed, so that I can be respectful of their time and effort. Is this problem likely to be caused by Snakemake, or is it more likely related to how my institution’s HPC is configured and how Snakemake interacts with it? Or is there additional information I could provide that would help pinpoint the cause of the issue?