When I run a Snakemake workflow, the start time of each job is logged to the console, for example:
[Fri Jul 12 10:39:15 2024]
How can I adapt the logging to also show the duration for each job and the total duration in ms?
a) Is there a logging config file where I could adapt a format string, etc.?
b) Is there a profiling tool for Snakemake that helps to monitor and optimize the performance of Snakemake workflows?
My use case is to compare different solvers for pypsa-eur and learn more about their limitations. I could manually subtract the start times of consecutive jobs, but that is tedious and error-prone. Here is the console output of a minimal example run:
Assuming unrestricted shared filesystem usage.
Building DAG of jobs...
Provided cores: 8
Rules claiming more threads will be scaled down.
Job stats:
job         count
--------  -------
run_main        1
total           1
Select jobs to execute...
Execute 1 jobs...
[Fri Jul 12 10:39:15 2024]
localrule run_main:
input: input/input.txt
output: output/output.txt
jobid: 0
reason: Code has changed since last execution
resources: tmpdir=C:\Users\eis\AppData\Local\Temp\PyCharmPortableTemp
Running snake_script.py at C:\python_env\workspace\resilient\snakemake_demo
Processed data written to output/output.txt
[Fri Jul 12 10:39:15 2024]
Finished job 0.
1 of 1 steps (100%) done
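For reference, this is the kind of manual timestamp subtraction I would like to avoid. A minimal Python sketch that parses the bracketed timestamps from a saved console log and pairs each job start with the timestamp preceding the next `Finished job` line (the pairing heuristic is my own assumption about the log layout, not an official Snakemake API):

```python
import re
from datetime import datetime

# Matches Snakemake's bracketed ctime-style timestamps, e.g. [Fri Jul 12 10:39:15 2024]
TS = re.compile(r"^\[\w{3} \w{3} \s?\d+ \d{2}:\d{2}:\d{2} \d{4}\]$")

def parse_durations(log_lines):
    """Return per-job durations in ms, computed by subtracting each job's
    start timestamp from the timestamp printed just before 'Finished job'."""
    stamps = []  # (line index, parsed datetime) for every timestamp line
    for i, line in enumerate(log_lines):
        line = line.strip()
        if TS.match(line):
            stamps.append((i, datetime.strptime(line, "[%a %b %d %H:%M:%S %Y]")))
    durations = []
    for (i1, t1), (i2, t2) in zip(stamps, stamps[1:]):
        # A timestamp immediately followed by 'Finished job' closes the previous job
        if i2 + 1 < len(log_lines) and log_lines[i2 + 1].strip().startswith("Finished job"):
            durations.append((t2 - t1).total_seconds() * 1000)
    return durations
```

The total duration in ms is then just `sum(parse_durations(lines))`, but the resolution is limited to whole seconds because the console timestamps carry no sub-second part, which is why I am asking for a proper logging or profiling hook instead.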