Assume I have a library "lib" that logs some things via the logging module, i.e. there is a file ./lib/func.py that’s somewhat like

```python
import logging

func_logger = logging.getLogger(__name__)

async def f(x):
    func_logger.info(f"calculating f({x})...")
    f_x = ...
    func_logger.info(f"f({x}) = {f_x}")
    return f_x
```
and there is a ./main.py that’s somewhat like

```python
import asyncio
import logging

from lib.func import f

async def main():
    return await asyncio.gather(f("a"), f("b"))

if __name__ == "__main__":
    asyncio.run(main())
```
(Technically, I’m calling other functions that call something like f repeatedly, but I don’t think this should matter here.)

Now I can log all of this to a single file by changing my main() to

```python
async def main():
    # this catches `func_logger`'s logs since its name starts with "lib."
    lib_logger = logging.getLogger("lib")
    lib_logger.setLevel(logging.DEBUG)

    handler = logging.FileHandler("my_logs.log")
    handler.setLevel(logging.DEBUG)
    lib_logger.addHandler(handler)

    return await asyncio.gather(f("a"), f("b"))
```
but then the records from the two parallel calls end up interleaved in that single file.
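For example, my_logs.log ends up looking something like this (the exact interleaving varies from run to run, and the values are elided just like in the snippet above):

```
calculating f(a)...
calculating f(b)...
f(b) = ...
f(a) = ...
```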
So my question is: is there a way to write f’s logs for the different parallel calls into separate files ("a.log" and "b.log", say) without passing around a metadata object and without changing the code in the lib folder?
Using a context manager or setting a logging variable globally might work if I ran the tasks sequentially, but these approaches don’t seem to work when everything is supposed to run in parallel.
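To make that concrete, here is a minimal sketch of the kind of global-state workaround I mean (set_log_file and run_one are made-up names, not part of lib). Swapping the handler on the shared "lib" logger routes everything correctly when the calls run one after another, but under asyncio.gather both tasks mutate the same logger, so the records land in whichever file happened to be configured last:

```python
import asyncio
import logging

from lib.func import f

lib_logger = logging.getLogger("lib")
lib_logger.setLevel(logging.DEBUG)

def set_log_file(path):
    # Swap out the single, shared handler -- purely global state.
    for h in list(lib_logger.handlers):
        lib_logger.removeHandler(h)
    lib_logger.addHandler(logging.FileHandler(path))

async def run_one(x):
    set_log_file(f"{x}.log")  # both parallel tasks race on this
    return await f(x)

async def main():
    # Run sequentially and each file gets the right records:
    #     await run_one("a"); await run_one("b")
    # Run in parallel and records from both calls end up in "b.log"
    # (or are split arbitrarily, depending on when each task awaits).
    return await asyncio.gather(run_one("a"), run_one("b"))

if __name__ == "__main__":
    asyncio.run(main())
```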