Link to mypy: https://mypy-play.net/?mypy=latest&python=3.12&gist=a4da5db5bfbdf1e6bddce442286cc843
More often than not I find myself calling several APIs and then collecting the results. Since the requests are independent of each other, they can be made in parallel.
from typing import Any, Callable, Dict, List, Tuple, TypeVar, overload
from concurrent.futures import ThreadPoolExecutor

def get_airflow() -> str:
    return "a"

def get_prometheus() -> Dict[str, str]:
    return {"b": "c"}

def get_zabbix() -> List[str]:
    return ["d", "e"]

adata = get_airflow()
pdata = get_prometheus()
zdata = get_zabbix()
I can parallelize it using ThreadPoolExecutor:
exe = ThreadPoolExecutor()
arr = [
    exe.submit(get_airflow),
    exe.submit(get_prometheus),
    exe.submit(get_zabbix),
]
adata = arr[0].result()
pdata = arr[1].result()
zdata = arr[2].result()
However, this design requires that I keep a separate list of submit calls and a separate list of variable assignments, and I have to keep the two lists in sync without mixing up the order. Is there a way I could write this better? Something along the lines of:

adata, pdata, zdata = parallelize(get_airflow, get_prometheus, get_zabbix)
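To make the goal concrete, this is roughly the inference I would like to get from mypy for the unpacked variables (the reveal_type calls are only illustrative and assume the stub getters above):

reveal_type(adata)  # desired: str
reveal_type(pdata)  # desired: Dict[str, str]
reveal_type(zdata)  # desired: List[str]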
How can I write such a function so that the typing information is preserved? I tried the following, but I do not see how to preserve the types when the number of callables is dynamic, i.e. how to type a tuple with a variable number of differently-typed elements:
T1 = TypeVar('T1')
T2 = TypeVar('T2')
T3 = TypeVar('T3')
@overload
def parallelize(a: Callable[[], T1]) -> Tuple[T1]: ...
@overload
def parallelize(a: Callable[[], T1], b: Callable[[], T2]) -> Tuple[T1, T2]: ...
@overload
def parallelize(a: Callable[[], T1], b: Callable[[], T2], c: Callable[[], T3]) -> Tuple[T1, T2, T3]: ...
def parallelize(*cbs: Callable[[], Any]) -> Tuple[Any, ...]:
    with ThreadPoolExecutor() as exe:
        # submit everything first, then collect the results in submission order
        return tuple(f.result() for f in [exe.submit(cb) for cb in cbs])
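As far as I can tell, the three overloads above do give a precise type for this particular call (illustrative only; it assumes the getters and the parallelize definition above):

result = parallelize(get_airflow, get_prometheus, get_zabbix)
reveal_type(result)  # matches the 3-argument overload: Tuple[str, Dict[str, str], List[str]]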
But this approach needs a separate overload for every possible number of callables. How can I write such a function so that the typing information is preserved for any number of callables? Is there a better way to do it?