I have 5 workers (“Processors”) that are supposed to process 100 arguments (maybe through some sort of queue?), using a specific method (process()
). I want the 5 “processors” to run in parallel. I researched both concurrent.futures
and multiprocessing
, but cannot find any example like this:
<code>import time
import numpy as np

class Processor:
    def __init__(self, name):
        self.name = name

    def process(self, arg):
        print(f'{self.name} : processing {arg}...')
        time.sleep(arg)

l_processors = [Processor(f'Processor_{i}') for i in range(5)]
l_arguments = list(range(100))
np.random.shuffle(l_arguments)
# ... what to write beyond this point ?
</code>
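Here is a rough sketch of the kind of thing I was imagining: put the arguments in a shared queue and have each of my 5 named Processor objects pull from it in a thread pool. The run() helper and the scaled-down sleep are my own additions, and I don't know if this is the right approach:

```python
import queue
import time
from concurrent.futures import ThreadPoolExecutor

class Processor:
    def __init__(self, name):
        self.name = name

    def process(self, arg):
        print(f'{self.name} : processing {arg}...')
        time.sleep(arg * 0.001)  # scaled down so the sketch finishes quickly

    def run(self, q):
        # Keep pulling arguments until the shared queue is empty.
        processed = 0
        while True:
            try:
                arg = q.get_nowait()
            except queue.Empty:
                return processed
            self.process(arg)
            processed += 1

l_processors = [Processor(f'Processor_{i}') for i in range(5)]

q = queue.Queue()
for arg in range(100):
    q.put(arg)

# One thread per Processor, so the same 5 objects are reused.
with ThreadPoolExecutor(max_workers=5) as ex:
    futures = [ex.submit(p.run, q) for p in l_processors]
    total = sum(f.result() for f in futures)

print(f'processed {total} arguments')
```

I used threads rather than multiprocessing here because I think multiprocessing would pickle copies of my Processor objects instead of sharing the originals, but I'm not sure that reasoning is correct.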
Any idea ?
Thanks in advance for any answer.
PS : I have to use these 5 Processor objects, I cannot use an already-made all-wrapped-up “ProcessorPool(n_workers=5)”