The code below, which I’ve modeled after the code I’m working on and simplified for the purpose of this question, doesn’t seem to be using p.join() properly to manage the processes created by p.start(). As a result, there’s a memory leak, and the Kubernetes (K8s) pod where this code runs experiences resource exhaustion. Is there something obvious I’m missing?
import multiprocessing
import random
import time

def process_C(semaphore):
    try:
        sleep_time = random.random() * 6
        print("Process C sleeping for {:.2f} seconds".format(sleep_time))
        if sleep_time > 4.2:
            raise Exception("Random exception")
        time.sleep(sleep_time)
        print("Process C completed")
        semaphore.release()
    except Exception as e:
        print("Process C failed: {}".format(e))
        semaphore.release()

def process_B(semaphore):
    p = multiprocessing.Process(target=process_C, args=(semaphore,))
    semaphore.acquire()
    p.start()
    p.join(timeout=0)  # Set timeout to 0 to prevent blocking
    print("Process B completed")

def process_A(semaphore):
    process_B(semaphore)
    print("Process A completed")

if __name__ == "__main__":
    semaphore = multiprocessing.Semaphore(20)
    while True:
        process_A(semaphore)
I tried checking whether the processes are still alive and terminating them one by one, with no success.
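For reference, here is roughly the kind of alive-check/reap loop I attempted (a hypothetical reconstruction, not my exact code — the `children` list and `reap_children` helper are names I’m using for illustration):

```python
# Hypothetical sketch of the attempt: keep a list of spawned processes
# and periodically join the ones that have finished so their resources
# are released, keeping only the ones still running.
import multiprocessing
import time

children = []

def reap_children():
    still_alive = []
    for p in children:
        if p.is_alive():
            still_alive.append(p)
        else:
            p.join()  # reap the finished child process
    children[:] = still_alive  # mutate in place so callers see the update
```

Even calling something like `reap_children()` inside the main loop did not stop the pod’s resource usage from growing.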