I have some Celery code that I started to play with.
I have RabbitMQ in a Docker container, and the result_backend points to Postgres (or SQLite when running locally).
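For context, here is a minimal sketch of the setup I'm describing; the module name, task, credentials, and connection strings are placeholders, not my real config:

```python
# tasks.py -- minimal sketch of the setup (names/URLs are illustrative)
import os

from celery import Celery

# Result backend: Postgres normally, SQLite when running locally.
if os.environ.get("LOCAL"):
    backend = "db+sqlite:///results.db"
else:
    backend = "db+postgresql://user:password@localhost/celery_results"

app = Celery(
    "tasks",
    broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ in Docker
    backend=backend,
)


@app.task
def add(x, y):
    return x + y
```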
For some odd reason, when running in multiprocessing (prefork) mode, tasks get stuck in RabbitMQ. Why is that?
If I run Celery with the regular pool implementation (prefork/multiprocessing), every task I fire gets stuck and never returns a result (if I don't set a timeout, it blocks forever), and in the RabbitMQ management console I can see them sitting in the unacked state.
If I close the Celery app and reopen it, all those stuck tasks are recalled (task … received), but they remain stuck.
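This is roughly how I fire tasks and wait on them (illustrative only, assuming the `add` task from the sketch above):

```python
# Fire a task and block on the result.
from tasks import add

result = add.delay(2, 3)

# Without a timeout this blocks forever in prefork mode;
# with one, it raises celery.exceptions.TimeoutError instead.
print(result.get(timeout=10))
```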
If I start the app once again, but this time with `-P eventlet` or `-P threads`, all of the stuck tasks are recalled and resolved (and in RabbitMQ all the unacked messages become ready).
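The worker invocations look roughly like this (assuming the app module is called `tasks`):

```sh
# Default (prefork/multiprocessing) pool -- tasks get stuck unacked:
celery -A tasks worker --loglevel=info

# Eventlet or threads pool -- the same stuck tasks are received and resolved:
celery -A tasks worker -P eventlet
celery -A tasks worker -P threads
```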
Also, when running in multiprocessing mode I see no entries in the result backend, but after switching to the threads pool, new entries appear (for the tasks fired while Celery was in multiprocessing mode).