Kafka Debezium connector stuck at snapshot of large data
I set up Elasticsearch, Kibana, MongoDB, and Kafka on the same Linux server for development purposes. The server has 30 GB of memory and enough disk space. I'm using a Debezium connector to copy a large collection of about 70 GB from MongoDB to Elasticsearch. I have set memory limits for each of Elasticsearch, MongoDB, and Kafka, because sometimes one process uses up the available system memory and prevents the other processes from working.
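For context, a Debezium MongoDB source connector like this is typically registered through the Kafka Connect REST API. The sketch below shows a minimal registration in Python; the connector name, connection string, database/collection names, and topic prefix are placeholders (and the property names assume a recent Debezium 2.x release), not my actual configuration.

```python
import json
import urllib.request

# Hypothetical connector configuration -- host, database, and collection
# names are placeholders, not the real setup from this question.
connector = {
    "name": "inventory-mongodb-source",
    "config": {
        "connector.class": "io.debezium.connector.mongodb.MongoDbConnector",
        "mongodb.connection.string": "mongodb://localhost:27017/?replicaSet=rs0",
        "topic.prefix": "dbserver1",
        "collection.include.list": "inventory.products",
        # The initial snapshot of the existing large collection happens here,
        # before the connector switches to streaming changes from the oplog.
        "snapshot.mode": "initial",
    },
}

# Kafka Connect's REST API listens on port 8083 by default.
req = urllib.request.Request(
    "http://localhost:8083/connectors",
    data=json.dumps(connector).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```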
Distributed tracing in outbox
I have implemented the outbox pattern and it is working correctly. I am building the connector image with the following Dockerfile: