Spark Job Aborted in Dockerized Jupyter Notebook Environment
I’m encountering an issue while trying to run a Spark job in a Dockerized environment. I have a Docker Compose setup with a Spark cluster built from bitnami/spark images. However, when I execute my Spark code from a JupyterLab container, any action that materializes results (show(), count(), collect(), etc.) fails and the job is aborted.
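For context, here is a minimal sketch of the kind of session setup involved. The service name spark-master and port 7077 are assumptions based on a typical bitnami/spark Compose configuration, so adjust them to match your actual docker-compose.yml:

```python
from pyspark.sql import SparkSession

# Assumed master URL: "spark-master" is the Compose service name and
# 7077 is the default standalone-master port in bitnami/spark setups.
spark = (
    SparkSession.builder
    .appName("docker-test")
    .master("spark://spark-master:7077")
    .getOrCreate()
)

df = spark.range(5)  # tiny DataFrame, just to exercise the cluster
df.show()            # actions like this (show/count/collect) abort the job
```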