We are facing issues with our Spark application that loads data from S3 to Elasticsearch. For some weeks now these jobs have not been shutting down correctly in Kubernetes.
Description of our jobs: first the Spark job performs some Spark operations, which all go well; after that it reads and writes data to Elasticsearch before closing, and that is where it goes wrong. We have already noticed that the communication with Elasticsearch is working, as we do receive the responses, but afterwards the Java process keeps hanging when trying to move on.
A successful job will contain the following debug logs:
DEBUG RestClient: request [HEAD https://xxx:443/my-index] returned [HTTP/1.1 200 OK]
DEBUG JavaClient: Http Response Response{requestLine=HEAD / my-index HTTP/1.1, host=https://xxx:443, response=HTTP/1.1 200 OK}
res1: Boolean = true
Our failing jobs, however, keep hanging after
DEBUG RestClient: request [HEAD https://xxx:443/my-index] returned [HTTP/1.1 200 OK]
and never continue to
DEBUG JavaClient: Http Response Response{requestLine=HEAD / my-index HTTP/1.1, host=https://xxx:443, response=HTTP/1.1 200 OK}
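For context, the call behind these logs is an index-existence check through the Elasticsearch Java client on top of the low-level RestClient. Below is a minimal sketch of such a check; the host and index name are the placeholders from the logs above, and this is only an illustration of the call path, not our exact code:

    import org.apache.http.HttpHost
    import org.elasticsearch.client.RestClient
    import co.elastic.clients.json.jackson.JacksonJsonpMapper
    import co.elastic.clients.transport.rest_client.RestClientTransport
    import co.elastic.clients.elasticsearch.ElasticsearchClient

    // Low-level RestClient (the "RestClient" in the debug logs) wrapped by the 8.x Java client
    val restClient = RestClient.builder(new HttpHost("xxx", 443, "https")).build()
    val transport  = new RestClientTransport(restClient, new JacksonJsonpMapper())
    val client     = new ElasticsearchClient(transport)

    // Sends HEAD https://xxx:443/my-index and unwraps the boolean,
    // i.e. the "res1: Boolean = true" in a successful run
    client.indices().exists(b => b.index("my-index")).value()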
We have been doing a lot of tests in a spark-shell opened inside the hanging executor, and we noticed that adding
--conf "spark.driver.extraJavaOptions=-Dscala.concurrent.context.maxThreads=16"
made it possible to execute some successful HEAD requests, with both 200 and 404 responses completing correctly. However, it doesn't seem to be stable, and when we add it to the spark-submit it doesn't work either. Also, although it works in roughly 75% of the tests for a HEAD request, I haven't seen it work for a PUT /my-index?master_timeout=10000ms&timeout=10000ms request, which always keeps hanging after the index creation.
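For reference, that PUT request is an index creation with explicit timeouts. In the spark-shell it can be reproduced with something like the sketch below, reusing the client from the earlier snippet (again an illustration rather than our exact code):

    // Maps to PUT /my-index?master_timeout=10000ms&timeout=10000ms;
    // the index does get created on the Elasticsearch side, but the call never returns.
    client.indices().create(b =>
      b.index("my-index")
        .masterTimeout(t => t.time("10000ms"))
        .timeout(t => t.time("10000ms"))
    )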
Besides the spark-shell debugging, I have the following information to share:
Resource usage of a healthy executor:
[screenshot: successful job]
versus an unhealthy one (both from a similar job):
[screenshot: hanging job]
The CPU usage drops instead of the job executing the final bits of code, and then it just keeps hanging.
Because of this I tried playing around with resources (increasing and decreasing CPU and memory), tried using 1 executor per driver instead of 2, and played around with the following settings:
spark.executor.heartbeatInterval=10s
spark.network.timeout=100s
spark.dynamicAllocation.executorIdleTimeout=10s
and we rebooted the cluster, but none of these potential solutions has worked for us.
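For completeness, this is roughly how we passed those settings (application jar, class, and cluster-specific flags omitted); the maxThreads option from the spark-shell tests was also added this way:

    spark-submit \
      --conf spark.executor.heartbeatInterval=10s \
      --conf spark.network.timeout=100s \
      --conf spark.dynamicAllocation.executorIdleTimeout=10s \
      --conf "spark.driver.extraJavaOptions=-Dscala.concurrent.context.maxThreads=16" \
      ...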
These jobs used to work on our systems with the same Spark, Java, Elasticsearch, and Kubernetes versions, so I'm not sure why this is suddenly happening.
The issues started on the 13th of August.
We are currently working with the following versions: Spark 3.3.1, Elasticsearch 8.15.0, and Kubernetes 1.24.17.
Has anyone seen this before, or does anyone have ideas on how to solve our issue?
Thanks!
Eline
We solved the issue: in the current version of our project we still had a
relocate 'org.apache.http', 'xxx'
enabled in a build.gradle file, which should have been removed in a previous version of the project. After removing the relocate, the issue was gone. Root cause analysis is still ongoing.
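For anyone running into the same symptoms: a relocate like this typically sits in the Shadow plugin's shadowJar block, so the fix amounts to deleting (or commenting out) a single line, roughly as sketched below. This assumes the standard Shadow setup, and the target package stays redacted as 'xxx', as in the line above:

    shadowJar {
        // leftover relocation from an earlier version of the project; removing it resolved the hangs
        // relocate 'org.apache.http', 'xxx'
    }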