I am running a Spark Scala job that creates several Hive tables. As far as I can tell, the job succeeds and all the tables are created. However, Spark is not removing the _temporary directories created while each table's data is being written.
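For context, here is a minimal sketch of how the tables are written; the table names and paths are placeholders, but the real job follows the same pattern:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object CreateTables {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("create-hive-tables")
      .enableHiveSupport()
      .getOrCreate()

    // Placeholder source path; the real job reads several inputs.
    val df = spark.read.parquet("/data/source/events")

    // Each table is written with saveAsTable. A _temporary directory
    // appears under the table's warehouse location while this runs
    // and is still there after the job finishes.
    df.write
      .mode(SaveMode.Overwrite)
      .format("parquet")
      .saveAsTable("analytics.events_daily") // placeholder table name

    spark.stop()
  }
}
```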
How can I prevent this, or troubleshoot the job to understand why it is happening? I have not noticed anything suspicious in the driver or executor logs. I'm running Spark 3.1.3.