What causes the "container killed" error in Spark 3.2?
I read a Hive table with Spark 3.2, and the YARN log reports the error: CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM. When I check the corresponding container log for that time, I find the actual error is: container is running beyond physical memory limits. Current usage: 7.1 GB of 7 GB physical memory used; 10.0 GB of 14.7 GB virtual memory used. Killing container. However, with everything else unchanged, switching the Spark version back to 2.3 makes the program run normally.
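For reference, this is a minimal sketch of how I understand YARN arrives at the 7 GB container limit (roughly spark.executor.memory plus spark.executor.memoryOverhead). The memory values, app name, and table name below are illustrative assumptions, not my actual job configuration, and in practice these settings would be passed via spark-submit rather than set inline:

    // Illustrative sketch only: values below are assumptions, not the real job config.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hive-read-example")                     // hypothetical app name
      .config("spark.executor.memory", "6g")            // assumed executor heap
      .config("spark.executor.memoryOverhead", "1g")    // 6g + 1g ~= the 7 GB container limit YARN enforces
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical table, standing in for the Hive table the job actually reads.
    val df = spark.sql("SELECT * FROM some_db.some_table")
    df.show()

If the total usage (heap plus off-heap/native memory) exceeds that container limit, YARN kills the container, which is what the "7.1 GB of 7 GB physical memory used" message indicates.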