I read a Hive table with Spark 3.2, and the YARN log reports the error: CoarseGrainedExecutorBackend: RECEIVED SIGNAL TERM. Looking at the corresponding container log from that time, I find the actual error is: container is running beyond physical memory limits. Current usage: 7.1 GB of 7 GB physical memory used; 10.0 GB of 14.7 GB virtual memory used. Killing container. However, with everything else kept the same, switching the Spark version to 2.3 makes the program run normally.
The failing spark-submit (Spark 3.2) looks like this:
/opt/spark32/bin/spark-submit --queue my_queue --class org.my.Query6m --name 'query6m' --driver-memory 4G --conf spark.driver.memoryOverhead=1024 --conf spark.executor.memoryOverhead=1024 --num-executors 150 --executor-memory 4G --executor-cores 2 --conf spark.yarn.maxAppAttempts=1 --conf spark.default.parallelism=1000 --conf spark.network.timeout=800s --conf spark.rpc.askTimeout=800s --conf spark.sql.hive.caseSensitiveInferenceMode=NEVER_INFER --master yarn --deploy-mode cluster
The working spark-submit (Spark 2.3) looks like this:
/opt/spark23/bin/spark-submit --queue my_queue --class org.my.Query6m --name 'query6m' --driver-memory 4G --conf spark.driver.memoryOverhead=1024 --conf spark.executor.memoryOverhead=1024 --num-executors 150 --executor-memory 4G --executor-cores 2 --conf spark.yarn.maxAppAttempts=1 --conf spark.default.parallelism=1000 --conf spark.network.timeout=800s --conf spark.rpc.askTimeout=800s --conf spark.sql.hive.caseSensitiveInferenceMode=NEVER_INFER --master yarn --deploy-mode cluster
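For context, this is my understanding of how these flags should translate into the YARN executor container size (please correct me if the arithmetic is wrong):

requested container memory ≈ spark.executor.memory + spark.executor.memoryOverhead
                           = 4096 MB + 1024 MB = 5120 MB (5 GB)

so I would have expected a 5 GB cap, whereas the killed container in the log below has a 7 GB limit.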
The detailed container log:
2024-07-15 14:19:51,510 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 51848 for container-id container_e106_1720079982849_57104_01_000112: 7.1 GB of 7 GB physical memory used; 10.0 GB of 14.7 GB virtual memory used
2024-07-15 14:19:51,510 WARN monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:isProcessTreeOverLimit(327)) - Process tree for container: container_e106_1720079982849_57104_01_000112 has processes older than 1 iteration running over the configured limit. Limit=7516192768, current usage = 7648337920
2024-07-15 14:19:51,510 WARN monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(516)) - Container [pid=51848,containerID=container_e106_1720079982849_57104_01_000112] is running beyond physical memory limits. Current usage: 7.1 GB of 7 GB physical memory used; 10.0 GB of 14.7 GB virtual memory used. Killing container.
Dump of the process-tree for container_e106_1720079982849_57104_01_000112 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 51848 51846 51848 51848 (bash) 0 0 116019200 368 /bin/bash -c LD_LIBRARY_PATH="/usr/hdp/current/hadoop-client/lib/native:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:::/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native::/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.6.0.3-8/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.6.0.3-8/hadoop/lib/native" /usr/java/latest/bin/java -server -Xmx6144m '-javaagent:lcc-jvm-profiler-1.0.jar=reporter=http://hbo-open.xsbi.my_test.com' -Djava.io.tmpdir=/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/tmp '-Dspark.rpc.askTimeout=800s' '-Dspark.driver.port=32780' '-Dspark.history.ui.port=18084' '-Dspark.ui.port=44446' '-Dspark.network.timeout=800s' -Dspark.yarn.app.container.log.dir=/app/log/hadoop-yarn/container/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112 -XX:OnOutOfMemoryError='kill %p' org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@hzxs-bi-hadoop-open645:32780 --executor-id 44 --hostname hzxs-bi-hadoop-open580 --cores 2 --app-id application_1720079982849_57104 --resourceProfileId 0 --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/__app__.jar --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/lcc-hook-spark_with_agent-3.2.2-1.0.jar --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/lcc-jvm-profiler-1.0.jar --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/lcc-spark_with_am_322-1.0.jar 1>/app/log/hadoop-yarn/container/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/stdout 2>/app/log/hadoop-yarn/container/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/stderr
|- 51861 51848 51848 51848 (java) 5647 2695 10590474240 1866902 /usr/java/latest/bin/java -server -Xmx6144m -javaagent:lcc-jvm-profiler-1.0.jar=reporter=http://hbo-open.xsbi.my_test.com -Djava.io.tmpdir=/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/tmp -Dspark.rpc.askTimeout=800s -Dspark.driver.port=32780 -Dspark.history.ui.port=18084 -Dspark.ui.port=44446 -Dspark.network.timeout=800s -Dspark.yarn.app.container.log.dir=/app/log/hadoop-yarn/container/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112 -XX:OnOutOfMemoryError=kill %p org.apache.spark.executor.YarnCoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@hzxs-bi-hadoop-open645:32780 --executor-id 44 --hostname hzxs-bi-hadoop-open580 --cores 2 --app-id application_1720079982849_57104 --resourceProfileId 0 --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/__app__.jar --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/lcc-hook-spark_with_agent-3.2.2-1.0.jar --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/lcc-jvm-profiler-1.0.jar --user-class-path file:/dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112/lcc-spark_with_am_322-1.0.jar
2024-07-15 14:19:51,510 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(527)) - Removed ProcessTree with root 51848
2024-07-15 14:19:51,511 INFO container.ContainerImpl (ContainerImpl.java:handle(1163)) - Container container_e106_1720079982849_57104_01_000112 transitioned from RUNNING to KILLING
2024-07-15 14:19:51,511 INFO launcher.ContainerLaunch (ContainerLaunch.java:cleanupContainer(425)) - Cleaning up container container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,523 WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(249)) - Exit code from container container_e106_1720079982849_57104_01_000112 is : 143
2024-07-15 14:19:51,533 INFO monitor.ContainersMonitorImpl (ContainersMonitorImpl.java:run(464)) - Memory usage of ProcessTree 32044 for container-id container_e106_1720079982849_56945_01_035131: 2.8 GB of 4 GB physical memory used; 5.3 GB of 8.4 GB virtual memory used
2024-07-15 14:19:51,540 INFO container.ContainerImpl (ContainerImpl.java:handle(1163)) - Container container_e106_1720079982849_57104_01_000112 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2024-07-15 14:19:51,540 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datai/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,540 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datam/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,540 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datal/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,540 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datad/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,540 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /dataf/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datae/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datak/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datac/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datag/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datah/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /datab/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:deleteAsUser(492)) - Deleting absolute path : /dataj/yarn/nm/usercache/my_push/appcache/application_1720079982849_57104/container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO container.ContainerImpl (ContainerImpl.java:handle(1163)) - Container container_e106_1720079982849_57104_01_000112 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2024-07-15 14:19:51,541 INFO application.ApplicationImpl (ApplicationImpl.java:transition(347)) - Removing container_e106_1720079982849_57104_01_000112 from application application_1720079982849_57104
2024-07-15 14:19:51,541 INFO logaggregation.AppLogAggregatorImpl (AppLogAggregatorImpl.java:startContainerLogAggregation(512)) - Considering container container_e106_1720079982849_57104_01_000112 for log-aggregation
2024-07-15 14:19:51,541 INFO containermanager.AuxServices (AuxServices.java:handle(215)) - Got event CONTAINER_STOP for appId application_1720079982849_57104
2024-07-15 14:19:51,541 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopContainer(190)) - Stopping container container_e106_1720079982849_57104_01_000112
2024-07-15 14:19:51,541 INFO yarn.YarnShuffleService (YarnShuffleService.java:stopContainer(293)) - Stopping container container_e106_1720079982849_57104_01_000112
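Sanity-checking the numbers in the process-tree dump (assuming a 4 KB page size, so this is just my own arithmetic):

total RSS = (1866902 + 368) pages * 4096 bytes = 7648337920 bytes ≈ 7.1 GB   (matches "current usage = 7648337920")
limit     = 7516192768 bytes = 7168 MB = 7 GB                                (matches "Limit=7516192768")
7168 MB   = the 6144 MB heap (-Xmx6144m) + the 1024 MB memoryOverhead

So the java process alone is essentially at the whole container limit, and the part above the 6 GB heap looks like off-heap/native memory that has outgrown the 1 GB overhead allowance.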
I'm trying to figure out why Spark 3.2 hits this error when everything else is identical, while Spark 2.3 does not.
These are the HDFS files backing the Hive table:
0 hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/_SUCCESS
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00000.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00001.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00002.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00003.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00004.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00005.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00006.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00007.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00008.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00009.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00010.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00011.zstd.parquet
1.3 G hdfs://my-hadoop/data/dept/bi/dws/gidinfo_6m/20240712/part-r-00012.zstd.parquet
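If it helps, this is a small diagnostic I can run (just a sketch, reusing the sparkSession and table from my code below) to see how many input partitions Spark 3.2 creates from these ~1.3 GB files and which split size it uses; the config key is a standard Spark setting, the printing is only illustrative:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// assuming sparkSession is the session built in the code below
Dataset<Row> df = sparkSession.sql("select * from bi_dws.gt_gidinfo_6m where day='20240712'");
System.out.println("input partitions: " + df.rdd().getNumPartitions());
System.out.println("spark.sql.files.maxPartitionBytes = " + sparkSession.conf().get("spark.sql.files.maxPartitionBytes"));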
My code:
SparkConf conf = new SparkConf();
conf.setAppName("query6m");
SparkSession sparkSession = SparkSession.builder()
        .config(conf)
        .enableHiveSupport()
        .getOrCreate();
sparkSession.sql("select * from bi_dws.gt_gidinfo_6m where day='20240712' and gid='ANDROID-00000ca775bf4edd8eff37244cf9f84b'").show(1000,false);
sparkSession.stop();
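Since I can't see any other difference between the two runs, I also plan to dump the memory-related settings the Spark 3.2 driver actually resolves. A minimal sketch (the config keys are standard Spark settings; the printing is just my illustration):

// assuming the sparkSession built above, before sparkSession.stop()
System.out.println(sparkSession.sparkContext().getConf().toDebugString());
System.out.println("spark.executor.memory = " + sparkSession.conf().get("spark.executor.memory", "(not set)"));
System.out.println("spark.executor.memoryOverhead = " + sparkSession.conf().get("spark.executor.memoryOverhead", "(not set)"));
System.out.println("spark.memory.offHeap.enabled = " + sparkSession.conf().get("spark.memory.offHeap.enabled", "(not set)"));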