


Why does the Spark job fail with "Exit code: 52"?

I had a Spark job fail with a trace like this:

./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-Container id: container_1455622885057_0016_01_000008
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-Exit code: 52
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr:Stack trace: ExitCodeException exitCode=52:
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at org.apache.hadoop.util.Shell.run(Shell.java:456)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-	at java.lang.Thread.run(Thread.java:745)
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-
./containers/application_1455622885057_0016/container_1455622885057_0016_01_000001/stderr-Container exited with a non-zero exit code 52

It took me a while to figure out what "exit code 52" means, so I'm posting it here for the benefit of others who might be searching for it.


Exit code 52 comes from org.apache.spark.util.SparkExitCode, where it is declared as val OOM = 52 - that is, an OutOfMemoryError. Which makes sense, since I also find this in the container logs:

16/02/16 17:09:59 ERROR executor.Executor: Managed memory leak detected; size = 4823704883 bytes, TID = 3226
16/02/16 17:09:59 ERROR executor.Executor: Exception in task 26.0 in stage 2.0 (TID 3226)
java.lang.OutOfMemoryError: Unable to acquire 1248 bytes of memory, got 0
	at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:120)
	at org.apache.spark.shuffle.sort.ShuffleExternalSorter.acquireNewPageIfNecessary(ShuffleExternalSorter.java:354)
	at org.apache.spark.shuffle.sort.ShuffleExternalSorter.insertRecord(ShuffleExternalSorter.java:375)
	at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.insertRecordIntoSorter(UnsafeShuffleWriter.java:237)
	at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:164)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
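For reference, the relevant constants live in org.apache.spark.util.SparkExitCode. Roughly, the object looks like the sketch below (paraphrased from memory; the comments are my own summary, so check the SparkExitCode source in your Spark version for the authoritative definitions):

package org.apache.spark.util

private[spark] object SparkExitCode {
  /** The default uncaught exception handler was reached. */
  val UNCAUGHT_EXCEPTION = 50

  /** The default uncaught exception handler was reached, and a second exception
   *  was thrown while logging the first one. */
  val UNCAUGHT_EXCEPTION_TWICE = 51

  /** The default uncaught exception handler was reached, and the uncaught
   *  exception was an OutOfMemoryError - this is the exit code 52 seen above. */
  val OOM = 52
}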

(Note that at this point I'm not sure whether the problem is in my own code or in Tungsten's memory leaks, but that's a different issue.)
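If the OOM does turn out to be on my side rather than a Tungsten bug, the usual knobs are the executor memory settings. A minimal, hypothetical sketch using standard Spark 1.6-era configuration keys (the values are placeholders, not recommendations for any particular cluster):

import org.apache.spark.{SparkConf, SparkContext}

object ExitCode52Investigation {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("exit-code-52-investigation")
      .set("spark.executor.memory", "8g")                // executor JVM heap
      .set("spark.yarn.executor.memoryOverhead", "1024") // off-heap headroom YARN adds, in MB
      .set("spark.memory.fraction", "0.6")               // share of heap for execution + storage

    val sc = new SparkContext(conf)
    // ... job code ...
    sc.stop()
  }
}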