2017-04-12

I am running a Flink streaming job with parallelism 1, and the job fails on its own.

The job suddenly failed after about 8 hours. It shows:

Association with remote system [akka.tcp://[email protected]:44863] has failed, address is now gated for [5000] ms. Reason is: [Disassociated]. 
2017-04-12 00:48:36,683 INFO org.apache.flink.yarn.YarnJobManager       - Container container_e35_1491556562442_5086_01_000002 is completed with diagnostics: Container [pid=64750,containerID=container_e35_1491556562442_5086_01_000002] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 2.9 GB of 4.2 GB virtual memory used. Killing container. 
Dump of the process-tree for container_e35_1491556562442_5086_01_000002 : 
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE 
    |- 64750 64748 64750 64750 (bash) 0 0 108654592 306 /bin/bash -c /usr/java/jdk1.7.0_67-cloudera/bin/java -Xms724m -Xmx724m -XX:MaxDirectMemorySize=1448m -Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native/ -Dlog.file=/var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.log -Dlogback.configurationFile=file:logback.xml -Dlog4j.configuration=file:log4j.properties org.apache.flink.yarn.YarnTaskManagerRunner --configDir . 1> /var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.out 2> /var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.err 
    |- 64756 64750 64750 64750 (java) 269053 57593 2961149952 524252 /usr/java/jdk1.7.0_67-cloudera/bin/java -Xms724m -Xmx724m -XX:MaxDirectMemorySize=1448m -Djava.library.path=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native/ -Dlog.file=/var/log/hadoop-yarn/container/application_1491556562442_5086/container_e35_1491556562442_5086_01_000002/taskmanager.log -Dlogback.configurationFile=file:logback.xml -Dlog4j.configuration=file:log4j.properties org.apache.flink.yarn.YarnTaskManagerRunner --configDir . 

Container killed on request. Exit code is 143 
Container exited with a non-zero exit code 143 

There are no errors on the application/code side.

I need help understanding the possible causes.

Answer


The job was killed because it exceeded the memory limit configured in YARN. See this part of your error message:

Container [pid=64750,containerID=container_e35_1491556562442_5086_01_000002] is running beyond physical memory limits. Current usage: 2.0 GB of 2 GB physical memory used; 2.9 GB of 4.2 GB virtual memory used. Killing container. 
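A common remedy is to request larger containers and/or reserve more of the container memory for off-heap usage (direct buffers, native memory), so the JVM stays under the YARN limit. A minimal sketch, assuming a Flink 1.x-on-YARN setup; the values are illustrative and the exact keys should be checked against your Flink version's configuration reference:

```
# flink-conf.yaml (illustrative values)

# Memory per TaskManager container requested from YARN (MB):
taskmanager.heap.mb: 4096

# Fraction of the container memory held back from the JVM heap
# as headroom for off-heap allocations; raising it reduces the
# chance of YARN killing the container for exceeding its limit:
containerized.heap-cutoff-ratio: 0.3

# Minimum cutoff in MB, applied when the ratio yields less:
containerized.heap-cutoff-min: 600
```

The same container size can also be passed on the command line when submitting, e.g. `./bin/flink run -m yarn-cluster -yn 1 -ytm 4096 your-job.jar` (the `-ytm` flag sets the TaskManager container memory in MB). In your log, the container limit was 2 GB and usage hit exactly 2.0 GB, so either value above gives the off-heap side more room.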

Could this be caused by the application, or by memory consumption in some YARN resource-management process? I am running the job with parallelism 1. – Sohi


I tried monitoring the TaskManager with jmap but found nothing that could cause an out-of-memory condition. There are also no out-of-memory errors in the logs. – Sohi


I tried running the container with 4 GB of memory. This time the job ran for 20 hours and then failed with the same exception. The only thing I noticed is that PermGen space grew by 15 MB. – Sohi