2013-07-04

Cloudera Hadoop MapReduce job: "GC overhead limit exceeded" error

I am running a canopy clustering job (using Mahout) on Cloudera CDH4. The data to be clustered is about 1M records (each record under 1K in size). The whole Hadoop environment (including all nodes) runs in 4G of memory. The CDH4 installation is the default one. The following exception occurs when I run the job.

Judging by the exception, the job seems to need a larger JVM heap size. However, there are many JVM heap size configuration options in Cloudera Manager. I changed "Client Java Heap Size in Bytes" from 256 MiB to 512 MiB, but it didn't help.

Any hints/tips on setting these heap size options?

13/07/03 17:12:45 INFO input.FileInputFormat: Total input paths to process : 1 
13/07/03 17:12:46 INFO mapred.JobClient: Running job: job_201307031710_0001 
13/07/03 17:12:47 INFO mapred.JobClient: map 0% reduce 0% 
13/07/03 17:13:06 INFO mapred.JobClient: map 1% reduce 0% 
13/07/03 17:13:27 INFO mapred.JobClient: map 2% reduce 0% 
13/07/03 17:14:01 INFO mapred.JobClient: map 3% reduce 0% 
13/07/03 17:14:50 INFO mapred.JobClient: map 4% reduce 0% 
13/07/03 17:15:50 INFO mapred.JobClient: map 5% reduce 0% 
13/07/03 17:17:06 INFO mapred.JobClient: map 6% reduce 0% 
13/07/03 17:18:44 INFO mapred.JobClient: map 7% reduce 0% 
13/07/03 17:20:24 INFO mapred.JobClient: map 8% reduce 0% 
13/07/03 17:22:20 INFO mapred.JobClient: map 9% reduce 0% 
13/07/03 17:25:00 INFO mapred.JobClient: map 10% reduce 0% 
13/07/03 17:28:08 INFO mapred.JobClient: map 11% reduce 0% 
13/07/03 17:31:46 INFO mapred.JobClient: map 12% reduce 0% 
13/07/03 17:35:57 INFO mapred.JobClient: map 13% reduce 0% 
13/07/03 17:40:52 INFO mapred.JobClient: map 14% reduce 0% 
13/07/03 17:46:55 INFO mapred.JobClient: map 15% reduce 0% 
13/07/03 17:55:02 INFO mapred.JobClient: map 16% reduce 0% 
13/07/03 18:08:42 INFO mapred.JobClient: map 17% reduce 0% 
13/07/03 18:59:11 INFO mapred.JobClient: map 8% reduce 0% 
13/07/03 18:59:13 INFO mapred.JobClient: Task Id : attempt_201307031710_0001_m_000001_0, Status : FAILED 
Error: GC overhead limit exceeded 
13/07/03 18:59:23 INFO mapred.JobClient: map 9% reduce 0% 
13/07/03 19:00:09 INFO mapred.JobClient: map 10% reduce 0% 
13/07/03 19:01:49 INFO mapred.JobClient: map 11% reduce 0% 
13/07/03 19:04:25 INFO mapred.JobClient: map 12% reduce 0% 
13/07/03 19:07:48 INFO mapred.JobClient: map 13% reduce 0% 
13/07/03 19:12:48 INFO mapred.JobClient: map 14% reduce 0% 
13/07/03 19:19:46 INFO mapred.JobClient: map 15% reduce 0% 
13/07/03 19:29:05 INFO mapred.JobClient: map 16% reduce 0% 
13/07/03 19:43:43 INFO mapred.JobClient: map 17% reduce 0% 
13/07/03 20:49:36 INFO mapred.JobClient: map 8% reduce 0% 
13/07/03 20:49:38 INFO mapred.JobClient: Task Id : attempt_201307031710_0001_m_000001_1, Status : FAILED 
Error: GC overhead limit exceeded 
13/07/03 20:49:48 INFO mapred.JobClient: map 9% reduce 0% 
13/07/03 20:50:31 INFO mapred.JobClient: map 10% reduce 0% 
13/07/03 20:52:08 INFO mapred.JobClient: map 11% reduce 0% 
13/07/03 20:54:38 INFO mapred.JobClient: map 12% reduce 0% 
13/07/03 20:58:01 INFO mapred.JobClient: map 13% reduce 0% 
13/07/03 21:03:01 INFO mapred.JobClient: map 14% reduce 0% 
13/07/03 21:10:10 INFO mapred.JobClient: map 15% reduce 0% 
13/07/03 21:19:54 INFO mapred.JobClient: map 16% reduce 0% 
13/07/03 21:31:35 INFO mapred.JobClient: map 8% reduce 0% 
13/07/03 21:31:37 INFO mapred.JobClient: Task Id : attempt_201307031710_0001_m_000000_0, Status : FAILED 
java.lang.Throwable: Child Error 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250) 
Caused by: java.io.IOException: Task process exit with nonzero status of 65. 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237) 

13/07/03 21:32:09 INFO mapred.JobClient: map 9% reduce 0% 
13/07/03 21:33:31 INFO mapred.JobClient: map 10% reduce 0% 
13/07/03 21:35:42 INFO mapred.JobClient: map 11% reduce 0% 
13/07/03 21:38:41 INFO mapred.JobClient: map 12% reduce 0% 
13/07/03 21:42:27 INFO mapred.JobClient: map 13% reduce 0% 
13/07/03 21:48:20 INFO mapred.JobClient: map 14% reduce 0% 
13/07/03 21:56:12 INFO mapred.JobClient: map 15% reduce 0% 
13/07/03 22:07:20 INFO mapred.JobClient: map 16% reduce 0% 
13/07/03 22:26:36 INFO mapred.JobClient: map 17% reduce 0% 
13/07/03 23:35:30 INFO mapred.JobClient: map 8% reduce 0% 
13/07/03 23:35:32 INFO mapred.JobClient: Task Id : attempt_201307031710_0001_m_000000_1, Status : FAILED 
Error: GC overhead limit exceeded 
13/07/03 23:35:42 INFO mapred.JobClient: map 9% reduce 0% 
13/07/03 23:36:16 INFO mapred.JobClient: map 10% reduce 0% 
13/07/03 23:38:01 INFO mapred.JobClient: map 11% reduce 0% 
13/07/03 23:40:47 INFO mapred.JobClient: map 12% reduce 0% 
13/07/03 23:44:44 INFO mapred.JobClient: map 13% reduce 0% 
13/07/03 23:50:42 INFO mapred.JobClient: map 14% reduce 0% 
13/07/03 23:58:58 INFO mapred.JobClient: map 15% reduce 0% 
13/07/04 00:10:22 INFO mapred.JobClient: map 16% reduce 0% 
13/07/04 00:21:38 INFO mapred.JobClient: map 7% reduce 0% 
13/07/04 00:21:40 INFO mapred.JobClient: Task Id : attempt_201307031710_0001_m_000001_2, Status : FAILED 
java.lang.Throwable: Child Error 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250) 
Caused by: java.io.IOException: Task process exit with nonzero status of 65. 
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237) 

13/07/04 00:21:50 INFO mapred.JobClient: map 8% reduce 0% 
13/07/04 00:22:27 INFO mapred.JobClient: map 9% reduce 0% 
13/07/04 00:23:52 INFO mapred.JobClient: map 10% reduce 0% 
13/07/04 00:26:00 INFO mapred.JobClient: map 11% reduce 0% 
13/07/04 00:28:47 INFO mapred.JobClient: map 12% reduce 0% 
13/07/04 00:32:17 INFO mapred.JobClient: map 13% reduce 0% 
13/07/04 00:37:34 INFO mapred.JobClient: map 14% reduce 0% 
13/07/04 00:44:30 INFO mapred.JobClient: map 15% reduce 0% 
13/07/04 00:54:28 INFO mapred.JobClient: map 16% reduce 0% 
13/07/04 01:16:30 INFO mapred.JobClient: map 17% reduce 0% 
13/07/04 01:32:05 INFO mapred.JobClient: map 8% reduce 0% 
13/07/04 01:32:08 INFO mapred.JobClient: Task Id : attempt_201307031710_0001_m_000000_2, Status : FAILED 
Error: GC overhead limit exceeded 
13/07/04 01:32:21 INFO mapred.JobClient: map 9% reduce 0% 
13/07/04 01:33:26 INFO mapred.JobClient: map 10% reduce 0% 
13/07/04 01:35:37 INFO mapred.JobClient: map 11% reduce 0% 
13/07/04 01:38:48 INFO mapred.JobClient: map 12% reduce 0% 
13/07/04 01:43:06 INFO mapred.JobClient: map 13% reduce 0% 
13/07/04 01:49:58 INFO mapred.JobClient: map 14% reduce 0% 
13/07/04 01:59:07 INFO mapred.JobClient: map 15% reduce 0% 
13/07/04 02:12:00 INFO mapred.JobClient: map 16% reduce 0% 
13/07/04 02:37:56 INFO mapred.JobClient: map 17% reduce 0% 
13/07/04 03:31:55 INFO mapred.JobClient: map 8% reduce 0% 
13/07/04 03:32:00 INFO mapred.JobClient: Job complete: job_201307031710_0001 
13/07/04 03:32:00 INFO mapred.JobClient: Counters: 7 
13/07/04 03:32:00 INFO mapred.JobClient: Job Counters 
13/07/04 03:32:00 INFO mapred.JobClient:  Failed map tasks=1 
13/07/04 03:32:00 INFO mapred.JobClient:  Launched map tasks=8 
13/07/04 03:32:00 INFO mapred.JobClient:  Data-local map tasks=8 
13/07/04 03:32:00 INFO mapred.JobClient:  Total time spent by all maps in occupied slots (ms)=11443502 
13/07/04 03:32:00 INFO mapred.JobClient:  Total time spent by all reduces in occupied slots (ms)=0 
13/07/04 03:32:00 INFO mapred.JobClient:  Total time spent by all maps waiting after reserving slots (ms)=0 
13/07/04 03:32:00 INFO mapred.JobClient:  Total time spent by all reduces waiting after reserving slots (ms)=0 
Exception in thread "main" java.lang.RuntimeException: java.lang.InterruptedException: Canopy Job failed processing vector 

Does your application actually need that much memory? If not, the application may have a bug that is eating up all the memory. – zsxwing


It is running Mahout canopy clustering, so it shouldn't be an application bug. I can see that each child JVM is allocated about 200MB, which may not be enough in my case. – Robin


@zsxwing You should write it as "-Xmx1024M" for exactly this reason: you put one zero too many in there. That's 10.24G –

Answers

0

You need to change Hadoop's memory settings: the memory allocated to Hadoop is not enough for the requirements of the running job. Try increasing the heap memory and verify. Also note that under memory pressure the operating system may kill the task process, which would make the job fail.
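A rough back-of-envelope check, using only the numbers stated in the question (1M records, each under 1K), suggests the raw input alone is close to a gigabyte, so a ~200 MiB child heap leaves very little headroom:

```python
# Back-of-envelope memory estimate (figures taken from the question;
# the 2-4x object-overhead multiplier is an assumed rule of thumb).
records = 1_000_000
bytes_per_record = 1_000          # upper bound stated in the question
total_mb = records * bytes_per_record / (1024 * 1024)
print(f"Raw input is roughly {total_mb:.0f} MiB")
# On top of this, Java object overhead often inflates the in-memory
# footprint by 2-4x, so a 200 MiB task heap is almost certainly too small
# if the mapper retains a significant fraction of the vectors.
```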

2

Mahout jobs are very memory-intensive. I don't know whether the mappers or the reducers are the culprit, but either way, you will have to tell Hadoop to give them more memory. "GC overhead limit exceeded" is just a way of saying "out of memory": it means the JVM gave up trying to reclaim the last 0.01% of available RAM.

Exactly how to set this is a bit complicated, because there are several properties involved and they changed in Hadoop 2. CDH4 can run either Hadoop 1 or 2 — which one are you using?

If I had to guess: set mapreduce.child.java.opts to -Xmx1g. But the right answer really depends on your version and your data.
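As a sketch, under MR1 this would go into mapred-site.xml with the property name mapred.child.java.opts; the -Xmx1g value is a starting point, not a tuned figure, and under YARN/MR2 the split properties mapreduce.map.java.opts and mapreduce.reduce.java.opts apply instead:

```xml
<!-- mapred-site.xml (MR1): give each task JVM a 1 GiB heap.
     Illustrative value only; tune to your data and node memory. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1g</value>
</property>
```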
