I have given an entire folder as the input to my MR job. While running the MapReduce program I get a "Java heap space" OutOfMemoryError.

I am using CombineFileBinaryInputFormat (which extends CombineFileInputFormat) as the input format for my MR job. In the constructor of my CombineFileBinaryInputFormat I call setMaxSplitSize(262144000), since my block size is 250 MB. The file splitting happens packet by packet; should I add a check somewhere to verify that the limit of 250 MB is not exceeded, or is that handled implicitly? The full code is available here.

However, I am hitting the "Java heap space" error while running the MapReduce program.

The relevant part of the code, for reference:

public class CombineBinaryInputFormat extends CombineFileInputFormat<KeyWritable, ValueWritable> {

    public CombineBinaryInputFormat() {
        super();
        // Cap each combined split at 262144000 bytes (250 MB), matching the block size.
        setMaxSplitSize(262144000);
    }
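
To verify the cap, here is a minimal sketch that logs the size of each combined split produced by getSplits; the subclass name is made up, and it assumes CombineBinaryInputFormat implements createRecordReader as in the full code. The CombineFileSplit calls are standard Hadoop API.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

    // Hypothetical subclass used only to inspect the splits built by the parent class.
    public class SplitSizeLoggingInputFormat extends CombineBinaryInputFormat {

        @Override
        public List<InputSplit> getSplits(JobContext context) throws IOException {
            List<InputSplit> splits = super.getSplits(context);
            for (InputSplit split : splits) {
                CombineFileSplit cfs = (CombineFileSplit) split;
                // Each combined split should stay at or below setMaxSplitSize(262144000).
                System.out.println("Combined split: " + cfs.getNumPaths()
                        + " files, " + cfs.getLength() + " bytes");
            }
            return splits;
        }
    }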

My StackTrace: 
============== 
    15/05/05 11:52:47 INFO input.FileInputFormat: Total input paths to process : 318 
    15/05/05 11:52:47 INFO input.CombineFileInputFormat: DEBUG: Terminated node allocation with : CompletedNodes: 1, size left: 52027734 
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: number of splits:1 
    15/05/05 11:52:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local634564612_0001 
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
    15/05/05 11:52:47 WARN conf.Configuration: file:/app/hadoop/tmp/mapred/staging/raghuveer634564612/.staging/job_local634564612_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 
    15/05/05 11:52:48 WARN conf.Configuration: file:/var/hadoop/mapreduce/localRunner/raghuveer/job_local634564612_0001/job_local634564612_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 
    15/05/05 11:52:48 INFO mapreduce.Job: The url to track the job: http://localhost:8080/ 
    15/05/05 11:52:48 INFO mapreduce.Job: Running job: job_local634564612_0001 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter set in config null 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Waiting for map tasks 
    15/05/05 11:52:48 INFO mapred.LocalJobRunner: Starting task: attempt_local634564612_0001_m_000000_0 
    15/05/05 11:52:48 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ] 
    15/05/05 11:52:48 INFO mapred.MapTask: Processing split: Paths:/user/usr/local/upload/20120713T07-45-42.682358000Z_79.150.138.86-1412.c2s_ndttrace:0+78550,/user/usr/local/upload/20120713T07-45-43.356723000Z_151.40.240.66-53426.c2s_ndttrace:0+32768,/user/usr/local/upload/20120713T07-45-43.718556000Z_85.26.235.102-25300.c2s_ndttrace:0+10130,/user/usr/local/upload 
     ..... 
     ..... 
     ..... 
/20120713T08-33-41.259331000Z_84.122.129.103-61321.c2s_ndttrace:0+19148,/user/usr/local/upload/20120713T08-33-54.972649000Z_86.69.144.214-49599.c2s_ndttrace:0+63014,/user/usr/local/upload/20120713T08-33-56.162340000Z_41.143.91.156-50785.c2s_ndttrace:0+13658,/user/usr/local/upload/20120713T08-33-59.768261000Z_31.187.12.141-50274.c2s_ndttrace:0+126542,/user/usr/local/upload/20120713T08-34-03.950055000Z_78.119.172.109-51495.c2s_ndttrace:0+92676,/user/usr/local/upload/20120713T08-34-08.378534000Z_87.7.113.115-62238.c2s_ndttrace:0+49410,/user/usr/local/upload/20120713T08-34-26.258570000Z_151.13.227.66-33198.c2s_ndttrace:0+2666092 
    15/05/05 11:52:49 INFO mapreduce.Job: Job job_local634564612_0001 running in uber mode : false 
    15/05/05 11:52:49 INFO mapreduce.Job: map 0% reduce 0% 
    15/05/05 11:52:50 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer 
    15/05/05 11:52:53 INFO mapred.MapTask: (EQUATOR) 0 kvi 78643196(314572784) 
    15/05/05 11:52:53 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 300 
    15/05/05 11:52:53 INFO mapred.MapTask: soft limit at 251658240 
    15/05/05 11:52:53 INFO mapred.MapTask: bufstart = 0; bufvoid = 314572800 
    15/05/05 11:52:53 INFO mapred.MapTask: kvstart = 78643196; length = 19660800 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (82) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:54 WARN pcap.PcapReader: Payload start (74) is larger than packet data (68). Returning empty payload. 
    15/05/05 11:52:55 INFO mapred.MapTask: Starting flush of map output 
    15/05/05 11:52:55 INFO mapred.MapTask: Spilling map output 
    15/05/05 11:52:55 INFO mapred.MapTask: bufstart = 0; bufend = 105296; bufvoid = 314572800 
    15/05/05 11:52:55 INFO mapred.MapTask: kvstart = 78643196(314572784); kvend = 78637988(314551952); length = 5209/19660800 
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map > map 
    15/05/05 11:52:55 INFO mapred.MapTask: Finished spill 0 
    15/05/05 11:52:55 INFO mapred.LocalJobRunner: map task executor complete. 
    15/05/05 11:52:55 WARN mapred.LocalJobRunner: job_local634564612_0001 
    java.lang.Exception: java.lang.OutOfMemoryError: Java heap space 
     at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
    Caused by: java.lang.OutOfMemoryError: Java heap space 
     at net.ripe.hadoop.pcap.PcapReader.nextPacket(PcapReader.java:208) 
     at net.ripe.hadoop.pcap.PcapReader.access$0(PcapReader.java:173) 
     at net.ripe.hadoop.pcap.PcapReader$PacketIterator.fetchNext(PcapReader.java:554) 
     at net.ripe.hadoop.pcap.PcapReader$PacketIterator.hasNext(PcapReader.java:559) 
     at net.ripe.hadoop.pcap.io.reader.PcapRecordReader.nextKeyValue(PcapRecordReader.java:57) 
     at net.ripe.hadoop.pcap.io.reader.CombineBinaryRecordReader.nextKeyValue(CombineBinaryRecordReader.java:42) 
     at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69) 
     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533) 
     at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80) 
     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91) 
     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144) 
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) 
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340) 
     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 
    15/05/05 11:52:56 INFO mapreduce.Job: Job job_local634564612_0001 failed with state FAILED due to: NA 
    15/05/05 11:52:56 INFO mapreduce.Job: Counters: 25 
     File System Counters 
      FILE: Number of bytes read=29002348 
      FILE: Number of bytes written=29450636 
      FILE: Number of read operations=0 
      FILE: Number of large read operations=0 
      FILE: Number of write operations=0 
      HDFS: Number of bytes read=103142 
      HDFS: Number of bytes written=0 
      HDFS: Number of read operations=6 
      HDFS: Number of large read operations=0 
      HDFS: Number of write operations=1 
     Map-Reduce Framework 
      Map input records=1303 
      Map output records=1303 
      Map output bytes=105296 
      Map output materialized bytes=0 
      Input split bytes=38078 
      Combine input records=0 
      Spilled Records=0 
      Failed Shuffles=0 
      Merged Map outputs=0 
      GC time elapsed (ms)=593 
      CPU time spent (ms)=0 
      Physical memory (bytes) snapshot=0 
      Virtual memory (bytes) snapshot=0 
      Total committed heap usage (bytes)=1745092608 
     File Input Format Counters 
      Bytes Read=0 

Here I am feeding several hundred files as input to the MapReduce job. I am using the default block size of 64 MB, I have 4 GB of RAM, and I am running Hadoop on a 32-bit system. Now I am facing the Java heap space error. Is there any solution to this problem when hundreds of files are given as input to the MR job, with a 64 MB block size, CombineFileInputFormat, and 4 GB of RAM?

Please advise me on this issue...
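
For context, a hedged driver sketch of the settings that are commonly adjusted in this situation; the property names are standard MRv2 configuration keys, but the driver class and the chosen values are illustrative and not taken from my actual code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class PcapJobDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Raise the map task heap (MRv2 key; older releases use mapred.child.java.opts).
            conf.set("mapreduce.map.java.opts", "-Xmx2048m");
            // The log above shows mapreduce.task.io.sort.mb = 300; that 300 MB sort buffer
            // competes with the PcapReader buffers for heap, so it can be lowered.
            conf.setInt("mapreduce.task.io.sort.mb", 100);

            Job job = Job.getInstance(conf, "pcap-combine");
            // ... remaining job setup (input format, mapper, output paths) as in the full code ...
        }
    }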

How many files and how many blocks? –

Number of files: 318, number of blocks: 1 (default block size: 64 MB), Hadoop running on a 32-bit system –

Answer

As far as the logic goes... split size will not cause a Java heap space error.

It must be something in your code logic that is aggregating too much data in memory for a given key.

Can you provide the stack trace for further analysis?

'Split size will never cause a Java heap space error' – I disagree; CombineFileInputFormat is often the first thing to run out of memory, depending on how many files are fed into the job. –

It has nothing to do with the combine file input format, because the input format only decides how the input is split and how records are read (the record reader). Providing the stack trace will confirm this. – KrazyGautam

'CombineFileInputFormat' batches files together, hence the name. Depending on how many files need to be combined, batching them together takes quite a bit of RAM. –
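
To illustrate that point, the same cap can also be supplied through the job configuration rather than the input format's constructor; as far as I know, CombineFileInputFormat falls back to this property when setMaxSplitSize() is not called. A hedged sketch, class name hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SplitSizeConfigExample {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "combine-split-size");
            // A smaller max split size means fewer files batched into each combined split.
            // Only takes effect if setMaxSplitSize() is not already called in the input format.
            job.getConfiguration().setLong(
                    "mapreduce.input.fileinputformat.split.maxsize", 64L * 1024 * 1024);
        }
    }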
