2016-01-18 144 views

This is a follow-up to my previous thread, Java OutOfMemoryError while merge large file parts from chunked files.

I successfully ran the code from the answer to that question and merged a large set of chunk files into a single file without an OOME error. However, the resulting file is always corrupted and differs from the original.

Here is the original code from my earlier question:

private void mergeFile(String identifier, int totalFile, String outputFile) throws AppException{ 
    File[] fileDatas = new File[totalFile]; 
    byte fileContents[] = null; 
    int totalFileSize = 0; 
    int filePartUploadSize = 0; 
    int tempFileSize = 0; 

    for (int i = 0; i < totalFile; i++) { 
     fileDatas[i] = new File(identifier + "." + (i + 1)); 
     totalFileSize += fileDatas[i].length(); 
    } 

    try { 
     fileContents = new byte[totalFileSize]; // allocating memory to contain all the byte stream for further writing 
     InputStream inStream; 
     for (int j = 0; j < totalFile; j++) { 
      inStream = new BufferedInputStream(new FileInputStream(fileDatas[j])); 
      filePartUploadSize = (int) fileDatas[j].length(); 
      inStream.read(fileContents, tempFileSize, filePartUploadSize); 
      tempFileSize += filePartUploadSize; 
      inStream.close(); 
     } 
    } catch (FileNotFoundException ex) { 
     throw new AppException(AppExceptionCode.FILE_NOT_FOUND); 
    } catch (IOException ex) { 
     throw new AppException(AppExceptionCode.ERROR_ON_MERGE_FILE); 
    } finally { 
     write(fileContents, outputFile); 
     for (int l = 0; l < totalFile; l++) { 
      fileDatas[l].delete(); 
     } 
    } 
} 

private void write(byte[] DataByteArray, String DestinationFileName) throws AppException{ 
    try { 
     OutputStream output = null; 
     try { 
      output = new BufferedOutputStream(new FileOutputStream(DestinationFileName)); 
      output.write(DataByteArray); 
     } finally { 
      output.close(); 
     } 
    } catch (FileNotFoundException ex) { 
     throw new AppException(AppExceptionCode.FILE_NOT_FOUND, ex); 
    } catch (IOException ex) { 
     throw new AppException(AppExceptionCode.ERROR_ON_WRITE_FILE, ex); 
    } 
} 

As you can see from the previous question, I allocate a byte array large enough to hold all of the file parts, then write the whole array out to a single file. The benefit of this technique is that it avoids corrupting data during the merge. However, it gives me an OOME (OutOfMemoryError) when merging large files (> 1 GB).

Here is the merge method using Jim's answer:

private void merge(String identifier, int totalFile, String outputFile) throws AppException{ 
    ArrayList<File> files = new ArrayList<>();// put your files here 
    for (int i = 0; i < totalFile; i++) { 
     files.add(new File(identifier + "." + (i + 1))); 
    } 

    File output = new File(outputFile); 
    BufferedOutputStream boss = null; 
    try { 
     boss = new BufferedOutputStream(new FileOutputStream(output)); 
     for (File file : files) { 
      BufferedInputStream bis = null; 
      try { 
       bis = new BufferedInputStream(new FileInputStream(file)); 
       boolean done = false; 
       while (!done) { 
        int data = bis.read(); 
        boss.write(data); 
        done = data < 0; 
       } 
      } catch (Exception e) { 
       throw new AppException(AppExceptionCode.FILE_NOT_FOUND, e.getMessage()); 
      } finally { 
       try { 
        bis.close();//do this in a try catch just in case 
       } catch (Exception e) { 
        throw new AppException(AppExceptionCode.ERROR_ON_MERGE_FILE,e.getMessage()); 
       } 
      } 
     } 
    } catch (Exception e) { 
     throw new AppException(AppExceptionCode.ERROR_ON_MERGE_FILE,e.getMessage()); 
    } finally { 
     try { 
      boss.close(); 
     } catch (Exception e) { 
      throw new AppException(AppExceptionCode.ERROR_ON_MERGE_FILE,e.getMessage()); 
     } 
    } 
    for (File file : files) { 
     file.delete(); 
    } 

} 

This new merge method avoids the OOME exception, but the output files are always corrupted...

Can you tell me whether there is a fundamental mistake in this approach, or can you give me better code... Thank you very much....

Answer


Changing the loop like this should solve the problem:

int data = -1;
while ((data = bis.read()) != -1) {
    boss.write(data);
}

这样可以减少将无效字符写入文件的机会。


Anyway, for anyone who has problems merging thousands of file parts... feel free to use the code above.... – wahyuagungs
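When merging thousands of parts, one further option (not in the original thread) is `FileChannel.transferTo`, which lets the OS move the bytes between files without a user-space buffer at all. A sketch under that assumption, with illustrative names:

```java
import java.io.*;
import java.nio.channels.FileChannel;
import java.nio.file.*;
import java.util.*;

public class ChannelMerger {

    // Merge part files by transferring bytes directly between channels.
    static void merge(List<Path> parts, Path output) throws IOException {
        try (FileChannel out = FileChannel.open(output,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            for (Path part : parts) {
                try (FileChannel in = FileChannel.open(part, StandardOpenOption.READ)) {
                    long pos = 0, size = in.size();
                    // transferTo may transfer fewer bytes than requested, so loop.
                    while (pos < size) {
                        pos += in.transferTo(pos, size - pos, out);
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Self-check with two small temporary chunks.
        Path a = Files.createTempFile("part", ".1");
        Path b = Files.createTempFile("part", ".2");
        Files.write(a, "foo".getBytes());
        Files.write(b, "bar".getBytes());
        Path out = Files.createTempFile("merged", ".bin");
        merge(Arrays.asList(a, b), out);
        System.out.println(new String(Files.readAllBytes(out)));
    }
}
```

Since no data passes through a Java byte array, there is no sentinel value to mishandle and heap use stays flat no matter how large the parts are.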