PySpark throws java.io.EOFException when reading large files with boto3

I am using boto3 to read files from S3, which turned out to be much faster than sc.textFile(...). The files are roughly 300MB to 1GB each. The process looks like this:

data = sc.parallelize(list_of_files, numSlices=n_partitions) \
    .flatMap(read_from_s3_and_split_lines)

events = data.aggregateByKey(...)
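
read_from_s3_and_split_lines is not shown in the question; below is a minimal sketch of what such a function could look like, assuming list_of_files holds (bucket, key) pairs. The function name comes from the question, but everything inside it is an assumption:

import boto3

def read_from_s3_and_split_lines(path):
    # Assumption: each element of list_of_files is a (bucket, key) pair.
    bucket, key = path
    # Build the client inside the function: boto3 clients are not picklable,
    # so each executor must create its own rather than receive one from the driver.
    s3 = boto3.client('s3')
    body = s3.get_object(Bucket=bucket, Key=key)['Body']
    # The whole object is read into memory before splitting; for files in the
    # 300MB-1GB range, this is where the Python worker's memory pressure comes from.
    return body.read().decode('utf-8').splitlines()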

When I run this process, I get the following exception:

15/12/04 10:58:00 WARN TaskSetManager: Lost task 41.3 in stage 0.0 (TID 68, 10.83.25.233): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:203) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207) 
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125) 
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264) 
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:88) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.EOFException 
    at java.io.DataInputStream.readInt(DataInputStream.java:392) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139) 
    ... 15 more 

Most of the time only a few tasks crash and the job is able to recover. Sometimes, however, the whole job crashes after these errors occur. I have not been able to find the root cause, and the problem seems to appear and disappear depending on how many files I read and the exact transformations I apply... It never fails when reading a single file.

Answer


I have run into a similar problem, and my investigation showed that the cause was a lack of available memory for the Python processes: Spark had taken all of the memory, and the Python processes (where the actual PySpark work happens) crashed.

A few suggestions:

  1. add more memory to the machines,
  2. unpersist RDDs that are no longer needed,
  3. manage memory more deliberately, and put limits on Spark's own memory usage (see the sketch after this list).
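
A hedged sketch of what suggestions 2 and 3 can look like in practice. The configuration keys (spark.executor.memory, spark.python.worker.memory) are real Spark settings, but the values and the tiny example RDD are illustrative assumptions, not tuned recommendations:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("s3-reader")
        # Give the JVM less than the whole machine so the Python workers
        # have headroom (the values here are assumptions).
        .set("spark.executor.memory", "6g")
        # Python workers spill to disk during aggregation instead of
        # growing past this limit.
        .set("spark.python.worker.memory", "1g"))
sc = SparkContext(conf=conf)

rdd = sc.parallelize(["a b", "c d"]).flatMap(lambda s: s.split())
rdd.cache()
print(rdd.count())   # materialize the RDD while it is cached
rdd.unpersist()      # suggestion 2: release the RDD as soon as it is consumed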