I have configured Spark to run on the two nodes of the HDFS cluster that hosts the input files. I want to dump all the stats files produced via metrics.properties to HDFS, or to a local directory on each node. Spark metrics error: CsvReporter: Error writing to jvm.PS-MarkSweep.count
Here is the stats-location configuration in my metrics.properties:
*.sink.csv.directory=hdfs://ip:port/user/spark_stats/
I also tried creating a temporary local directory on each node and configuring metrics.properties as follows:
*.sink.csv.directory=/tmp/spark_stats/
Both approaches give errors like the following:
16/03/02 15:41:49 WARN CsvReporter: Error writing to jvm.PS-MarkSweep.count
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at com.codahale.metrics.CsvReporter.report(CsvReporter.java:241)
at com.codahale.metrics.CsvReporter.reportGauge(CsvReporter.java:234)
at com.codahale.metrics.CsvReporter.report(CsvReporter.java:150)
at com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162)
at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/03/02 15:41:49 WARN CsvReporter: Error writing to jvm.PS-MarkSweep.count
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
....
My application still runs and completes fine, but the Spark log files show errors when writing the stats files. Has anyone run into this problem?
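One thing worth checking for the local-directory variant: the stack trace comes from java.io.File.createNewFile, which fails with "No such file or directory" if the parent directory is missing, and CsvReporter does not create the sink directory itself. A minimal sanity check, to be run on every node before starting the application (the path matches the config above):

```shell
# CsvReporter writes each metric to its own CSV file inside the sink
# directory; the directory must already exist or createNewFile fails.
mkdir -p /tmp/spark_stats
ls -ld /tmp/spark_stats
```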
Follow-up: after looking at the error messages more carefully, all of the IO errors are caused by writing the master's jvm metrics. If I specify that only the worker, driver, and executors should dump their jvm metrics, the errors no longer appear.
One fix is to put this line in the metrics.properties file: executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
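Putting the pieces together, a sketch of a full metrics.properties that registers the JVM source per instance (driver, executor, worker) instead of with the `*` wildcard, so the master never tries to write jvm metrics; the period and unit values are illustrative, not requirements:

```properties
# Register the JVM metrics source only for the instances that should
# report it; the master is deliberately left out.
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource

# CSV sink: write stats every 10 seconds to a local directory that
# must already exist on every node.
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.period=10
*.sink.csv.unit=seconds
*.sink.csv.directory=/tmp/spark_stats/
```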