My Hadoop version is 2.6.0-cdh5.10.0 and I am using the Cloudera VM. I am unable to access the Hadoop HDFS file system from a MapReduce Java program.
I want to access the HDFS file system from my code, so that I can read files and add them as input or as cache files.
When I try to access HDFS files through the command line, I can list them without any problem.
Command:
[[email protected] java]$ hadoop fs -ls hdfs://localhost:8020/user/cloudera
Found 5 items
-rw-r--r-- 1 cloudera cloudera 106 2017-02-19 15:48 hdfs://localhost:8020/user/cloudera/test
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:42 hdfs://localhost:8020/user/cloudera/test_op
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:49 hdfs://localhost:8020/user/cloudera/test_op1
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:12 hdfs://localhost:8020/user/cloudera/wc_output
drwxr-xr-x - cloudera cloudera 0 2017-02-19 15:16 hdfs://localhost:8020/user/cloudera/wc_output1
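What I ultimately want is the equivalent of that listing from Java. A minimal sketch of what I mean, using the Hadoop FileSystem API (assuming the same hdfs://localhost:8020 URI; the class name is illustrative, this is not my actual job code):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Lists /user/cloudera the same way `hadoop fs -ls` does.
public class ListHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:8020"), conf);
        for (FileStatus status : fs.listStatus(new Path("/user/cloudera"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}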
When I try to access the same files through my MapReduce program, I get a FileNotFoundException. The configuration code of my sample MapReduce program is:
public int run(String[] args) throws Exception {

    Configuration conf = getConf();

    if (args.length != 2) {
        System.err.println("Usage: test <in> <out>");
        System.exit(2);
    }

    ConfigurationUtil.dumpConfigurations(conf, System.out);

    LOG.info("input: " + args[0] + " output: " + args[1]);

    Job job = Job.getInstance(conf);

    job.setJobName("test");

    job.setJarByClass(Driver.class);
    job.setMapperClass(Mapper.class);
    job.setReducerClass(Reducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);

    job.addCacheFile(new Path("hdfs://localhost:8020/user/cloudera/test/test.tsv").toUri());

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    boolean result = job.waitForCompletion(true);
    return (result) ? 0 : 1;
}
The job.addCacheFile line in the snippet above results in a FileNotFoundException.
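For what it's worth, my understanding of how a file registered with job.addCacheFile() is meant to be consumed on the task side is roughly the following (a sketch pieced together from the docs; the class and variable names are illustrative, this is not my actual reducer):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SampleReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // addCacheFile() localizes the file into the task's working
        // directory, so it should be opened by its bare file name,
        // not by the original hdfs:// URI.
        URI[] cacheFiles = context.getCacheFiles();
        if (cacheFiles != null && cacheFiles.length > 0) {
            String localName = new Path(cacheFiles[0].getPath()).getName();
            try (BufferedReader reader = new BufferedReader(new FileReader(localName))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // parse each line of the cached test.tsv here
                }
            }
        }
    }
}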
2) My second question is:
In core-site.xml I have set localhost:9000 as the default HDFS file system URI. But from the command prompt I can only reach the default HDFS file system on port 8020, not on 9000; when I try port 9000 I end up with a ConnectionRefused exception. I am not sure where the configuration is actually being read from.
My core-site.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!--
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/student/tmp/hadoop-local/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>Default file system URI. URI: scheme://authority/path; scheme: method of access; authority: host, port, etc.</description>
  </property>
</configuration>
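A quick way to see which URI the client actually resolves would be something like this (a minimal sketch; it just prints whatever the core-site.xml on the classpath provides, and fs.defaultFS is the current name of the deprecated fs.default.name key):

import org.apache.hadoop.conf.Configuration;

public class ShowDefaultFs {
    public static void main(String[] args) {
        // Reads core-site.xml from the classpath; the deprecated key
        // fs.default.name is transparently mapped onto fs.defaultFS.
        Configuration conf = new Configuration();
        System.out.println(conf.get("fs.defaultFS"));
    }
}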
My hdfs-site.xml is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/tmp/hdfs/name</value>
    <description>Determines where on the local filesystem the DFS name
      node should store the name table (fsimage).</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hdfs/data</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks.</description>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication. Usually 3, 1 in our case.
    </description>
  </property>
</configuration>
I am getting the following exception:
java.io.FileNotFoundException: hdfs:/localhost:8020/user/cloudera/test/ (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:146)
at java.io.FileInputStream.<init>(FileInputStream.java:101)
at java.io.FileReader.<init>(FileReader.java:58)
at hadoop.TestDriver$ActorWeightReducer.setup(TestDriver.java:104)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:168)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
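If I read the trace correctly, the exception comes from setup() in TestDriver$ActorWeightReducer opening the cached path with a plain java.io.FileReader, i.e. roughly the following (a reconstruction for illustration, not the exact source; FileReader resolves its argument against the local file system, so an hdfs:// URI ends up treated as a nonexistent local path):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class CacheReadRepro {
    public static void main(String[] args) throws IOException {
        // Fails with FileNotFoundException: the URI is interpreted
        // as a local path, mirroring the stack trace above.
        BufferedReader in = new BufferedReader(
                new FileReader("hdfs://localhost:8020/user/cloudera/test/"));
        System.out.println(in.readLine());
        in.close();
    }
}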
Any help would be appreciated!
Can you share the arguments you are passing when you try to access the file through MapReduce? –
@siddhartha jain The arguments are: hadoop Test.jar path-to-driverclass HDFS-path-to-input output – user1477232
Can you post the exception thrown by the program? –