
Hadoop/YARN (v0.23.3) Pseudo-Distributed Mode setup :: No job nodes

I just set up Hadoop/YARN 2.x (specifically v0.23.3) in Pseudo-Distributed mode.

I followed the instructions from several blogs & websites that, more or less, give the same recipe for setting it up. I also followed the 3rd edition of O'Reilly's Hadoop book (which, ironically, was the least helpful).

The problem:

After running "start-dfs.sh" and then "start-yarn.sh", while all of the daemons 
do start (as indicated by jps(1)), the ResourceManager web portal 
(here: http://localhost:8088/cluster/nodes) indicates 0 (zero) job-nodes in the 
cluster. So although the example/test Hadoop job I submit does get 
scheduled, it pends forever because, I assume, the configuration doesn't see a 
node to run it on. 

Below are the steps I performed, including the resultant configuration files. 
Hopefully the community can help me out... (And thank you in advance.) 

Configuration:

The following environment variables are set in the ~/.profile of both my own UNIX account and the hadoop UNIX account:

export HADOOP_HOME=/home/myself/APPS.d/APACHE_HADOOP.d/latest 
    # Note: /home/myself/APPS.d/APACHE_HADOOP.d/latest -> hadoop-0.23.3 

export HADOOP_COMMON_HOME=${HADOOP_HOME} 
export HADOOP_INSTALL=${HADOOP_HOME} 
export HADOOP_CLASSPATH=${HADOOP_HOME}/lib 
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf 
export HADOOP_MAPRED_HOME=${HADOOP_HOME} 
export YARN_HOME=${HADOOP_HOME} 
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop/conf 
export JAVA_HOME=/usr/lib/jvm/jre 
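
(Just to rule out a plain environment problem, here is a quick sanity check, a minimal sketch of my own, to confirm the variables are exported and that HADOOP_CONF_DIR resolves:)

hadoop$ printenv | grep -E 'HADOOP|YARN|JAVA_HOME' 
    # All of the exports above should print with the expected values. 
    hadoop$ ls ${HADOOP_CONF_DIR} 
    # Should list core-site.xml, hdfs-site.xml, yarn-site.xml, etc. 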

hadoop$ java -version

java version "1.7.0_06-icedtea" 
OpenJDK Runtime Environment (fedora-2.3.1.fc17.2-x86_64) 
OpenJDK 64-Bit Server VM (build 23.2-b09, mixed mode) 

# Although the above shows OpenJDK, the same problem happens with Sun's JRE/JDK. 

The Namenode & Datanode directories, which are also specified in etc/hadoop/conf/hdfs-site.xml:

/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/DATANODE.d/ 
/home/myself/APPS.d/APACHE_HADOOP.d/latest/YARN_DATA.d/HDFS.d/NAMENODE.d/ 
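
(For completeness: these directories were created by hand before formatting, roughly as follows, assuming the same paths as above:)

hadoop$ mkdir -p ${HADOOP_HOME}/YARN_DATA.d/HDFS.d/NAMENODE.d 
    hadoop$ mkdir -p ${HADOOP_HOME}/YARN_DATA.d/HDFS.d/DATANODE.d 
    # ${HADOOP_HOME} -> /home/myself/APPS.d/APACHE_HADOOP.d/latest, per ~/.profile. 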

Next, the various XML configuration files (again, YARN/MRv2/v0.23.3 here):

hadoop$ pwd; ls -l 
/home/myself/APPS.d/APACHE_HADOOP.d/latest/etc/hadoop/conf 
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 core-site.xml -> ../core-site.xml 
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 hdfs-site.xml -> ../hdfs-site.xml 
lrwxrwxrwx 1 hadoop hadoop 18 Sep 20 13:14 httpfs-site.xml -> ../httpfs-site.xml 
lrwxrwxrwx 1 hadoop hadoop 18 Sep 20 13:14 mapred-site.xml -> ../mapred-site.xml 
-rw-rw-r-- 1 hadoop hadoop 10 Sep 20 15:36 slaves 
lrwxrwxrwx 1 hadoop hadoop 16 Sep 20 13:14 yarn-site.xml -> ../yarn-site.xml 
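
(The symlinks themselves were created by hand, along these lines, one per file:)

hadoop$ cd ${HADOOP_CONF_DIR} 
    hadoop$ for f in core-site.xml hdfs-site.xml httpfs-site.xml mapred-site.xml yarn-site.xml; do ln -s ../${f} .; done 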

core-site.xml

<?xml version="1.0"?> 
<!-- core-site.xml --> 
<configuration> 
    <property> 
    <name>fs.default.name</name> 
    <value>hdfs://localhost/</value> 
    </property> 
</configuration> 

mapred-site.xml

<?xml version="1.0"?> 
<!-- mapred-site.xml --> 
<configuration> 

    <!-- Same problem whether this (legacy) stanza is included or not. --> 
    <property> 
    <name>mapred.job.tracker</name> 
    <value>localhost:8021</value> 
    </property> 

    <property> 
    <name>mapreduce.framework.name</name> 
    <value>yarn</value> 
    </property> 
</configuration> 

hdfs-site.xml

<!-- hdfs-site.xml --> 
<configuration> 
    <property> 
    <name>dfs.replication</name> 
    <value>1</value> 
    </property> 
    <property> 
    <name>dfs.namenode.name.dir</name> 
    <value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/NAMENODE.d</value> 
    </property> 
    <property> 
    <name>dfs.datanode.data.dir</name> 
    <value>file:/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/HDFS.d/DATANODE.d</value> 
    </property> 
</configuration> 

yarn-site.xml

<?xml version="1.0"?> 
<!-- yarn-site.xml --> 
<configuration> 
    <property> 
    <name>yarn.resourcemanager.address</name> 
    <value>localhost:8032</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.aux-services</name> 
    <value>mapreduce.shuffle</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
    <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.resource.memory-mb</name> 
    <value>4096</value> 
    </property> 
    <property> 
    <name>yarn.nodemanager.local-dirs</name> 
    <value>/home/myself/APPS.d/APACHE_HADOOP.d/YARN_DATA.d/TEMP.d</value> 
    </property> 
</configuration> 

etc/hadoop/conf/slaves

localhost 
    # Community/friends, is this entry correct/needed for my pseudo-dist mode? 

Miscellaneous wrap-up notes:

(1) As you may have gleaned from above, all files/directories are owned 
    by the 'hadoop' UNIX user. There is a 'hadoop' UNIX user and a 
    'hadoop' UNIX group (i.e. hadoop:hadoop). 

(2) The following command was run after the NAMENODE & DATANODE directories 
    (listed above) were created (and their paths entered into 
    hdfs-site.xml): 

    hadoop$ hadoop namenode -format 

(3) Next, I ran "start-dfs.sh", then "start-yarn.sh". 
    Here is jps(1) output: 

hadoop$ jps 
    21979 DataNode 
    22253 ResourceManager 
    22384 NodeManager 
    22156 SecondaryNameNode 
    21829 NameNode 
    22742 Jps 
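
Since jps(1) shows a NodeManager running while the portal shows zero nodes, I also looked for registration errors. This is my own diagnostic sketch (the log file names assume the default naming under ${HADOOP_HOME}/logs):

hadoop$ tail -n 50 ${HADOOP_HOME}/logs/yarn-*-nodemanager-*.log 
    # Look for errors while registering with the ResourceManager. 
    hadoop$ curl http://localhost:8088/ws/v1/cluster/nodes 
    # The ResourceManager REST API; an empty node list here matches 
    # what the web portal shows. 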

Thank you!


Don't know, but should 'file:/' be 'file://'? – scarcer

Answer


After having no success on this issue (and believe me, I tried everything), I got hadoop working using a different solution. Whereas above I downloaded a gzip/tar ball of the hadoop distribution from one of the download mirrors, this time I used the Cloudera CDH distribution of RPM packages, which I installed via their YUM repo. Hopefully this helps someone; detailed steps below.

Step 1:

For Hadoop 0.20.x (MapReduce version 1):

# rpm -Uvh http://archive.cloudera.com/redhat/6/x86_64/cdh/cdh3-repository-1.0-1.noarch.rpm 
    # rpm --import http://archive.cloudera.com/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera 
    # yum install hadoop-0.20-conf-pseudo 

- or -

For Hadoop 0.23.x (MapReduce version 2):

# rpm -Uvh http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.noarch.rpm 
    # rpm --import http://archive.cloudera.com/cdh4/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera 
    # yum install hadoop-conf-pseudo 

In both cases above, installing that "pseudo" package (which stands for "pseudo-distributed Hadoop" mode) will by itself conveniently trigger the installation of all the other packages you need (via dependency resolution).
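
(If you are curious what dependency resolution actually pulled in, a quick check like the following works; the exact package names differ between the CDH3 and CDH4 cases:)

# rpm -qa | grep -i hadoop | sort 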

Step 2:

Install Sun/Oracle's Java JRE (if you haven't already done so). You can install it via the RPM that they provide, or via the gzip/tar ball portable version. It doesn't matter which, as long as you set and properly export the "JAVA_HOME" environment variable, and make sure ${JAVA_HOME}/bin/java is in your PATH.

# echo $JAVA_HOME; which java 
    /home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07 
    /home/myself/APPS.d/JAVA-JRE.d/jdk1.7.0_07/bin/java 

Note: I actually create a symlink named "latest" and point/re-point it at the Java version-specific directory whenever I update Java. I was explicit with the version-specific path above for the reader's understanding; the symlink arrangement is sketched below.
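
For example, roughly like this (the version-specific directory name is whatever JDK you installed; mine is shown):

myself$ cd /home/myself/APPS.d/JAVA-JRE.d 
    myself$ ln -sfn jdk1.7.0_07 latest 
    myself$ export JAVA_HOME=/home/myself/APPS.d/JAVA-JRE.d/latest 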

Step 3: Format HDFS as the "hdfs" UNIX user (created by the "yum install" above).

# sudo su hdfs -c "hadoop namenode -format" 

Step 4:

Manually start the Hadoop daemons:

for file in /etc/init.d/hadoop* 
    do 
    ${file} start 
    done 
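
(The same pattern confirms they came up; this assumes the init scripts support a "status" action, which standard Red Hat style init scripts, including CDH's, do:)

for file in /etc/init.d/hadoop* 
    do 
    ${file} status 
    done 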

Step 5:

Check to see if everything is working. The following is for MapReduce v1 (which, at this superficial level, isn't much different from MapReduce v2; with MRv2 you would see ResourceManager and NodeManager in the jps output instead of JobTracker and TaskTracker).

root# jps 
    23104 DataNode 
    23469 TaskTracker 
    23361 SecondaryNameNode 
    23187 JobTracker 
    23267 NameNode 
    24754 Jps 

    # Do the next commands as yourself (not as "root"). 
    myself$ hadoop fs -mkdir /foo 
    myself$ hadoop fs -rmr /foo 
    myself$ hadoop jar /usr/lib/hadoop-0.20/hadoop-0.20.2-cdh3u5-examples.jar pi 2 100000 

I hope this helps!


P.S. This was done on a Fedora-17 x86-64 O/S.