2015-12-20 65 views

I am running a 16 GB MacBook Pro with OS X El Capitan. I installed the Cloudera quickstart Docker image... everything errors out.

docker pull cloudera/quickstart:latest 
docker run --privileged=true --hostname=quickstart.cloudera -t -i 9f3ab06c7554 /usr/bin/docker-quickstart 

The Cloudera Docker image boots fine, and I can see most of the services start up:

Started Hadoop historyserver:        [ OK ] 
starting nodemanager, logging to /var/log/hadoop-yarn/yarn-yarn-nodemanager-quickstart.cloudera.out 
Started Hadoop nodemanager:        [ OK ] 
starting resourcemanager, logging to /var/log/hadoop-yarn/yarn-yarn-resourcemanager-quickstart.cloudera.out 
Started Hadoop resourcemanager:       [ OK ] 
starting master, logging to /var/log/hbase/hbase-hbase-master-quickstart.cloudera.out 
Started HBase master daemon (hbase-master):    [ OK ] 
starting rest, logging to /var/log/hbase/hbase-hbase-rest-quickstart.cloudera.out 
Started HBase rest daemon (hbase-rest):     [ OK ] 
starting thrift, logging to /var/log/hbase/hbase-hbase-thrift-quickstart.cloudera.out 
Started HBase thrift daemon (hbase-thrift):    [ OK ] 
Starting Hive Metastore (hive-metastore):     [ OK ] 
Started Hive Server2 (hive-server2):      [ OK ] 
Starting Sqoop Server:          [ OK ] 
Sqoop home directory: /usr/lib/sqoop2 

There are a few failures as well:

Failure to start Spark history-server (spark-history-server):  [FAILED] (return value: 1) 
Starting Hadoop HBase regionserver daemon: starting regionserver, logging to /var/log/hbase/hbase-hbase-regionserver-quickstart.cloudera.out 
hbase-regionserver. 
Starting hue:            [FAILED] 

But once boot-up is complete, anything I try to run fails.

For example, trying to run spark-shell:

[root@quickstart.cloudera /]# spark-shell 
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000b0000000, 357892096, 0) failed; error='Cannot allocate memory' (errno=12) 
# 
# There is insufficient memory for the Java Runtime Environment to continue. 
# Native memory allocation (malloc) failed to allocate 357892096 bytes for committing reserved memory. 
# An error report file with more information is saved as: 
# //hs_err_pid3113.log 
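The JVM here is asking for roughly 341 MB (357892096 bytes) and the container cannot satisfy even that, which suggests the container's VM, not Spark itself, is short on memory. A quick diagnostic sketch (not from the original post) to see how much memory the container actually has:

```shell
# Print total/used/free memory in MB inside the container.
# Guarded so this is a no-op where `free` is unavailable (e.g. on macOS itself).
if command -v free >/dev/null 2>&1; then
    free -m
fi
```

If the `total` column of the `Mem:` row is far below the 8192 MB the quickstart services want, allocations like the one above will keep failing.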

Or trying to run the Hive shell:

[root@quickstart.cloudera /]# hive 
Unable to determine Hadoop version information. 
'hadoop version' returned: 
Hadoop 2.6.0-cdh5.5.0 Subversion http://github.com/cloudera/hadoop -r fd21232cef7b8c1f536965897ce20f50b83ee7b2 Compiled by jenkins on 2015-11-09T20:37Z Compiled with protoc 2.5.0 From source with checksum 98e07176d1787150a6a9c087627562c This command was run using /usr/jars/hadoop-common-2.6.0-cdh5.5.0.jar 
[root@quickstart.cloudera /]# 

My question is: what can I do so that I can run spark-shell and the Hive shell successfully?

Comments:

- What is your host operating system?
- Mac OS X El Capitan. I have 16 GB of physical memory on my machine.
- Solved the problem. I did a `docker-machine stop default`, then went into VirtualBox and raised the memory to 8 GB. Then I ran `docker-machine start default` and started the quickstart container again. Hive and spark-shell now start up successfully.

Answer


Because you are running Docker on a Mac, Docker runs inside a VirtualBox VM and does not have direct access to the Mac's memory. (The same thing happens on Windows.)

You probably would not get these errors on a Linux host, because there Docker is not virtualized.

The Cloudera quickstart VM recommends 8 GB of memory to run all of the services, and the Docker VM only has 512 MB by default, I think.

The fix is to stop the docker-machine instance, open VirtualBox, and increase the memory of the "default" VM to the necessary amount.
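The same memory change can also be scripted instead of done through the VirtualBox GUI. A minimal sketch, assuming the docker-machine VM has the standard name `default` and that VirtualBox's `VBoxManage` tool is on the PATH:

```shell
# Guarded so the commands are a no-op on machines without docker-machine.
if command -v docker-machine >/dev/null 2>&1; then
    # The VM must be stopped before its settings can be changed
    docker-machine stop default

    # Raise the VM's memory to 8 GB (8 * 1024 = 8192 MB), matching the
    # quickstart image's recommendation
    VBoxManage modifyvm default --memory 8192

    docker-machine start default
fi
```

After the VM restarts, re-run the `docker run` command from the question; the services should then have enough memory to start.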
