Running pyspark 1.6.X comes up fine, but pyspark 2.X cannot be run due to a Hive metastore connection problem:
17/02/25 17:35:41 INFO storage.BlockManagerMaster: Registered BlockManager
Welcome to
____ __
/__/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__/.__/\_,_/_/ /_/\_\ version 1.6.1
/_/
Using Python version 2.7.13 (default, Dec 17 2016 23:03:43)
SparkContext available as sc, SQLContext available as sqlContext.
>>>
But after I reset SPARK_HOME, PYTHONPATH, and PATH to point at the Spark 2.x installation, things went south quickly:

(a) I have to manually delete the Derby metastore_db every time.

(b) pyspark does not start up:
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
NOTE: SPARK_PREPEND_CLASSES is set, placing locally compiled Spark classes ahead of assembly.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/02/25 17:32:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/25 17:32:53 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/02/25 17:32:53 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
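For reference, the environment switch described above amounts to something like the following sketch. The install path and py4j version are illustrative assumptions, not taken from the thread:

```shell
# Illustrative sketch: point the environment at a Spark 2.x install.
# The exact paths and py4j zip name depend on your download.
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
export PYTHONPATH="$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH"
export PATH="$SPARK_HOME/bin:$PATH"
```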
I do not need/care about hive functionality: pyspark hangs after printing those unhappy warnings. But it may well be the case that hive is required by Spark 2.X. What is the simplest working configuration to make pyspark 2.X happy?
The warnings are fine; they just say an empty metastore is being created. Which libraries did you prepend via `SPARK_PREPEND_CLASSES`? And when pyspark hangs during initialization, could you attach a thread dump of the spark JVM process? – Mariusz
Have you tried the [enableHiveSupport](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.SparkSession.Builder.enableHiveSupport) function? Even though I was not accessing Hive, I also ran into DataFrame problems when migrating from 1.6 to 2.x. Calling that function on the builder solved my problem. (You can also add it to the config.) – santon
@santon Please make that an answer: I do have some follow-up questions, but would like to start by awarding credit. – javadba