
Apache Spark 2.1.1 not running on Windows 10

I have installed Apache Spark 2.1.1 on Windows 10, with Java 1.8 and Python 3.6 from Anaconda 4.3.1. I have also downloaded winutils.exe and set the environment variables JAVA_HOME, HADOOP_HOME and SPARK_HOME, and updated the PATH variable accordingly. I also ran winutils.exe chmod -R 777 \tmp\hive. However, I get the following error when running pyspark from the cmd prompt.

Could someone please help and let me know if I have missed any important details?

Thanks in advance!
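
For reference, the setup described above corresponds roughly to the following cmd commands (a minimal sketch only; the Java, Hadoop and Spark install paths shown here are assumptions, not the asker's actual locations, and should point at wherever Java, winutils.exe and Spark really live):

rem hypothetical install locations -- adjust to your machine
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_131
set HADOOP_HOME=C:\Hadoop
set SPARK_HOME=C:\Spark
set PATH=%PATH%;%JAVA_HOME%\bin;%HADOOP_HOME%\bin;%SPARK_HOME%\bin
rem winutils.exe should match the Hadoop version Spark was built against
%HADOOP_HOME%\bin\winutils.exe chmod -R 777 \tmp\hive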

c:\Spark>bin\pyspark 
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 
Type "help", "copyright", "credits" or "license" for more information. 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
Traceback (most recent call last): 
    File "c:\Spark\python\pyspark\sql\utils.py", line 63, in deco 
    return f(*a, **kw) 
    File "c:\Spark\python\lib\py4j-0.10.4-src.zip\py4j\protocol.py", line 319, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o22.sessionState. 
: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState': 
     at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981) 
     at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110) 
     at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:498) 
     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 
     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 

I still get an error when launching spark-shell, but it looks like Spark does start afterwards, since I get the 'Welcome to Spark' part. The error I get is:

C:\Spark>bin\spark-shell 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/bin/../jars/datanucleus-api-jdo-3.2.6.jar." 
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/bin/../jars/datanucleus-rdbms-3.2.9.jar." 
17/06/23 12:20:15 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/C:/Spark/bin/../jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/C:/Spark/jars/datanucleus-core-3.2.10.jar." 
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState': 
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981) 
at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110) 
at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109) 
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878) 
at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878) 
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) 
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99) 
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230) 
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40) 
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99) 
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:878) 
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:96) 
... 47 elided 
Caused by: java.lang.reflect.InvocationTargetException: 
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog': 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978) 
... 58 more 
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog': 
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169) 
at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86) 
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101) 
at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101) 
at scala.Option.getOrElse(Option.scala:121) 
at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101) 
at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100) 
at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157) 
at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32) 
... 63 more 
Caused by: java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:166) 
... 71 more 
Caused by: java.lang.reflect.InvocationTargetException: 
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V 
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264) 
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:358) 
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:262) 
at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:66) 
... 76 more 
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V 
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method) 
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524) 
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:478) 
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:532) 
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:509) 
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305) 
at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:639) 
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:561) 
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508) 
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:188) 
... 84 more 
14: error: not found: value spark 
    import spark.implicits._ 
     ^
14: error: not found: value spark 
    import spark.sql 
     ^
Welcome to 

Thanks @Alfrabravo! – AmyJ


Do you also get errors when launching 'spark-shell', or is this specific to 'pyspark'? –


@SamsonScharfrichter I have updated my question; it certainly seems that spark-shell launches but pyspark does not – AmyJ

Answer


My setup was as follows (I did not use winutils.exe): install pyspark and findspark from the Anaconda command prompt:

pip3 install pyspark 

pip3 install findspark 

Since you have already downloaded the Spark distribution, unzip it and put it on the C: drive, e.g. "C:\spark-2.2.0-bin-hadoop2.7". Create a new environment variable SPARK_HOME and set it to "C:\spark-2.2.0-bin-hadoop2.7\bin", and also open the "Path" variable under the system variables and add the same location there. Now open a command prompt; from "C:\Users\*" run cd .. twice to get to "C:\" and run the following command:

set SPARK_HOME=spark-2.2.0-bin-hadoop2.7 

and you are good to go. Now you only need to point to the Spark location before importing pyspark in your Jupyter notebook, using the code below:

import findspark
findspark.init(r'C:\spark-2.2.0-bin-hadoop2.7')  # raw string so the backslashes are not treated as escapes
import pyspark 
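
After the imports above, a quick sanity check can confirm that a local SparkSession actually starts (a minimal sketch; the app name and the trivial range job are purely illustrative):

from pyspark.sql import SparkSession

# build a local session and run a trivial job; if the errors above are gone, this prints 5
spark = SparkSession.builder.master("local[*]").appName("smoke-test").getOrCreate()
print(spark.range(5).count())
spark.stop()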