Spark SASL not working on YARN

First off, I want to say that the only place I have seen a solution to this issue is here: Spark 1.6.1 SASL. However, even after adding the Spark and YARN authentication configuration, it still does not work. Below is my Spark configuration, submitted with spark-submit on a YARN cluster on Amazon EMR:
SparkConf sparkConf = new SparkConf().setAppName("secure-test");
sparkConf.set("spark.authenticate.enableSaslEncryption", "true");
sparkConf.set("spark.network.sasl.serverAlwaysEncrypt", "true");
sparkConf.set("spark.authenticate", "true");
sparkConf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
sparkConf.set("spark.kryo.registrator", "org.nd4j.Nd4jRegistrator");
try {
    sparkConf.registerKryoClasses(new Class<?>[]{
            Class.forName("org.apache.hadoop.io.LongWritable"),
            Class.forName("org.apache.hadoop.io.Text")
    });
} catch (ClassNotFoundException e) {
    // Don't swallow the failure silently; log it so a missing Hadoop class is visible.
    e.printStackTrace();
}
sparkContext = new JavaSparkContext(sparkConf);
sparkContext.hadoopConfiguration().set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
sparkContext.hadoopConfiguration().set("fs.s3a.enableServerSideEncryption", "true");
sparkContext.hadoopConfiguration().set("spark.authenticate", "true");
Note that I set spark.authenticate on the sparkContext's Hadoop configuration in code instead of in core-site.xml (which I assume I can do, since other settings applied this way work as well).
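For reference, this is the core-site.xml form that I assume would be equivalent to that in-code setting (a sketch based on my assumption; I have not verified it on EMR):

```xml
<!-- Assumed core-site.xml equivalent of the in-code
     hadoopConfiguration().set("spark.authenticate", "true") call -->
<property>
  <name>spark.authenticate</name>
  <value>true</value>
</property>
```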
Looking here: https://github.com/apache/spark/blob/master/common/network-yarn/src/main/java/org/apache/spark/network/yarn/YarnShuffleService.java it seems that both spark.authenticate settings are necessary. When I run this application, I get the following stack trace.
17/01/03 22:10:23 INFO storage.BlockManager: Registering executor with local external shuffle service.
17/01/03 22:10:23 ERROR client.TransportClientFactory: Exception while bootstrapping client after 178 ms
java.lang.RuntimeException: java.lang.IllegalArgumentException: Unknown message type: -22
    at org.apache.spark.network.shuffle.protocol.BlockTransferMessage$Decoder.fromByteBuffer(BlockTransferMessage.java:67)
    at org.apache.spark.network.shuffle.ExternalShuffleBlockHandler.receive(ExternalShuffleBlockHandler.java:71)
    at org.apache.spark.network.server.TransportRequestHandler.handle(...)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:104)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead0(TransportChannelHandler.java:51)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:86)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    at java.lang.Thread.run(Thread.java:745)
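My reading of the YarnShuffleService source above is that the NodeManager-side shuffle service picks up spark.authenticate from the Hadoop/YARN configuration it is started with, not from my application's SparkConf. So I assume each node's yarn-site.xml would need something like the following (a sketch of my assumption, not a verified fix; the aux-service wiring itself is from the Spark running-on-YARN docs):

```xml
<!-- Register Spark's external shuffle service with the NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<!-- Assumption: the shuffle service reads this from the NodeManager's config -->
<property>
  <name>spark.authenticate</name>
  <value>true</value>
</property>
```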
In the Spark documentation, it says:
For Spark on YARN deployments, configuring spark.authenticate to true will automatically handle generating and distributing the shared secret. Each application will use a unique shared secret.
This seems wrong based on the comments in the YARN file above, but in troubleshooting this issue I am still lost as to where I should go to get SASL working. Am I missing something obvious that is documented somewhere?
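In case it matters, this is roughly how I am passing the same settings at submit time (a sketch; the jar name is a placeholder, and these --conf flags duplicate what the code above already sets):

```shell
# Placeholder jar name; --conf values mirror the SparkConf settings in the code above
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.authenticate=true \
  --conf spark.authenticate.enableSaslEncryption=true \
  --conf spark.network.sasl.serverAlwaysEncrypt=true \
  secure-test.jar
```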