Spark RDD vs Dataset performance

I am new to Spark. I am trying to use Spark 2.1 for a CEP purpose: detecting missing events within the last 2 minutes. I convert the received input into a JavaDStream of input events, then perform reduceByKeyAndWindow on inputEvents and execute Spark SQL:
JavaPairDStream<String, Long> reduceWindowed = inputEvents.reduceByKeyAndWindow(new MaxTimeFuntion(),
Durations.seconds(124), new Duration(2000));
reduceWindowed.foreachRDD((rdd, time) -> {
    SparkSession spark = TestSparkSessionSingleton.getInstance(rdd.context().getConf());
    JavaRDD<EventData> rowRDD = rdd.map(new org.apache.spark.api.java.function.Function<Tuple2<String, Long>, EventData>() {
        @Override
        public EventData call(Tuple2<String, Long> tuple) {
            EventData record = new EventData();
            record.setId(tuple._1);
            record.setEventTime(tuple._2);
            return record;
        }
    });
    Dataset<Row> eventDataFrames = spark.createDataFrame(rowRDD, EventData.class);
    eventDataFrames.createOrReplaceTempView("events");
    Dataset<Row> resultRows =
        spark.sql("select id, max(eventTime) as maxval from events group by id"
                + " having (unix_timestamp()*1000 - maxval >= 120000)");
});
I perform the same filtering using RDD functions:
JavaPairDStream<String, Long> filteredStream = reduceWindowed.filter(new Function<Tuple2<String, Long>, Boolean>() {
    @Override
    public Boolean call(Tuple2<String, Long> val) {
        return System.currentTimeMillis() - val._2() >= 120000;
    }
});
filteredStream.print();
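Both snippets boil down to the same staleness predicate on each id's latest event time. A minimal plain-Java sketch of that predicate (the class and method names here are illustrative, not part of Spark's API, and no Spark dependency is needed):

```java
// Illustrative sketch of the staleness check shared by the SQL and RDD paths.
// An id counts as "missing" when its latest event time is at least 2 minutes old.
public class StalenessCheck {
    static final long THRESHOLD_MS = 120_000L; // 2 minutes, as in both snippets

    static boolean isStale(long latestEventTimeMs, long nowMs) {
        return nowMs - latestEventTimeMs >= THRESHOLD_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(isStale(now - 130_000L, now)); // 130 s old -> true
        System.out.println(isStale(now - 10_000L, now));  // 10 s old  -> false
    }
}
```

The only difference is where "now" comes from: the SQL path uses `unix_timestamp()*1000` evaluated by Spark, while the RDD path calls `System.currentTimeMillis()` on each executor.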
Both approaches give me the same results, for the Dataset and for the RDD.

Am I using Spark SQL correctly?

In local mode, for the same input rate, the Spark SQL query consumes noticeably more CPU than the RDD function. Can anyone help me understand why Spark SQL consumes more CPU than the RDD filter function?