
pySpark DataFrames Aggregation Functions with SciPy

I have tried a few different scenarios using Spark 1.3 DataFrames for things like scipy kurtosis or numpy std. Here is the sample code, but it just hangs on a 10x10 dataset (10 rows, 10 columns). I have tried:

print df.groupBy().agg(kurtosis(df.offer_id)).collect() 

print df.agg(kurtosis(df.offer_id)).collect() 

But this works without a problem:

print df.agg(F.min(df.offer_id), F.min(df.decision_id)).collect() 

My guess is that this works because F (from pyspark.sql import functions as F) only exposes the built-in SQL functions. How would I do things like kurtosis on a dataset with DataFrames?

This also just hangs:

print df.map(kurtosis(df.offer_id)).collect() 
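
For what it's worth, one workaround that does run on small data is to step down to the RDD API and hand the column to scipy directly. A minimal sketch (not from the original post; it collects the values to the driver, so it is only sensible for small datasets):

from scipy.stats import kurtosis

# Pull the column out as a plain RDD of values, collect it locally, and
# let scipy compute the kurtosis. Nothing here is a distributed aggregate.
values = df.rdd.map(lambda row: row.offer_id).collect()
print kurtosis(values)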

Answer


Sadly, Spark SQL's current support for Python UDFs is a bit lacking. I've been looking at adding some UDFs in Scala and making them callable from Python for a project I'm working on, so I used kurtosis as a quick proof of concept of implementing a UDAF. The branch currently lives at https://github.com/holdenk/sparklingpandas/tree/add-kurtosis-support

The first step is defining our UDAF in Scala. This is probably less than ideal, but here is an implementation:

// Imports assume Spark 1.3's Catalyst internals plus Apache Commons Math3;
// EvilSqlTools is a small helper that lives in the linked branch.
import org.apache.commons.math3.stat.descriptive.moment.{Kurtosis => ApacheKurtosis}
import org.apache.spark.sql.{Column, Row}
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.types._

object functions { 
    def kurtosis(e: Column): Column = new Column(Kurtosis(EvilSqlTools.getExpr(e))) 
} 

case class Kurtosis(child: Expression) extends AggregateExpression { 
    def this() = this(null) 

    override def children = child :: Nil 
    override def nullable: Boolean = true 
    override def dataType: DataType = DoubleType 
    override def toString: String = s"Kurtosis($child)" 
    override def newInstance() = new KurtosisFunction(child, this) 
} 

case class KurtosisFunction(child: Expression, base: AggregateExpression) extends AggregateFunction { 
    def this() = this(null, null) 

    // Buffer every value this aggregate sees; kurtosis needs the full sample.
    var data = scala.collection.mutable.ArrayBuffer.empty[Any] 

    override def update(input: Row): Unit = { 
        data += child.eval(input) 
    } 

    // This function seems shaaady 
    // TODO: Do something more reasonable 
    private def toDouble(x: Any): Double = { 
        x match { 
            case x: NumericType => EvilSqlTools.toDouble(x.asInstanceOf[NumericType]) 
            case x: Long => x.toDouble 
            case x: Int => x.toDouble 
            case x: Double => x 
        } 
    } 

    override def eval(input: Row): Any = { 
        if (data.isEmpty) { 
            println("No data???") 
            null 
        } else { 
            // Convert the buffered values to doubles and hand them to
            // Commons Math's Kurtosis implementation.
            val inputAsDoubles = data.toList.map(toDouble) 
            println("computing on input " + inputAsDoubles) 
            val inputArray = inputAsDoubles.toArray 
            val apacheKurtosis = new ApacheKurtosis() 
            val result = apacheKurtosis.evaluate(inputArray, 0, inputArray.size) 
            println("result " + result) 
            Cast(Literal(result), DoubleType).eval(null) 
        } 
    } 
} 

Then, using similar logic to Spark SQL's functions.py, we can expose it on the Python side:

"""Our magic extend functions. Here lies dragons and a sleepy holden.""" 
from py4j.java_collections import ListConverter 

from pyspark import SparkContext 
from pyspark.sql.dataframe import Column, _to_java_column 

__all__ = [] 
def _create_function(name, doc=""): 
    """ Create a function for aggregator by name""" 
    def _(col): 
     sc = SparkContext._active_spark_context 
     jc = getattr(sc._jvm.com.sparklingpandas.functions, name)(col._jc if isinstance(col, Column) else col) 
     return Column(jc) 
    _.__name__ = name 
    _.__doc__ = doc 
    return _ 

_functions = { 
    'kurtosis': 'Calculate the kurtosis, maybe!', 
} 


for _name, _doc in _functions.items(): 
    globals()[_name] = _create_function(_name, _doc) 
del _name, _doc 
__all__ += _functions.keys() 
__all__.sort() 
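
For clarity, the generated kurtosis function above is roughly what you would get by writing the wrapper out by hand, e.g. (a sketch, assuming the Scala object compiles into com.sparklingpandas.functions):

def kurtosis(col):
    """Calculate the kurtosis, maybe!"""
    sc = SparkContext._active_spark_context
    # Call the Scala com.sparklingpandas.functions.kurtosis(Column) via py4j.
    jc = sc._jvm.com.sparklingpandas.functions.kurtosis(
        col._jc if isinstance(col, Column) else col)
    return Column(jc)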

And then we can go ahead and use it as a UDAF like so:

from sparklingpandas.custom_functions import * 
from pyspark.sql import Row 
import random 

input = range(1, 6) + range(1, 6) + range(1, 6) + range(1, 6) + range(1, 6) + range(1, 6) 
df1 = sqlContext.createDataFrame( 
    sc.parallelize(input).map(lambda i: Row(single=i, rand=random.randint(0, 100000)))) 
df1.collect() 
import pyspark.sql.functions as F 
x = df1.groupBy(df1.single).agg(F.min(df1.rand)) 
x.collect() 
j = df1.groupBy(df1.single).agg(kurtosis(df1.rand)) 
j.collect() 

I don't think the UDF route works: when I do kert = udf(lambda x: kurtosis(x), FloatType()) and then print df.select(kert(df.offer_id)).collect(), it doesn't work because it passes each value in separately. And you can't use it in an .agg, so I am trying to think of another way. – theMadKing


It does, indeed. I actually work on Sparkling Pandas as a side project and am interested in this kind of thing, so I've started some work to implement support for it. I'll update my answer with the details. – Holden


Updated (it's a lot of code, mostly because we need to do work on both the Scala side and the Python side). – Holden