2017-04-22

I want to implement the following computation in NumPy; here is my code. How can I optimize the evaluation of this function in NumPy?

I have tried the loop-based NumPy code below for this problem. I'd like to know whether there is a more efficient way to do this kind of computation. I'd really appreciate any help!

import numpy as np

k, d = X.shape
m = Y.shape[0]

c1 = 2.0*sigma**2
c2 = 0.5*np.log(np.pi*c1)
c3 = np.log(1.0/k)

L_B = np.zeros((m,))
for i in range(m):
    if i % 100 == 0:
        print(i)
    L_B[i] = np.log(np.sum(np.exp(np.sum(-np.divide(
        np.power(X-Y[i,:],2), c1)-c2,1)+c3)))

print(np.mean(L_B))

I have thought about creating a three-dimensional tensor so that the computation below could be done by broadcasting, i.e. np.expand_dims(X, 2).repeat(Y.shape[0], 2)-Y, but when m is large this wastes a lot of memory.
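For the array sizes used in the timings further down (k=100, d=10, m=100000), a quick back-of-the-envelope check shows why that three-dimensional intermediate is a problem (a minimal sketch; float64 storage assumed):

```python
# Size of a float64 temporary of shape (k, d, m), as produced by
# np.expand_dims(X, 2).repeat(Y.shape[0], 2): k*d*m elements, 8 bytes each
k, d, m = 100, 10, 100000
bytes_needed = k * d * m * 8
print(bytes_needed / 1e9)  # ~0.8 GB for a single temporary
```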

I also believe that np.einsum() just uses for loops internally, so it may not be efficient; correct me if I'm wrong.

Any ideas?

Answers

#1

Optimization stage #1

The first level of optimization is a direct conversion of the loopy code to broadcasting, based on introducing a new axis, and as such not very memory-efficient, as listed below -

p1 = (-((X[:,None] - Y)**2)/c1)-c2   # (k, m, d) broadcasted tensor
p11 = p1.sum(2)                      # reduce over the d features
p2 = np.exp(p11+c3)
out = np.log(p2.sum(0)).mean()
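On small random inputs, the stage #1 broadcasting can be checked against the original loop (a minimal sketch; the shapes and `sigma` below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))   # k x d
Y = rng.standard_normal((7, 3))   # m x d
sigma = 1.5

c1 = 2.0*sigma**2
c2 = 0.5*np.log(np.pi*c1)
c3 = np.log(1.0/X.shape[0])

# Loopy reference: one pass per row of Y
L_B = np.array([np.log(np.sum(np.exp(np.sum(-((X - y)**2)/c1 - c2, 1) + c3)))
                for y in Y])

# Stage #1 broadcasting: X[:, None] is (k, 1, d) and broadcasts against (m, d)
p1 = (-((X[:, None] - Y)**2)/c1) - c2
out = np.log(np.exp(p1.sum(2) + c3).sum(0))

assert np.allclose(L_B, out)
```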

Optimization stage #2

Keeping an eye on further optimization by separating out the operations on the constants, I ended up with the following -

c10 = -c1
c20 = X.shape[1]*c2   # the per-feature constant c2 summed over all d features

subs = (X[:,None] - Y)**2
p00 = subs.sum(2)
p10 = p00/c10
p11 = p10-c20
p2 = np.exp(p11+c3)
out = np.log(p2.sum(0)).mean()
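The constant folding above relies on the identity sum_j(-(x_j - y_j)**2/c1 - c2) = -||x - y||**2/c1 - d*c2, which can be sanity-checked on small random inputs (a minimal sketch with arbitrary shapes):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Y = rng.standard_normal((7, 3))
sigma = 1.5

c1 = 2.0*sigma**2
c2 = 0.5*np.log(np.pi*c1)
c3 = np.log(1.0/X.shape[0])

# Stage #1: constants applied elementwise inside the (k, m, d) tensor
s1 = np.log(np.exp(((-((X[:, None] - Y)**2)/c1) - c2).sum(2) + c3).sum(0))

# Stage #2: constants folded out of the feature-axis reduction
c10 = -c1
c20 = X.shape[1]*c2
s2 = np.log(np.exp(((X[:, None] - Y)**2).sum(2)/c10 - c20 + c3).sum(0))

assert np.allclose(s1, s2)
```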

Optimization stage #3

Going further and looking for places where the operations could be optimized, I ended up using Scipy's cdist to replace the heavy lifting of the squaring and sum-reduction. This should be quite memory-efficient, giving us the final implementation, as shown below -

from scipy.spatial.distance import cdist 

# Setup constants 
c10 = -c1 
c20 = X.shape[1]*c2 
c30 = c20-c3 
c40 = np.exp(c30) 
c50 = np.log(c40) 

# Get stagewise operations corresponding to loopy ones 
p1 = cdist(X, Y, 'sqeuclidean') 
p2 = np.exp(p1/c10).sum(0) 
out = np.log(p2).mean() - c50 
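If SciPy is not available, the same (k, m) matrix of squared distances can be obtained in pure NumPy via the expansion ||x - y||**2 = ||x||**2 + ||y||**2 - 2*x.y, which is essentially what cdist(X, Y, 'sqeuclidean') computes without materializing a (k, m, d) temporary (a minimal sketch on arbitrary small inputs):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 3))   # k x d
Y = rng.standard_normal((7, 3))   # m x d

# ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, using only (k, m)-sized temporaries
d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1) - 2.0*(X @ Y.T)

# Matches the broadcasting version (which builds a (k, m, d) tensor)
assert np.allclose(d2, ((X[:, None] - Y)**2).sum(2))
```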

Runtime tests

Approaches -

def loopy_app(X, Y, sigma):
    k, d = X.shape
    m = Y.shape[0]

    c1 = 2.0*sigma**2
    c2 = 0.5*np.log(np.pi*c1)
    c3 = np.log(1.0/k)

    L_B = np.zeros((m,))
    for i in range(m):
        L_B[i] = np.log(np.sum(np.exp(np.sum(-np.divide(
            np.power(X-Y[i,:],2), c1)-c2,1)+c3)))

    return np.mean(L_B)

def vectorized_app(X, Y, sigma):
    # Setup constants
    k, d = X.shape
    c1 = 2.0*sigma**2
    c2 = 0.5*np.log(np.pi*c1)
    c3 = np.log(1.0/k)

    c10 = -c1
    c20 = X.shape[1]*c2
    c30 = c20-c3
    c40 = np.exp(c30)
    c50 = np.log(c40)

    # Get stagewise operations corresponding to loopy ones
    p1 = cdist(X, Y, 'sqeuclidean')
    p2 = np.exp(p1/c10).sum(0)
    out = np.log(p2).mean() - c50
    return out

Timings and verification -

In [294]: # Setup inputs with m(=Y.shape[0]) being a large number 
    ...: X = np.random.randint(0,9,(100,10)) 
    ...: Y = np.random.randint(0,9,(10000,10)) 
    ...: sigma = 2.34 
    ...: 

In [295]: np.allclose(loopy_app(X, Y, sigma),vectorized_app(X, Y, sigma)) 
Out[295]: True 

In [296]: %timeit loopy_app(X, Y, sigma) 
1 loops, best of 3: 225 ms per loop 

In [297]: %timeit vectorized_app(X, Y, sigma) 
10 loops, best of 3: 23.6 ms per loop 

In [298]: # Setup inputs with m(=Y.shape[0]) being a much large number 
    ...: X = np.random.randint(0,9,(100,10)) 
    ...: Y = np.random.randint(0,9,(100000,10)) 
    ...: sigma = 2.34 
    ...: 

In [299]: np.allclose(loopy_app(X, Y, sigma),vectorized_app(X, Y, sigma)) 
Out[299]: True 

In [300]: %timeit loopy_app(X, Y, sigma) 
1 loops, best of 3: 2.27 s per loop 

In [301]: %timeit vectorized_app(X, Y, sigma) 
1 loops, best of 3: 243 ms per loop 

That's around a 10x speedup!

Amazing! Over 10x! – xxx222

@xxx222 What speedup do you get on your actual dataset? – Divakar

About 20x or so; I have a very large dataset, so computing the distance matrix becomes difficult. – xxx222