
I'm wondering what the fastest convex optimization solver in Matlab is, or whether there is any way to speed up my current solver. I'm using CVX, but it takes forever to solve my problem. The optimization I need to solve is

minimize norm(Ax-b, 2) 
subject to 
x >= 0 
and x d <= delta 

where A and b are very large.

Is there any way to solve it by least squares first and then transfer that to the constrained version to make it faster?
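For reference, the problem as stated would look roughly like this in CVX. This is only a sketch: it assumes the second constraint means the inner product d'*x <= delta, and that n, A, b, d, and delta are already defined in the workspace.

% Sketch of the stated problem in CVX (Matlab).
% Assumes "x d <= delta" means the inner product d'*x <= delta.
cvx_begin
    variable x(n)
    minimize( norm(A*x - b, 2) )
    subject to
        x >= 0
        d'*x <= delta
cvx_end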


'norm(Ax,b)' looks odd to me. Did you mean 'norm(Ax-b,2)'? –


What does 'x.d' mean? – littleO


d is another vector. Ideally I want x and d to be orthogonal, and that is controlled by the delta value. – Erin

Answer


I don't know what x.d <= delta means, so I'm just going to assume it should be x <= delta.

You can solve this problem with the projected gradient method, or with an accelerated projected gradient method (a slight modification of the projected gradient method that "magically" converges much faster). Here is some Python code showing how to minimize .5*||Ax - b||^2 subject to the constraint 0 <= x <= delta using FISTA, which is an accelerated projected gradient method. More details about the projected gradient method and FISTA can be found, for example, in Boyd's manuscript on proximal algorithms.

import numpy as np 
import matplotlib.pyplot as plt 

def fista(gradf, proxg, evalf, evalg, x0, params):
    # FISTA (accelerated projected gradient method) with backtracking line search

    maxIter = params['maxIter']
    t = params['stepSize']  # initial step size
    showTrigger = params['showTrigger']

    increaseFactor = 1.25
    decreaseFactor = .5

    costs = np.zeros((maxIter, 1))

    xkm1 = np.copy(x0)
    vkm1 = np.copy(x0)

    for k in range(1, maxIter + 1):

        costs[k - 1] = evalf(xkm1) + evalg(xkm1)
        if k % showTrigger == 0:
            print("Iteration: " + str(k) + " cost: " + str(costs[k - 1]))

        # Tentatively increase the step size, then backtrack until the
        # sufficient-decrease condition below is satisfied.
        t = increaseFactor * t

        acceptFlag = False
        while not acceptFlag:
            # Momentum parameter theta; for k > 1 it solves
            # tkm1*theta^2 + t*thetakm1^2*theta - t*thetakm1^2 = 0.
            if k == 1:
                theta = 1
            else:
                a = tkm1
                b = t * (thetakm1 ** 2)
                c = -t * (thetakm1 ** 2)
                theta = (-b + np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)

            # Extrapolated point, gradient step, then projection (prox of g).
            y = (1 - theta) * xkm1 + theta * vkm1
            (gradf_y, fy) = gradf(y)
            x = proxg(y - t * gradf_y, t)
            fx = evalf(x)
            if fx <= fy + np.vdot(gradf_y, x - y) + (.5 / t) * np.sum((x - y) ** 2):
                acceptFlag = True
            else:
                t = decreaseFactor * t

        tkm1 = t
        thetakm1 = theta
        vkm1 = xkm1 + (1 / theta) * (x - xkm1)
        xkm1 = x

    return (xkm1, costs)


if __name__ == '__main__':

    # Build a random test problem: b = A*xTrue + noise, with 0 <= xTrue <= delta.
    delta = 5.0
    numRows = 300
    numCols = 50
    A = np.random.randn(numRows, numCols)
    ATrans = np.transpose(A)
    xTrue = delta * np.random.rand(numCols, 1)
    b = np.dot(A, xTrue)
    noise = .1 * np.random.randn(numRows, 1)
    b = b + noise

    # Smooth part: f(x) = .5*||Ax - b||^2.
    def evalf(x):
        AxMinusb = np.dot(A, x) - b
        val = .5 * np.sum(AxMinusb ** 2)
        return val

    # Gradient of f: A^T (Ax - b); the function value is returned as well.
    def gradf(x):
        AxMinusb = np.dot(A, x) - b
        grad = np.dot(ATrans, AxMinusb)
        val = .5 * np.sum(AxMinusb ** 2)
        return (grad, val)

    # g is the indicator function of the box 0 <= x <= delta: it is 0 at feasible
    # points, and its prox (independent of t) is projection onto the box.
    def evalg(x):
        return 0.0

    def proxg(x, t):
        return np.maximum(np.minimum(x, delta), 0.0)

    x0 = np.zeros((numCols, 1))
    params = {'maxIter': 500, 'stepSize': 1.0, 'showTrigger': 5}
    (x, costs) = fista(gradf, proxg, evalf, evalg, x0, params)

    # Compare the recovered x with xTrue, and plot the cost decrease.
    plt.figure()
    plt.plot(x)
    plt.plot(xTrue)

    plt.figure()
    plt.semilogy(costs)
    plt.show()