scipy minimize inequality constraint function

I need to constrain my loss so that the predictions are always positive. So I have:
import math
import numpy as np
from scipy.optimize import minimize

x = [1.0, 0.64, 0.36, 0.3, 0.2]
y = [1.0, 0.5, 0.4, -0.1, -0.2]
alpha = 0

def loss_new_scipy(w, x, y, alpha):
    loss = 0.0
    for y_i, x_i in zip(y, x):
        loss += (y_i - np.dot(w, x_i)) ** 2
    return loss + alpha * math.sqrt(np.dot(w, w))

res = minimize(loss_new_scipy, 0.0, args=(x, y, alpha))
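For reference, with alpha = 0 this is plain one-parameter least squares, whose minimizer has the closed form w = (x·y)/(x·x), so the result of minimize can be checked directly. A minimal, self-contained sketch (same data as above, with the imports spelled out):

```python
import math
import numpy as np
from scipy.optimize import minimize

x = [1.0, 0.64, 0.36, 0.3, 0.2]
y = [1.0, 0.5, 0.4, -0.1, -0.2]
alpha = 0

def loss_new_scipy(w, x, y, alpha):
    # Sum of squared residuals plus an L2 penalty (alpha = 0 disables it).
    loss = sum((y_i - np.dot(w, x_i)) ** 2 for y_i, x_i in zip(y, x))
    return loss + alpha * math.sqrt(np.dot(w, w))

res = minimize(loss_new_scipy, 0.0, args=(x, y, alpha))

# With alpha = 0, the analytic least-squares minimizer is w = (x.y)/(x.x).
w_closed = np.dot(x, y) / np.dot(x, x)
print(res.x[0], w_closed)
```

The two values should agree to the solver's tolerance, which confirms the loss is wired up correctly before any constraints are added.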
Now I want to add the constraint, but most of the constraint examples I found keep x between bounds, rather than requiring np.dot(w, x) >= 0.
What would such a constraint look like?
EDIT: I want to use the constraints argument of the scipy.optimize.minimize function, so I think it should look something like this:
def con(w, x):
    loss = 0.0
    for x_i in x:
        loss += np.dot(w, x_i)
    return loss

cons = ({'type': 'ineq', 'fun': con},)
res = minimize(loss_new_scipy, 0.0, args=(x, y, alpha), constraints=cons)
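As a side note on the convention: for a constraint of type 'ineq', SciPy requires fun to return a value that is kept non-negative at the solution, i.e. fun(w) >= 0. A minimal, self-contained sketch with toy numbers (not the problem above) to illustrate:

```python
import numpy as np
from scipy.optimize import minimize

# 'ineq' means the optimizer enforces fun(w) >= 0 at the solution.
# Here: minimize (w - 1)^2 subject to w >= 2, written as fun = w - 2.
def obj(w):
    return (w[0] - 1.0) ** 2

cons = ({'type': 'ineq', 'fun': lambda w: w - 2.0},)
res = minimize(obj, x0=np.array([0.0]), constraints=cons)
print(res.x)  # the unconstrained minimum is 1.0, so the constraint is active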
Also, I removed some parts for simplicity.
EDIT2: I changed my problem as follows: the constraint is now that w * x must be greater than 1, and I also changed the targets to all negatives. I also changed the args, so it now runs:
import math
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 0.64, 0.36, 0.3, 0.2])
y = [-1.0, -0.5, -0.4, -0.1, -0.2]
alpha = 0

def con(w, x, y, alpha):
    print(np.array(w * x))
    return np.array((w * x) - 1).sum()

cons = ({'type': 'ineq', 'fun': con, 'args': (x, y, alpha)},)

def loss_new_scipy(w, x, y, alpha):
    loss = 0.0
    for y_i, x_i in zip(y, x):
        loss += (y_i - np.dot(w, x_i)) ** 2
    return loss + alpha * math.sqrt(np.dot(w, w))

res = minimize(loss_new_scipy, np.array([1.0]), args=(x, y, alpha), constraints=cons)
print(res)
But unfortunately, the result for w is 2.0, which is indeed positive, and it looks like the constraint helped, since w is pulled away from the value that best fits the targets. However, the predictions w * x are not all above 1.0.
EDIT3: I just realized that the sum of my predictions minus 1 equals 0, but I want each individual prediction to be greater than 1.0. So with w = 2.0,

w*x = [ 2.00000001  1.28000001  0.72  0.6  0.4 ]

and

(w*x) - 1 = [ 1.00000001  0.28000001  -0.28  -0.4  -0.6 ]

whose sum equals 0.0. But I want all the predictions w*x to be greater than 1.0, so all 5 values in w*x should be at least 1.0.
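If it helps, one way to express a per-element condition instead of a summed one: an 'ineq' constraint function may return an array, and (at least with the SLSQP and COBYLA methods) each component is then required to be >= 0 separately. A sketch under that assumption, returning the vector w*x - 1 without the .sum():

```python
import math
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, 0.64, 0.36, 0.3, 0.2])
y = [-1.0, -0.5, -0.4, -0.1, -0.2]
alpha = 0

def loss_new_scipy(w, x, y, alpha):
    loss = sum((y_i - np.dot(w, x_i)) ** 2 for y_i, x_i in zip(y, x))
    return loss + alpha * math.sqrt(np.dot(w, w))

def con(w, x, y, alpha):
    # Return the whole vector: each element of w*x - 1 is treated as a
    # separate inequality, i.e. every prediction w*x_i must be >= 1.0.
    return w * x - 1.0

cons = ({'type': 'ineq', 'fun': con, 'args': (x, y, alpha)},)
res = minimize(loss_new_scipy, np.array([1.0]), args=(x, y, alpha),
               constraints=cons, method='SLSQP')
print(res.x, res.x * x)  # the binding constraint comes from the smallest x_i
```

Since the smallest input here is 0.2, the constraint w*0.2 >= 1 forces w up to 5, and all five predictions end up at or above 1.0.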