Neurolab newff output range and inconsistent results from the network

I noticed that newff's output is fixed to the range [-1, 1]. I ran the test below; how should I get the network to produce outputs outside that range?
import neurolab as nl
import numpy as np

# Create training samples
x = np.linspace(-7, 7, 20)
y = x * 10
size = len(x)
inp = x.reshape(size, 1)
tar = y.reshape(size, 1)

# Normalize inputs and targets to [0, 1]
norm_inp = nl.tool.Norm(inp)
inp = norm_inp(inp)
norm_tar = nl.tool.Norm(tar)
tar = norm_tar(tar)

# Create a feed-forward network with 2 layers and randomly initialized
# weights. Since inp was normalized, the input range is set to [0, 1]
# (by the way, I don't know how to normalize it to [-1, 1]).
net = nl.net.newff([[0, 1]], [5, 1])

# Train the network
error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

# Simulate the network and map the output back to the original target range
out = norm_tar.renorm(net.sim([[0.21052632]]))
print("final output:-----------------")
print(out)
inp before normalization:
[[-7. ]
[-6.26315789]
[-5.52631579]
[-4.78947368]
[-4.05263158]
[-3.31578947]
[-2.57894737]
[-1.84210526]
[-1.10526316]
[-0.36842105]
[ 0.36842105]
[ 1.10526316]
[ 1.84210526]
[ 2.57894737]
[ 3.31578947]
[ 4.05263158]
[ 4.78947368]
[ 5.52631579]
[ 6.26315789]
[ 7. ]]
tar before normalization:
[[-70. ]
[-62.63157895]
[-55.26315789]
[-47.89473684]
[-40.52631579]
[-33.15789474]
[-25.78947368]
[-18.42105263]
[-11.05263158]
[ -3.68421053]
[ 3.68421053]
[ 11.05263158]
[ 18.42105263]
[ 25.78947368]
[ 33.15789474]
[ 40.52631579]
[ 47.89473684]
[ 55.26315789]
[ 62.63157895]
[ 70. ]]
I expected that, after renormalization, the input 0.21052632 would map back to roughly -40. But the result is not reproducible: sometimes it is correct (about -40), and sometimes it is wrong (it comes out as -70). I am puzzled why the training result is unstable. Is there a better way to train the network so that it can produce output values outside the range [-1, 1]?
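On the inline question about scaling to [-1, 1]: `nl.tool.Norm` maps to [0, 1], but a plain min-max rescale can target any interval and be inverted afterwards. This is a sketch in plain NumPy, not part of neurolab's API; the helper names `scale` and `renorm` are my own.

```python
import numpy as np

def scale(a, lo=-1.0, hi=1.0):
    """Min-max scale an array to [lo, hi]; also return a function
    that maps scaled values back to the original range."""
    a_min, a_max = a.min(), a.max()
    scaled = (a - a_min) / (a_max - a_min) * (hi - lo) + lo
    def renorm(s):
        return (s - lo) / (hi - lo) * (a_max - a_min) + a_min
    return scaled, renorm

x = np.linspace(-7, 7, 20).reshape(-1, 1)
x_scaled, back = scale(x)               # x_scaled lies in [-1, 1]
print(x_scaled.min(), x_scaled.max())   # -1.0 1.0
print(back(x_scaled)[0, 0])             # -7.0
```

The returned `back` closure plays the same role as `norm_tar.renorm` in the question: it undoes the scaling after simulation.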
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. – apaul
Thank you @apaul34208, this is the first time I have answered a question. I have made some changes to it. –
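On the instability: as far as I can tell, `newff` draws its initial weights from NumPy's global random state, so each run starts from a different point and gradient training can land in a different local minimum (this is an assumption about neurolab's internals, not documented behavior I have verified). Seeding the RNG before building the network should at least make runs reproducible. The sketch below illustrates the effect of seeding on random weight draws using plain NumPy only:

```python
import numpy as np

def draw_weights(seed):
    # Stand-in for a random weight initialization like newff performs.
    np.random.seed(seed)
    return np.random.uniform(-0.5, 0.5, size=(5, 1))

w1 = draw_weights(0)
w2 = draw_weights(0)   # same seed -> identical "initial weights"
w3 = draw_weights(1)   # different seed -> different starting point
print(np.array_equal(w1, w2))  # True
print(np.array_equal(w1, w3))  # False
```

In the original script, calling `np.random.seed(...)` once before `nl.net.newff(...)` would pin the starting weights the same way, making it easier to tell a bad initialization apart from a bug in the normalization.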