我正在写这个代码用于学习ANN(多层反向传播)的过程,但是学习的结果非常糟糕,它在任何时候都不是1我知道我们不能提供任何保证让学习成功,但我想知道我是否在此代码中出现错误,或者如果我可以使这些步骤具有更高的性能。在Matlab中学习ANN(多层反向传播)
步骤:
1-上传我的数据集
2-选择从225 170行的学习和其余50行测试(随机地)
3-创建用于输入权重和隐藏层随机和1之间的0
4- -1之间,用于隐藏层和输出层创建偏置随机和1
5-找到输出的每一行的每个输出
6-查找误差然后在每次迭代
8-每个隐藏层
7-更新权重和偏置阵列计算平方误差(MSE)的总和在每次迭代。
每个输出总是在0.2到0.5之间的结果也不适用于所需的输出。 这是什么在我的逻辑或在我的代码在这里可能的错误!
注: 170行(225行与108列,并用25个结果如类设定我使用的数据)用于学习 55行测试
2- 50000次迭代
1-3-学习率0.3
4-动量= 0.7
5-隐藏层NE。没有= 90
代码:
%Initialize the weight matrices with random weights
V = rand(inlayer,hlayer); % Weight matrix from Input to Hidden between [0,1]
W = rand(hlayer,olayer); % Weight matrix from Hidden to Output between [0,1]
%Initialize the theta matrices for hidden and output layers
Thetahidden = randi(1,hlayer);
Thetaoutput = randi(1,olayer);
for i=1:iteration
for j=1:170 % depends on training data set
%This for output between input-hidden
for h=1:hlayer % depends on neuron number at hidden layer
sum = 0;
for k=1:108 % depends on column number
sum = sum + (V(k,h)* trainingdata(j,k));
end
H(h) = sum + Thetahidden(h);
Oh(h) = 1/(1+exp(-H(h)));
end
%This for output between hidden-output
for o=1:olayer % depends on number of output layer
sumO = 0;
for hh=1:hlayer
sumO = sumO+W(hh,o)*Oh(hh);
end
O(o)=sumO + Thetaoutput(o);
OO(o) = 1/(1+exp(-O(o)));
finaloutputforeachrow(j,o)= OO(o);
end
% Store real value of real output
for r=1:170
for o=1:olayer
i=outputtrainingdata(r);
if i == o
RO(r,o)=1;
else
RO(r,o)=0;
end
end
end
sumerror =0;
% Compute Error (output layer)
for errorout=1:olayer
lamdaout(errorout) = OO(errorout)*(1-OO(errorout))*(RO(j,errorout)-OO(errorout));
errorrate = RO(j,errorout)-OO(errorout);
sumerror = sumerror+(errorrate^2);
FinalError(j,errorout) = errorrate;
% Compute Error (hidden layer)
ersum=0;
for errorh=1:hlayer
ersum= lamdaout(errorout)*W(errorh,errorout);
lamdahidden(errorh)= Oh(errorh)*(1-Oh(errorh))*ersum;
end
FinalSumError(j) = (1/2)*sumerror;
end
%update weights between input and hidden layer
for h=1:hlayer
for k=1:108
deltaw(k,h) = learningrate*lamdahidden(h)*trainingdata(j,k);
V(k,h) = (m*V(k,h)) + deltaw(k,h);
end
end
%update weights/Theta between hidden and output layer
for h=1:hlayer
for outl=1:olayer
%weight
deltaw2(h,outl) = learningrate * lamdaout(outl)*Oh(h);
W(h,outl)= (m*W(h,outl))+deltaw2(h,outl);
end
end
for h=1:hlayer
%Theta-Hidden
deltaHiddenTh(h) = learningrate * lamdahidden(h);
Thetahidden(h) = (m*Thetahidden(h)) + deltaHiddenTh(h);
end
for outl=1:olayer
%Theta-Output
deltaOutputTh(outl) = learningrate * lamdaout(outl);
Thetaoutput(outl) = (m*Thetaoutput(outl)) + deltaOutputTh(outl);
end
end
end