
I have written a simple artificial neural network in Java as part of a project. When I begin training on the data (using a training set I gathered), the error count in each epoch quickly stabilises (at around 30% accuracy) and then stops improving. When testing the ANN, every output is exactly the same for any given input - the network returns the same output for every input.

I am trying to output a number between 0 and 1 (0 classifying the stock as a loser and 1 classifying it as a riser - 0.4 to 0.6 should indicate stability).
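
To show how I intend to read that single output, here is a minimal sketch (the helper name and the exact cut-offs are only illustrative, not part of my network code):

//Illustrative only - maps the single sigmoid output onto the three intended classes
public static String classifyOutput(double output) {
    if (output <= 0.4) {
        return "LOSER";  //stock expected to fall
    } else if (output >= 0.6) {
        return "RISER";  //stock expected to rise
    }
    return "STABLE";     //0.4 - 0.6 indicates stability
}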

When a comparable ANN was created in RapidMiner Studio on the same training data, the accuracy was much higher (70+%), so I know the dataset is fine. There must be some problem in my ANN logic.

Below is the code that runs the network and tunes the weights. Any and all help is appreciated!

public double[] Run(double[] inputs) {
    //INPUTS
    for (int i = 0; i < inputNeurons.length; i++) {
        inputNeurons[i] = inputs[i];
    }

    for (int i = 0; i < hiddenNeurons.length; i++) {
        hiddenNeurons[i] = 0;
    } //RESET THE HIDDEN NEURONS

    for (int e = 0; e < inputNeurons.length; e++) {
        for (int i = 0; i < hiddenNeurons.length; i++) {
            //Looping through each input neuron connected to each hidden neuron

            hiddenNeurons[i] += inputNeurons[e] * inputWeights[(e * hiddenNeurons.length) + i];
            //Summation (with the adding of neurons) - Done by taking the sum of each (input * connection weight)
            //The more weighting a neuron has the more "important" it is in decision making
        }
    }

    for (int j = 0; j < hiddenNeurons.length; j++) {
        hiddenNeurons[j] = 1 / (1 + Math.exp(-hiddenNeurons[j]));
        //Sigmoid function transforms the sum into a real number between 0 and 1
    }

    //HIDDEN
    for (int i = 0; i < outputNeurons.length; i++) {
        outputNeurons[i] = 0;
    } //RESET THE OUTPUT NEURONS

    for (int e = 0; e < hiddenNeurons.length; e++) {
        for (int i = 0; i < outputNeurons.length; i++) {
            //Looping through each hidden neuron connected to each output neuron

            outputNeurons[i] += hiddenNeurons[e] * hiddenWeights[(e * outputNeurons.length) + i];
            //Summation (with the adding of neurons) as above
        }
    }

    for (int j = 0; j < outputNeurons.length; j++) {
        outputNeurons[j] = 1 / (1 + Math.exp(-outputNeurons[j])); //Sigmoid function as above
    }

    double[] outputs = new double[outputNeurons.length];
    for (int j = 0; j < outputNeurons.length; j++) {
        //Places all output neuron values into an array
        outputs[j] = outputNeurons[j];
    }
    return outputs;
}

public double[] CalculateErrors(double[] targetValues) {
    //Compares the target values to the actual output values
    for (int k = 0; k < outputErrors.length; k++) {
        outputErrors[k] = targetValues[k] - outputNeurons[k];
    }
    return outputErrors;
}

public void tuneWeights() //Back Propagation
{
    // Start from the end - From output to hidden
    for (int p = 0; p < this.hiddenNeurons.length; p++)  //For all Hidden Neurons
    {
        for (int q = 0; q < this.outputNeurons.length; q++) //For all Output Neurons
        {
            double delta = this.outputNeurons[q] * (1 - this.outputNeurons[q]) * this.outputErrors[q];
            //DELTA is the error for the output neuron q
            this.hiddenWeights[(p * outputNeurons.length) + q] += this.learningRate * delta * this.hiddenNeurons[p];
            /*Adjust the particular weight in proportion to the error:
             *a positive error increases the weight, a negative error decreases it,
             *and the size of the change scales with the size of the error
             */
        }
    }

    // From hidden to inputs -- Same as above
    for (int i = 0; i < this.inputNeurons.length; i++)  //For all Input Neurons
    {
        for (int j = 0; j < this.hiddenNeurons.length; j++) //For all Hidden Neurons
        {
            double delta = this.hiddenNeurons[j] * (1 - this.hiddenNeurons[j]);
            double x = 0;  //We do not have output errors here so we must use extra data from Output Neurons
            for (int k = 0; k < this.outputNeurons.length; k++) {
                double outputDelta = this.outputNeurons[k] * (1 - this.outputNeurons[k]) * this.outputErrors[k];
                //We calculate the output delta again
                x = x + outputDelta * this.hiddenWeights[(j * outputNeurons.length) + k];
                //We then calculate the error based on the hidden weights (x is used to add the error values of all weights)
                delta = delta * x;
            }
            this.inputWeights[(i * hiddenNeurons.length) + j] += this.learningRate * delta * this.inputNeurons[i];
            //Adjust weight like above
        }
    }
}

How do you initialise the weights? Aren't they all 0 at the start? –


Tried that with the same results; they are currently randomly initialised between -1 and 1. –


If you initialise them with 0 you can get exactly this effect (i.e. your network just gets stuck in that configuration). Does it work now (after the random initialisation)? –
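
For reference, the random initialisation mentioned in these comments could look roughly like this (the field names inputWeights and hiddenWeights come from the question's code; the Random instance and the helper method itself are assumptions):

//Rough sketch of the random initialisation described in the comments above.
//inputWeights and hiddenWeights are the weight arrays from the question;
//the Random instance and this helper method are assumptions.
private final java.util.Random random = new java.util.Random();

private void initialiseWeights() {
    for (int i = 0; i < inputWeights.length; i++) {
        inputWeights[i] = (random.nextDouble() * 2) - 1;  //uniform in [-1, 1)
    }
    for (int i = 0; i < hiddenWeights.length; i++) {
        hiddenWeights[i] = (random.nextDouble() * 2) - 1; //uniform in [-1, 1)
    }
}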

Answer


After our long conversation, I think you can find an answer to your question in the following points:

  1. Bias is really important. In fact, one of the most popular SO questions about neural networks is about bias :) : Role of Bias in Neural Networks (a rough sketch of adding a bias term to your forward pass is shown after this list).
  2. You should take care of your learning process. It is good to track the accuracy on a test and validation set and to use an appropriate learning rate during training. I would advise you to start with a simple dataset where you know it is easy to find a true solution (e.g. a triangle or a square - then use 4-5 hidden units). I also advise you to use the following playground:

http://playground.tensorflow.org/#activation=tanh&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=0&networkShape=4,2&seed=0.36368&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification
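
To make point 1 concrete, here is a rough sketch of how the hidden-layer part of your Run() method could include a bias per hidden neuron. The hiddenBiases array is an assumption - it does not exist in your code, and its absence is exactly what point 1 is about:

//Sketch: the hidden-layer summation from Run() with an added bias per hidden neuron.
//hiddenBiases is a new double[] field (same length as hiddenNeurons) and is an
//assumption - it does not exist in the original code.
private void forwardHiddenWithBias() {
    for (int i = 0; i < hiddenNeurons.length; i++) {
        hiddenNeurons[i] = hiddenBiases[i];  //start from the bias instead of 0
    }
    for (int e = 0; e < inputNeurons.length; e++) {
        for (int i = 0; i < hiddenNeurons.length; i++) {
            hiddenNeurons[i] += inputNeurons[e] * inputWeights[(e * hiddenNeurons.length) + i];
        }
    }
    for (int j = 0; j < hiddenNeurons.length; j++) {
        hiddenNeurons[j] = 1 / (1 + Math.exp(-hiddenNeurons[j]));  //sigmoid as in the question
    }
}

The same idea applies to the output layer, and the biases have to be trained as well: in tuneWeights() each bias would get learningRate * delta added to it, using the same delta as for that neuron's incoming weights, just without the input factor.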