
Dataset source: https://archive.ics.uci.edu/ml/datasets/wine. Why does a 50-50 train/test split work best for this neural network on a dataset of 178 observations?

Full source code (requires NumPy, Python 3): https://github.com/nave01314/NNClassifier

From what I have read, a split of roughly 80% training / 20% validation data is close to optimal. As the test set grows, the variance of the validation results should decrease, at the cost of less effective training (lower validation accuracy).

I am therefore confused that the results below seem to show the best accuracy and the lowest variance at TEST_SIZE=0.5 (each configuration was run several times, and one trial was chosen to represent each test size).
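
For reference, here is a minimal sketch of how the same comparison could be reproduced independently, using scikit-learn's bundled copy of the UCI Wine data and an off-the-shelf classifier as a stand-in for the custom network (none of this is the code under test; the classifier, scaler, and number of repeats are assumptions):

# Hedged sketch: repeat several random splits per test size and report mean/std accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)  # 178 samples, 13 features, 3 classes

for test_size in (0.1, 0.5, 0.9):
    accs = []
    for seed in range(5):  # five random splits per test size
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed, stratify=y)
        scaler = StandardScaler().fit(X_tr)  # fit scaling on the training split only
        clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)
        accs.append(clf.score(scaler.transform(X_te), y_te))
    print('test_size=%.1f: mean accuracy %.3f, std %.3f'
          % (test_size, np.mean(accs), np.std(accs)))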

TEST_SIZE=0.1: this should train effectively thanks to the large training set, but with larger variance (5 trials ranged between 16% and 50% accuracy).

Epoch  0, Loss 0.021541, Targets [ 1. 0. 0.], Outputs [ 0.979 0.011 0.01 ], Inputs [ 0.086 0.052 0.08 0.062 0.101 0.093 0.107 0.058 0.108 0.08 0.084 0.115 0.104] 
Epoch 100, Loss 0.001154, Targets [ 0. 0. 1.], Outputs [ 0.  0.001 0.999], Inputs [ 0.083 0.099 0.084 0.079 0.085 0.061 0.02 0.103 0.038 0.083 0.078 0.053 0.067] 
Epoch 200, Loss 0.000015, Targets [ 0. 0. 1.], Outputs [ 0. 0. 1.], Inputs [ 0.076 0.092 0.087 0.107 0.077 0.063 0.02 0.13 0.054 0.106 0.054 0.051 0.086] 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
50.0% overall accuracy for validation set. 

TEST_SIZE=0.5: this should work reasonably well (accuracy between the other two cases), yet for some reason 5 trials ranged between 92% and 97% accuracy.

Epoch  0, Loss 0.547218, Targets [ 1. 0. 0.], Outputs [ 0.579 0.087 0.334], Inputs [ 0.106 0.08 0.142 0.133 0.129 0.115 0.127 0.13 0.12 0.068 0.123 0.126 0.11 ] 
Epoch 100, Loss 0.002716, Targets [ 0. 1. 0.], Outputs [ 0.003 0.997 0. ], Inputs [ 0.09 0.059 0.097 0.114 0.088 0.108 0.102 0.144 0.125 0.036 0.186 0.113 0.054] 
Epoch 200, Loss 0.002874, Targets [ 0. 1. 0.], Outputs [ 0.003 0.997 0. ], Inputs [ 0.102 0.067 0.088 0.109 0.088 0.097 0.091 0.088 0.092 0.056 0.113 0.141 0.089] 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 0 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 0 
Target Class 1, Predicted Class 1 
97.75280898876404% overall accuracy for validation set. 

TEST_SIZE=0.9: this should generalize poorly because of the small training sample; 5 trials ranged between 38% and 54% accuracy.

Epoch  0, Loss 2.448474, Targets [ 0. 0. 1.], Outputs [ 0.707 0.206 0.086], Inputs [ 0.229 0.421 0.266 0.267 0.223 0.15 0.057 0.33 0.134 0.148 0.191 0.12 0.24 ] 
Epoch 100, Loss 0.017506, Targets [ 1. 0. 0.], Outputs [ 0.983 0.017 0. ], Inputs [ 0.252 0.162 0.274 0.255 0.241 0.275 0.314 0.175 0.278 0.135 0.286 0.36 0.281] 
Epoch 200, Loss 0.001819, Targets [ 0. 0. 1.], Outputs [ 0.002 0.  0.998], Inputs [ 0.245 0.348 0.248 0.274 0.284 0.153 0.167 0.212 0.191 0.362 0.145 0.125 0.183] 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 2, Predicted Class 2 
Target Class 0, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 0, Predicted Class 1 
Target Class 1, Predicted Class 1 
Target Class 2, Predicted Class 2 
64.59627329192547% overall accuracy for validation set. 
The main functions are shown in the snippets below:

Importing and splitting the dataset

import numpy as np 
from sklearn.preprocessing import normalize 
from sklearn.model_selection import train_test_split 


def readInput(filename, delimiter, inputlen, outputlen, categories, test_size):
    def onehot(num, categories):
        # Class labels in the file are 1-based; map label k to a one-hot vector.
        arr = np.zeros(categories)
        arr[int(num[0]) - 1] = 1
        return arr

    inputs = []
    outputs = []
    with open(filename) as file:
        for line in file:
            values = list(map(float, line.split(delimiter)))
            assert len(values) == inputlen + outputlen
            outputs.append(onehot(values[:outputlen], categories))
            inputs.append(values[outputlen:outputlen + inputlen])
    inputs = np.array(inputs)
    outputs = np.array(outputs)

    inputs_train, inputs_val, outputs_train, outputs_val = train_test_split(
        inputs, outputs, test_size=test_size)
    assert len(inputs_train) > 0
    assert len(inputs_val) > 0

    # Note: the training and validation inputs are normalized separately, each
    # column being scaled to unit L2 norm within its own split.
    return (normalize(inputs_train, axis=0), outputs_train,
            normalize(inputs_val, axis=0), outputs_val)
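
As a side note on the normalize call used above: sklearn.preprocessing.normalize(X, axis=0) rescales each feature column to unit L2 norm, so the magnitude of individual entries depends on how many rows the normalized array has. A tiny illustration (the toy arrays here are assumptions, not the Wine data):

# Column-wise L2 normalization gives different per-entry scales for arrays of different heights.
import numpy as np
from sklearn.preprocessing import normalize

big = np.ones((160, 2))    # e.g. a large training split
small = np.ones((18, 2))   # e.g. a small validation split

print(normalize(big, axis=0)[0])    # each entry is about 1/sqrt(160), roughly 0.079
print(normalize(small, axis=0)[0])  # each entry is about 1/sqrt(18), roughly 0.236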

Some parameters

import numpy as np 
import helper 

FILE_NAME = 'data2.csv' 
DATA_DELIM = ',' 
ACTIVATION_FUNC = 'tanh' 
TESTING_FREQ = 100 
EPOCHS = 200 
LEARNING_RATE = 0.2 
TEST_SIZE = 0.9 

INPUT_SIZE = 13 
HIDDEN_LAYERS = [5] 
OUTPUT_SIZE = 3 

Main program flow (Classifier methods, followed by the driver code)

    def step(self, x, targets, lrate):
        # One online training step on a single example.
        self.forward_propagate(x)
        self.backpropagate_errors(targets)
        self.adjust_weights(x, lrate)

    def test(self, epoch, x, target):
        predictions = self.forward_propagate(x)
        print('Epoch %5i, Loss %f, Targets %s, Outputs %s, Inputs %s'
              % (epoch, helper.crossentropy(target, predictions), target, predictions, x))

    def train(self, inputs, targets, epochs, testfreq, lrate):
        xindices = list(range(len(inputs)))
        for epoch in range(epochs):
            np.random.shuffle(xindices)  # new example order every epoch
            if epoch % testfreq == 0:
                self.test(epoch, inputs[xindices[0]], targets[xindices[0]])
            for i in xindices:
                self.step(inputs[i], targets[i], lrate)
        self.test(epochs, inputs[xindices[0]], targets[xindices[0]])

    def validate(self, inputs, targets):
        correct = 0
        targets = np.argmax(targets, axis=1)  # one-hot targets back to class indices
        for i in range(len(inputs)):
            prediction = np.argmax(self.forward_propagate(inputs[i]))
            if prediction == targets[i]:
                correct += 1
            print('Target Class %s, Predicted Class %s' % (targets[i], prediction))
        print('%s%% overall accuracy for validation set.' % (correct / len(inputs) * 100))


np.random.seed() 

inputs_train, outputs_train, inputs_val, outputs_val = helper.readInput(FILE_NAME, DATA_DELIM, inputlen=INPUT_SIZE, outputlen=1, categories=OUTPUT_SIZE, test_size=TEST_SIZE) 
nn = Classifier([INPUT_SIZE] + HIDDEN_LAYERS + [OUTPUT_SIZE], ACTIVATION_FUNC) 

nn.train(inputs_train, outputs_train, EPOCHS, TESTING_FREQ, LEARNING_RATE) 

nn.validate(inputs_val, outputs_val) 

An 80/20 split is not optimal in all cases. It depends on your data. It would help to test your assumption again, but this time shuffle the dataset. – ninesalt


Unfortunately, questions like this have no definitive answer, especially without access to the data. –


Coldspeed, I have provided a dataset (edited). Swailem95, the dataset is shuffled every epoch and, I believe, before splitting (see http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) –
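
For what it is worth, a quick way to confirm that train_test_split shuffles before splitting by default (shuffle=True) is a toy call such as this sketch:

# Toy check: the returned train/test pieces come from a shuffled ordering of the input.
from sklearn.model_selection import train_test_split

train, test = train_test_split(list(range(10)), test_size=0.3, random_state=0)
print(train, test)  # indices appear in shuffled order rather than 0..9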

Answer


1) Your sample size is very small. You have 13 dimensions and only 178 samples. Since you need to fit the parameters of your neural network (13 inputs, a 5-unit hidden layer, 3 outputs), there simply is not enough data no matter how you split it. Your model is too complex for the amount of data you have, which leads to overfitting. That means the model does not generalize well, will not give good results in general, and will not give stable results.
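
To put rough numbers on that, a quick sketch of the parameter count implied by the configuration shown in the question (weights plus biases for a fully connected 13-5-3 network):

# Rough count of trainable parameters for a fully connected 13-5-3 network.
layers = [13, 5, 3]
params = sum((fan_in + 1) * fan_out for fan_in, fan_out in zip(layers, layers[1:]))
print(params)       # 88 weights and biases
print(178 * 0.5)    # about 89 training samples with a 50/50 split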

2) There will always be some discrepancy between the training and test datasets. In your case, because the sample size is so small, whether the statistics of your test and training data happen to agree is largely a matter of chance.

3) When you split 90-10, your test set has only about 18 samples. You cannot learn much from so few trials; it can hardly be called "statistics". Try different splits and your results will change as well (the phenomenon you have already observed, as I noted above regarding stability).
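
As an illustration of how noisy an accuracy estimate from such a small held-out set is, here is a back-of-the-envelope sketch (assuming independent samples and an assumed true accuracy of 90%):

# Rough binomial standard error of an accuracy estimate on a tiny test set.
import math

n_test = 18   # roughly 10% of 178 samples
p = 0.9       # assumed true accuracy
stderr = math.sqrt(p * (1 - p) / n_test)
print('about +/- %.1f percentage points (one standard deviation)' % (100 * stderr))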

4) Always compare your classifier's performance against a random classifier. In your case with 3 classes, you should at least do better than 33%.
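
A minimal sketch of such a baseline check, assuming scikit-learn's DummyClassifier on the bundled Wine data as the reference:

# Baseline sanity check: trivial classifiers to beat before trusting any result.
from sklearn.datasets import load_wine
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

for strategy in ('uniform', 'most_frequent'):
    baseline = DummyClassifier(strategy=strategy, random_state=0)
    scores = cross_val_score(baseline, X, y, cv=5)
    print('%s baseline accuracy: %.2f' % (strategy, scores.mean()))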

5) Read about cross-validation and leave-one-out.
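
For example, a short sketch of 10-fold cross-validation and leave-one-out with scikit-learn, again using the bundled Wine data and a simple stand-in model rather than the custom network:

# Cross-validation and leave-one-out with a simple stand-in model.
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

kfold_scores = cross_val_score(model, X, y, cv=10)           # 10 folds of ~18 samples
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())  # 178 single-sample folds
print('10-fold mean accuracy: %.3f' % kfold_scores.mean())
print('leave-one-out accuracy: %.3f' % loo_scores.mean())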
