2016-07-25 32 views

Why is a neural network failing in a simple classification case?

I have the following code, which builds a simple rule-based classification dataset on which the neural network fails:

# # Data preparation 
data = data.frame(A = round(runif(100)), B = round(runif(100)), C = round(runif(100))) 
# Y - is the classification output column 
data$Y = ifelse((data$A == 1 & data$B == 1 & data$C == 0), 1, 
         ifelse((data$A == 0 & data$B == 1 & data$C == 1), 1, 
         ifelse((data$A == 0 & data$B == 0 & data$C == 0), 1, 0))) 
# Shuffling the data set 
data = data[sample(rownames(data)), ] 

I split the dataset into train and test sets so that I can validate my results on the test set:

# # Divide into train and test 
library(caret) 
trainIndex = createDataPartition(data[, "Y"], p = .7, list = FALSE, times = 1) # for balanced sampling 
train = data[trainIndex, ] 
test = data[-trainIndex, ] 

I tried to build a simple neural network, with the number of neurons in the hidden layer chosen by a loop (as mentioned here):

# # Build a neural net 
library(neuralnet) 
for(alpha in 2:10) 
{ 
    nHidden = round(nrow(train)/(alpha*(3+1))) 
    nn = neuralnet(Y ~ A + B + C, train, linear.output = F, likelihood = T, err.fct = "ce", hidden = nHidden) 

    # Calculate Mean Squared Error for Train and Test 
    trainMSE = mean((round(nn$net.result[[1]]) - train$Y)^2) 
    testPred = round(compute(nn,test[-length(ncol(test))])$net.result) 
    testMSE = mean((testPred - test$Y)^2) 

    print(paste("Train Error: " , round(trainMSE, 4), ", Test Error: ", round(testMSE, 4), ", #. Hidden = ", nHidden, sep = "")) 
} 

[1] "Train Error: 0, Test Error: 0.6, #. Hidden = 9"

[1] "Train Error: 0, Test Error: 0.6, #. Hidden = 6"

[1] "Train Error: 0, Test Error: 0.6, #. Hidden = 4"

[1] "Train Error: 0, Test Error: 0.6, #. Hidden = 4"

[1] "Train Error: 0.1429, Test Error: 0.8333, #. Hidden = 3"

[1] "Train Error: 0.1429, Test Error: 0.8333, #. Hidden = 2"

[1] "Train Error: 0.0857, Test Error: 0.6, #. Hidden = 2"

[1] "Train Error: 0.1429, Test Error: 0.8333, #. Hidden = 2"

[1] "Train Error: 0.0857, Test Error: 0.6, #. Hidden = 2"

This gives poor, overfitted results. However, when I built a simple random forest on the same dataset, I got a train and test error of 0:

# # Build a Random Forest 
trainRF = train 
trainRF$Y = as.factor(trainRF$Y) 
testRF = test 

library(randomForest) 
rf = randomForest(Y ~ ., data = trainRF, mtry = 2) 

# Calculate Mean Squared Error for Train and Test 
trainMSE = mean((round(rf$votes[,2]) - as.numeric(as.character(trainRF$Y)))^2) 
testMSE = mean((round(predict(rf, testRF, type = "prob")[,2]) - as.numeric(as.character(testRF$Y)))^2) 

print(paste("Train Error: " , round(trainMSE, 4), ", Test Error: ", round(testMSE, 4), sep = "")) 

[1] "Train Error: 0, Test Error: 0"

Please help me understand why the neural network fails in such a simple case while the random forest works with 100% accuracy. I used only one hidden layer (assuming one hidden layer is enough for such a simple classification) and iterated over the number of neurons in the hidden layer.

Also, if my understanding of the neural network parameters is wrong, please correct me.

The complete code can be found here.

1 Answer

A similar problem has been haunting me for a while, so I tried to understand your data and problem and compared them with mine. In the end, though, it is just a small bug in this line:

testPred = round(compute(nn,test[-length(ncol(test))])$net.result) 

You are selecting B, C and Y for prediction instead of A, B and C, because length(ncol(something)) always returns 1. What you want is test[-ncol(test)]:

> summary(test[-length(ncol(test))]) 

       B              C             Y          
 Min.   :0.00   Min.   :0.0   Min.   :0.0000000 
 1st Qu.:0.00   1st Qu.:0.0   1st Qu.:0.0000000 
 Median :0.00   Median :0.5   Median :0.0000000 
 Mean   :0.48   Mean   :0.5   Mean   :0.3766667 
 3rd Qu.:1.00   3rd Qu.:1.0   3rd Qu.:1.0000000 
 Max.   :1.00   Max.   :1.0   Max.   :1.0000000 
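With the indexing fixed, the prediction step from the question looks like this (a sketch; `nn` and `test` are the objects built by the question's code):

```r
# Drop only the last column (Y), keeping A, B and C as inputs.
# test[-ncol(test)] removes column 4; the original code used
# test[-length(ncol(test))], which always removes column 1 (A),
# so the network was fed B, C and Y instead of A, B and C.
testPred = round(compute(nn, test[-ncol(test)])$net.result) 
testMSE  = mean((testPred - test$Y)^2) 
```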

Great, I wonder how I missed that tiny bug. Thanks, it now produces 0 MSE on both the 'Train' and 'Test' sets. –


You just mentioned that **'a similar problem has been haunting me'**. Could I learn about your problem, if that would help improve my understanding of neural networks? –


Sure. I think I will write a separate post about it, and I will let you know. Essentially, I have a simple regression problem that, depending on the random run, is sometimes modeled almost perfectly and at other times rather badly. It apparently has to do with the random initialization of the weights, but that is all I know. – sebastianmm
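The run-to-run variance from random weight initialization mentioned above can be reproduced by retraining the same architecture several times; a minimal sketch using the question's `train` data frame (the seed values and the choice of 4 hidden neurons are illustrative, not from the original post):

```r
library(neuralnet) 

# Train the identical network five times; only the random starting 
# weights differ between runs, yet the training error can vary. 
for (seed in 1:5) 
{ 
    set.seed(seed) 
    nn = neuralnet(Y ~ A + B + C, train, hidden = 4, 
                   linear.output = FALSE, err.fct = "ce", likelihood = TRUE) 
    trainMSE = mean((round(nn$net.result[[1]]) - train$Y)^2) 
    print(paste("Seed: ", seed, ", Train Error: ", round(trainMSE, 4), sep = "")) 
} 
```

Fixing the seed makes each run reproducible, which helps separate genuine model problems from unlucky initializations.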