Does the label play a role in h2o.randomForest? I am using h2o.randomForest in R to build a classifier over two groups, say group "A" and group "B". As an example, I randomly generated a sample dataset as follows and converted it into an H2OFrame:
a <- sample(0:1,10000,replace=T)
b <- sample(0:1,10000,replace=T)
c <- sample(1:10,10000,replace=T)
d <- sample(0:1,10000,replace=T)
e <- sample(0:1,10000,replace=T)
f <- sample(0:1,10000,replace=T)
Basically, all of these columns are converted to factors, and all have 2 levels except c, which has 10 levels. The first 5000 rows are assigned the label "A" and the rest the label "B". In addition, I have another column called nlabel, in which the first 5000 rows are "B" and the rest are "A".
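For completeness, here is a sketch of how the frame can be assembled as described (my exact assembly code isn't shown above; this version regenerates the predictors and uses the same `test` object name that appears in the model calls below):

```r
# Regenerate the six predictors (same sample() calls as above)
test <- data.frame(a = sample(0:1, 10000, replace = TRUE),
                   b = sample(0:1, 10000, replace = TRUE),
                   c = sample(1:10, 10000, replace = TRUE),
                   d = sample(0:1, 10000, replace = TRUE),
                   e = sample(0:1, 10000, replace = TRUE),
                   f = sample(0:1, 10000, replace = TRUE))

# Convert every column to a factor: a, b, d, e, f get 2 levels, c gets 10
test[] <- lapply(test, factor)

# First 5000 rows labeled "A", rest "B"; nlabel is the reverse
test$label  <- factor(rep(c("A", "B"), each = 5000))
test$nlabel <- factor(rep(c("B", "A"), each = 5000))
```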
Here are the first 10 and last 10 rows of my dataset:
a b c d e f label nlabel
1 0 0 5 0 1 0 A B
2 0 1 5 1 1 1 A B
3 0 0 6 0 0 1 A B
4 0 0 8 0 0 1 A B
5 1 1 1 1 1 1 A B
6 1 1 6 1 0 1 A B
7 1 0 3 1 1 1 A B
8 1 1 9 1 0 1 A B
9 1 0 8 1 0 1 A B
10 0 0 1 0 1 1 A B
.............
9991 1 1 3 0 0 1 B A
9992 0 0 7 1 0 0 B A
9993 1 0 9 0 1 1 B A
9994 0 1 3 0 0 0 B A
9995 1 1 8 0 1 0 B A
9996 0 1 8 0 1 0 B A
9997 1 1 9 0 1 0 B A
9998 0 0 5 1 0 1 B A
9999 0 1 9 1 1 0 B A
10000 0 1 10 1 0 1 B A
Because the dataset is randomly generated, there is no way I should get a good classifier (or else I would be the luckiest person in the world); I expect nothing better than random guessing. Here is the result I get from the "randomForest" package in R:
> rf <- randomForest(label ~ a + b + c + e + f,
+                    data = test,
+                    ntree = 100)
> rf
Call:
randomForest(formula = label ~ a + b + c + e + f, data = test, ntree = 100)
Type of random forest: classification
Number of trees: 100
No. of variables tried at each split: 2
OOB estimate of error rate: 50.17%
Confusion matrix:
A B class.error
A 2507 2493 0.4986
B 2524 2476 0.5048
However, using the same dataset with h2o.randomForest, I get a different result. Here are the code I used and the result I got:
> TEST <- as.h2o(test)
> rfh2o <- h2o.randomForest(y = "label",
+                           x = c("a", "b",
+                                 "c", "d",
+                                 "e", "f"),
+                           training_frame = TEST,
+                           ntrees = 100)
> rfh2o
Model Details:
==============
H2OBinomialModel: drf
Model ID: DRF_model_R_1501015614001_1029
Model Summary:
number_of_trees number_of_internal_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves
1 100 100 366582 7 14 11.33000 1
max_leaves mean_leaves
1 319 286.52000
H2OBinomialMetrics: drf
** Reported on training data. **
** Metrics reported on Out-Of-Bag training samples **
MSE: 0.2574374
RMSE: 0.5073829
LogLoss: 0.7086906
Mean Per-Class Error: 0.5
AUC: 0.4943865
Gini: -0.01122696
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
A B Error Rate
A 0 5000 1.000000 =5000/5000
B 0 5000 0.000000 =0/5000
Totals 0 10000 0.500000 =5000/10000
Maximum Metrics: Maximum metrics at their respective thresholds
metric threshold value idx
1 max f1 0.231771 0.666667 399
2 max f2 0.231771 0.833333 399
3 max f0point5 0.231771 0.555556 399
4 max accuracy 0.459704 0.506800 251
5 max precision 0.723654 0.593750 10
6 max recall 0.231771 1.000000 399
7 max specificity 0.785389 0.999800 0
8 max absolute_mcc 0.288276 0.051057 389
9 max min_per_class_accuracy 0.500860 0.488000 200
10 max mean_per_class_accuracy 0.459704 0.506800 251
Based on the result above, the confusion matrix is different from what I got from the "randomForest" package.
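One thing I noticed while staring at the h2o output: the confusion matrix is reported at the F1-optimal threshold, and for a coin-flip classifier the F1-maximizing choice is to predict the positive class ("B") for every row, since that gives recall 1 at precision 0.5. A quick sanity check of that arithmetic (my own calculation, not h2o code):

```r
# F1 for a classifier that labels all 10000 rows as "B"
# (5000 true B's and 5000 true A's in my data)
tp <- 5000   # every true B predicted as B
fp <- 5000   # every true A also predicted as B
fn <- 0      # no B missed

precision <- tp / (tp + fp)   # 0.5
recall    <- tp / (tp + fn)   # 1.0
f1 <- 2 * precision * recall / (precision + recall)
f1                            # 0.6666667, same as h2o's reported max f1
```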
Also, if I use "nlabel" instead of "label" with h2o.randomForest, I still get a high error rate when predicting A, even though A in the current model plays the same role that B did in the previous model. Here are the code and the result I got:
> rfh2o_n <- h2o.randomForest(y = "nlabel",
+ x = c("a","b",
+ "c","d",
+ "e","f"),
+ training_frame = TEST,
+ ntrees = 100)
> rfh2o_n
Model Details:
==============
H2OBinomialModel: drf
Model ID: DRF_model_R_1501015614001_1113
Model Summary:
number_of_trees number_of_internal_trees model_size_in_bytes min_depth max_depth mean_depth min_leaves
1 100 100 365232 11 14 11.18000 1
max_leaves mean_leaves
1 319 285.42000
H2OBinomialMetrics: drf
** Reported on training data. **
** Metrics reported on Out-Of-Bag training samples **
MSE: 0.2575674
RMSE: 0.507511
LogLoss: 0.7089465
Mean Per-Class Error: 0.5
AUC: 0.4923496
Gini: -0.01530088
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
A B Error Rate
A 0 5000 1.000000 =5000/5000
B 0 5000 0.000000 =0/5000
Totals 0 10000 0.500000 =5000/10000
Maximum Metrics: Maximum metrics at their respective thresholds
metric threshold value idx
1 max f1 0.214495 0.666667 399
2 max f2 0.214495 0.833333 399
3 max f0point5 0.214495 0.555556 399
4 max accuracy 0.617230 0.506600 74
5 max precision 0.621806 0.541833 70
6 max recall 0.214495 1.000000 399
7 max specificity 0.749866 0.999800 0
8 max absolute_mcc 0.733630 0.042465 6
9 max min_per_class_accuracy 0.499186 0.486400 201
10 max mean_per_class_accuracy 0.617230 0.506600 74
These results make me wonder whether the label plays any role in h2o.randomForest. I don't use h2o often, and the results above confuse me. Is this just due to chance, did I make some silly mistake, or is something else going wrong?