My dataset has 42000 rows. I am looking for a one-line command to create a 60%, 20% and 20% split of the dataset into training, cross-validation and test sets. This is based on Professor Andrew Ng's recommendation in his ML class lectures. I realize scikit-learn has a method `train_test_split` to do this, but I cannot get it to give me the 0.6/0.2/0.2 split in a single command:
# split data into training, cv and test sets
# (train_test_split moved from sklearn.cross_validation to
#  sklearn.model_selection in newer scikit-learn versions)
from sklearn.model_selection import train_test_split

# first cut: 60% training, 40% held out
train, intermediate_set = train_test_split(input_set, train_size=0.6, test_size=0.4)
# second cut: split the held-out 40% in half -> 20% cv, 20% test
cv, test = train_test_split(intermediate_set, train_size=0.5, test_size=0.5)

# inspect the resulting arrays
print('training shape(Tuple of array dimensions) = ', train.shape)
print('training dimension(Number of array dimensions) = ', train.ndim)
print('cv shape(Tuple of array dimensions) = ', cv.shape)
print('cv dimension(Number of array dimensions) = ', cv.ndim)
print('test shape(Tuple of array dimensions) = ', test.shape)
print('test dimension(Number of array dimensions) = ', test.ndim)
which gives me the output:
training shape(Tuple of array dimensions) = (25200, 785)
training dimension(Number of array dimensions) = 2
cv shape(Tuple of array dimensions) = (8400, 785)
cv dimension(Number of array dimensions) = 2
test shape(Tuple of array dimensions) = (8400, 785)
test dimension(Number of array dimensions) = 2
features shape = (25200, 784)
labels shape = (25200,)
Is there a way to do this job with a single command?
You cannot do it in one line with the current scikit-learn, so your way is the best option for now. Feel free to submit a patch. –
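For what it's worth, a one-liner is possible outside scikit-learn: NumPy's `np.split` can cut a shuffled array at the 60% and 80% row indices. A minimal sketch, using random data as a stand-in for the real 42000-row dataset:

```python
import numpy as np

rng = np.random.default_rng(0)           # fixed seed so the shuffle is reproducible
input_set = rng.random((42000, 785))     # stand-in for the real dataset

# shuffle the rows, then cut at the 60% and 80% marks in one line
train, cv, test = np.split(rng.permutation(input_set),
                           [int(0.6 * len(input_set)), int(0.8 * len(input_set))])

print(train.shape, cv.shape, test.shape)  # (25200, 785) (8400, 785) (8400, 785)
```

Unlike the two-call `train_test_split` approach, this does not stratify by label; it is a plain random row split.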
I am really curious why you need such a split. In data mining, the usual practice is either to cross-validate or to split the input data into test/training sets; these two approaches are not usually combined. How are you going to use these sets to train your classifier? – Nejc