3

PySpark random forest feature importance: how to map from feature numbers back to column names

I am using a standard (StringIndexer + OneHotEncoder + RandomForest) Spark pipeline, as shown below:

labelIndexer = StringIndexer(inputCol = class_label_name, outputCol="indexedLabel").fit(data) 

string_feature_indexers = [ 
    StringIndexer(inputCol=x, outputCol="int_{0}".format(x)).fit(data) 
    for x in char_col_toUse_names 
] 

onehot_encoder = [ 
    OneHotEncoder(inputCol="int_"+x, outputCol="onehot_{0}".format(x)) 
    for x in char_col_toUse_names 
] 
all_columns = num_col_toUse_names + bool_col_toUse_names + ["onehot_"+x for x in char_col_toUse_names] 
assembler = VectorAssembler(inputCols=all_columns, outputCol="features") 
rf = RandomForestClassifier(labelCol="indexedLabel", featuresCol="features", numTrees=100) 
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=labelIndexer.labels) 
pipeline = Pipeline(stages=[labelIndexer] + string_feature_indexers + onehot_encoder + [assembler, rf, labelConverter]) 

crossval = CrossValidator(estimator=pipeline, 
          estimatorParamMaps=paramGrid, 
          evaluator=evaluator, 
          numFolds=3) 
cvModel = crossval.fit(trainingData) 

Now, after fitting, I can get the random forest feature importances using cvModel.bestModel.stages[-2].featureImportances, but this gives me only feature numbers, not feature/column names.

This is what I get:

print(cvModel.bestModel.stages[-2].featureImportances) 

(1446,[3,4,9,18,20,103,766,981,983,1098,1121,1134,1148,1227,1288,1345,1436,1444],[0.109898803421,0.0967396441648,4.24568235244e-05,0.0369705839109,0.0163489685127,3.2286694534e-06,0.0208192703688,0.0815822887175,0.0466903663708,0.0227619959989,0.0850922269211,0.000113388896956,0.0924779490403,0.163835022713,0.118987129392,0.107373548367,3.35577640585e-05,0.000229569946193]) 
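This printout is a sparse vector of length 1446: the first list holds the indices of features with nonzero importance, the second list the corresponding importance values. A minimal plain-Python sketch of unpacking it into (feature index, importance) pairs, using a truncated copy of the numbers above:

```python
# Sparse feature-importance vector is (size, indices, values);
# indices/values truncated from the printout above for illustration.
indices = [3, 4, 9, 18]
values = [0.109898803421, 0.0967396441648, 4.24568235244e-05, 0.0369705839109]

# Pair each nonzero index with its importance, most important first.
pairs = sorted(zip(indices, values), key=lambda p: p[1], reverse=True)
print(pairs[0])  # (3, 0.109898803421)
```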

How can I map these back to column names, or to a column-name/value format?
Basically, how do I get the random forest feature importances together with the column names?

Answers

1

Hey, why don't you just map it back to the original columns via list expansion? Here is an example:

# in your case: trainingData.columns 
data_frame_columns = ["A", "B", "C", "D", "E", "F"] 
# in your case: print(cvModel.bestModel.stages[-2].featureImportances) 
feature_importance = (6, [1, 3, 5], [0.5, 0.5, 0.5]) 

rf_output = [(data_frame_columns[i], v) for i, v in zip(feature_importance[1], feature_importance[2])] 
dict(rf_output) 

{'B': 0.5, 'D': 0.5, 'F': 0.5} 
+1

Yes, but you are missing the point that the column names change after the StringIndexer/OneHotEncoder. It's the ones combined by the assembler that I want to map to. I could certainly do it this way, but I was more interested in whether Spark (ML) has some shorter way, like scikit-learn does :) – Abhishek

+1

Ah okay, my bad. But your approach still works in the long run. I don't think a shorter solution currently exists; the Spark ML API is not as powerful and mature as scikit-learn's. –

+0

Yes, I know :), just wanted to keep this question open for suggestions :). Thanks, Dat – Abhishek

0

I could not find any way to get back the real initial list of columns after the ML algorithms have run, so I am using this as my current workaround.

print(len(cols_now)) 

FEATURE_COLS = [] 
for x in cols_now: 
    if x[-6:] != "catVar": 
        # plain numeric/boolean column: keep its name as-is 
        FEATURE_COLS += [x] 
    else: 
        # one-hot column: recover the original string levels, sorted by their 
        # StringIndexer index so they line up with the one-hot feature positions 
        temp = trainingData.select([x[:-7], x[:-6] + "tmp"]).distinct().sort(x[:-6] + "tmp") 
        temp_list = temp.select(x[:-7]).collect() 
        FEATURE_COLS += [row[0] for row in temp_list] 

print(len(FEATURE_COLS)) 
print(FEATURE_COLS) 
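As a plain-Python illustration of what the loop above produces (the column names and string levels here are hypothetical stand-ins; the real distinct()/sort() step needs a live DataFrame):

```python
# Hypothetical inputs: two plain columns and one one-hot-encoded column.
cols_now = ["age", "income", "city_catVar"]
# Hypothetical levels the StringIndexer assigned to "city", in index order.
city_levels = ["NY", "LA", "SF"]

FEATURE_COLS = []
for x in cols_now:
    if x[-6:] != "catVar":
        FEATURE_COLS += [x]          # plain column keeps its own name
    else:
        FEATURE_COLS += city_levels  # one feature name per one-hot level
print(FEATURE_COLS)  # ['age', 'income', 'NY', 'LA', 'SF']
```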

I always keep a consistent suffix in the naming of all indexers (_tmp) and encoders (_catVar), like:

column_vec_in = str_col 
column_vec_out = [col + "_catVar" for col in str_col] 

indexers = [StringIndexer(inputCol=x, outputCol=x + '_tmp') 
            for x in column_vec_in] 

encoders = [OneHotEncoder(dropLast=False, inputCol=x + "_tmp", outputCol=y) 
            for x, y in zip(column_vec_in, column_vec_out)] 

# interleave each (indexer, encoder) pair into one flat list of pipeline stages 
tmp = [[i, j] for i, j in zip(indexers, encoders)] 
tmp = [i for sublist in tmp for i in sublist] 

This can be further improved and generalized, but for now this tedious workaround works best.
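The slicing in the workaround depends on those exact suffix lengths; a quick plain-Python check (with a hypothetical column name) of how the suffixes are stripped:

```python
x = "city_catVar"         # hypothetical encoder output column
base = x[:-7]             # drop "_catVar" -> original column name
indexed = x[:-6] + "tmp"  # drop "catVar", append "tmp" -> indexer output name
print(base, indexed)      # city city_tmp
```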