2016-09-22

Pandas: compare a column against all other columns of a DataFrame

I have a scenario in which new subjects are tested against a set of characteristics whose results are all categorical string values. Once testing is complete, I need to compare the new dataset against a master dataset of all subjects and find similarities (matches) that hold above a given threshold (e.g. 90%).

So I need to do a column-wise (subject-wise) comparison of each new subject's column in the new dataset against every column in the master dataset, plus every other column in the new dataset itself, with the best performance possible, since the production dataset has roughly 500,000 columns (and growing) and 10,000 rows.

Here is some sample code:

master = pd.DataFrame({'Characteristic':['C1', 'C2', 'C3'], 
            'S1':['AA','BB','AB'], 
            'S2':['AB','-','BB'], 
            'S3':['AA','AB','--']}) 
new = pd.DataFrame({'Characteristic':['C1', 'C2', 'C3'], 
           'S4':['AA','BB','AA'], 
           'S5':['AB','-','BB']}) 
new_master = pd.merge(master, new, on='Characteristic', how='inner') 

def doComparison(comparison_df, new_columns, master_columns):
    summary_dict = {}
    row_cnt = comparison_df.shape[0]

    for new_col in new_columns:
        # don't compare the Characteristic column
        if new_col != 'Characteristic':
            print('Evaluating subject ' + new_col + ' for matches')
            summary_dict[new_col] = []
            new_data = comparison_df[new_col]
            for master_col in master_columns:
                # don't compare same subject or Characteristic column
                if new_col != master_col and master_col != 'Characteristic':
                    master_data = comparison_df[master_col]
                    is_same = (new_data == master_data) & (new_data != '--') & (master_data != '--')
                    pct_same = is_same.sum() * 100 / row_cnt
                    if pct_same > 90:
                        print('  Found potential match ' + master_col + ' ' + str(pct_same) + ' pct')
                        summary_dict[new_col].append({'match': master_col, 'pct': pct_same})
    return summary_dict

result = doComparison(new_master, new.columns, master.columns) 

This works, but I would like to improve its efficiency and performance, and I am not sure exactly how.

Answers


Consider the following adjustment, which runs a list comprehension to build all combinations of the two DataFrames' column names, then iterates through them checking for matches above the 90% threshold.

# LIST COMPREHENSION (TUPLE PAIRS) LEAVES OUT CHARACTERISTIC (FIRST COL) AND SAME NAMED COLS
columnpairs = [(i, j) for i in new.columns[1:] for j in master.columns[1:] if i != j]

# DICTIONARY COMPREHENSION TO INITIALIZE DICT OBJ
summary_dict = {col: [] for col in new.columns[1:]}

for i, j in columnpairs:
    is_same = (new['Characteristic'] == master['Characteristic']) & \
              (new[i] == master[j]) & (new[i] != '--') & (master[j] != '--')
    pct_same = is_same.sum() * 100 / len(master)

    if pct_same > 90:
        summary_dict[i].append({'match': j, 'pct': pct_same})

print(summary_dict)
# {'S4': [], 'S5': [{'match': 'S2', 'pct': 100.0}]}
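As an aside, the per-pair percentage can be computed without a manual count: the mean of a boolean Series is the fraction of True values. A minimal sketch using one column pair from the sample data above:

```python
import pandas as pd

new = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                    'S4': ['AA', 'BB', 'AA'],
                    'S5': ['AB', '-', 'BB']})
master = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                       'S1': ['AA', 'BB', 'AB'],
                       'S2': ['AB', '-', 'BB'],
                       'S3': ['AA', 'AB', '--']})

# Mean of a boolean Series == fraction of matching (and non-missing) rows
is_same = (new['S5'] == master['S2']) & (new['S5'] != '--') & (master['S2'] != '--')
pct_same = is_same.mean() * 100
print(pct_same)  # 100.0
```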

Another option:

import numpy as np 
import pandas as pd 
from sklearn.utils.extmath import cartesian 

Leveraging sklearn's cartesian function:

col_combos = cartesian([ new.columns[1:], master.columns[1:]]) 
print (col_combos) 

[['S4' 'S1'] 
['S4' 'S2'] 
['S4' 'S3'] 
['S5' 'S1'] 
['S5' 'S2'] 
['S5' 'S3']] 
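If sklearn is not available, the standard library's `itertools.product` yields the same pairs (a sketch using the sample column names, here spelled out as plain lists):

```python
from itertools import product

new_cols = ['S4', 'S5']        # new.columns[1:] from the sample data
master_cols = ['S1', 'S2', 'S3']  # master.columns[1:]

col_combos = list(product(new_cols, master_cols))
print(col_combos)
# [('S4', 'S1'), ('S4', 'S2'), ('S4', 'S3'), ('S5', 'S1'), ('S5', 'S2'), ('S5', 'S3')]
```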

Create a dictionary with a key for every column in new except Characteristic. Note that this looks like wasted space. Maybe only keep the ones with matches?

summary_dict = {c:[] for c in new.columns[1:]} #copied from @Parfait's answer 

Pandas/NumPy makes it easy to compare two Series. Example:

print (new_master['S4'] == new_master['S1']) 

0  True 
1  True 
2 False 
dtype: bool 
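The number of True values in such a boolean Series can also be taken with a plain `.sum()`, which is equivalent to `np.count_nonzero` here (a small sketch on a hand-built Series matching the output above):

```python
import numpy as np
import pandas as pd

mask = pd.Series([True, True, False])

# Both count the True entries
print(mask.sum())              # 2
print(np.count_nonzero(mask))  # 2
```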

Now we iterate through the Series combinations and count the Trues with the help of numpy's count_nonzero(). The rest is similar to what you have.

for combo in col_combos:
    match_count = np.count_nonzero(new_master[combo[0]] == new_master[combo[1]])
    pct_same = match_count * 100 / len(new_master)
    if pct_same > 90:
        summary_dict[combo[0]].append({'match': combo[1], 'pct': match_count / len(new_master)})

print (summary_dict) 

{'S4': [], 'S5': [{'pct': 1.0, 'match': 'S2'}]} 

I would be curious to know how it performs. Good luck!
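At the scale mentioned in the question (hundreds of thousands of columns), the Python-level pair loop itself becomes the bottleneck. One direction worth exploring (a sketch, not part of either answer above): compare all columns at once with NumPy broadcasting, which produces the full match-percentage matrix in one vectorized operation. Memory grows as rows × n_new × n_master, so very wide frames would need to be processed in column chunks.

```python
import numpy as np
import pandas as pd

master = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                       'S1': ['AA', 'BB', 'AB'],
                       'S2': ['AB', '-', 'BB'],
                       'S3': ['AA', 'AB', '--']})
new = pd.DataFrame({'Characteristic': ['C1', 'C2', 'C3'],
                    'S4': ['AA', 'BB', 'AA'],
                    'S5': ['AB', '-', 'BB']})

new_vals = new.iloc[:, 1:].to_numpy()        # shape (rows, n_new)
master_vals = master.iloc[:, 1:].to_numpy()  # shape (rows, n_master)

# Broadcast (rows, n_new, 1) against (rows, 1, n_master) -> (rows, n_new, n_master)
same = new_vals[:, :, None] == master_vals[:, None, :]
valid = (new_vals[:, :, None] != '--') & (master_vals[:, None, :] != '--')

# Mean over the row axis gives the match fraction per column pair
pct = (same & valid).mean(axis=0) * 100      # shape (n_new, n_master)

result = pd.DataFrame(pct, index=new.columns[1:], columns=master.columns[1:])
print(result)
```

On the sample data, `result.loc['S5', 'S2']` is 100.0, matching the loop-based answers; thresholding is then a single comparison, `result > 90`.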
