
I'm not sure if this is a tricky one. I have a function whose parameters, let's say, need to be passed along to a function that is itself passed as an argument to another function in Python.

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk


def load(es, fn_set_your_data, input_file, **kwargs):
    # fn_set_your_data is a generator function; any extra keyword arguments
    # are forwarded to it untouched
    success, some_ = bulk(es, fn_set_your_data(input_file, **kwargs))


def fn_index_data(index_name, doc_type, input_file, fn_set_your_data, mapping=False, force=False):

    es = Elasticsearch()
    if es.indices.exists(index=index_name):
        return "Index Already exists"
    else:
        if mapping:
            es.indices.create(index=index_name, body=mapping, ignore=400)
            print "Mapping is done"
            load(es, fn_set_your_data, input_file, index_name=index_name, doc_type_name=doc_type)

Now there is another function that accepts this function as an argument, call it global_fn. I need to pass local_fn as an argument to global_fn, while the parameter split_value changes on every iteration of a loop. For example:

def set_your_data(input_file, index_name, doc_type_name, split_value=1):

    global global_count  # assumed to be defined and incremented elsewhere
    for skill_, items_ in input_file.iteritems():

        main_item = items_['main_item'].strip()
        main_item_split = main_item.split()
        if len(main_item_split) == split_value:

            query = {'item': main_item}

            yield {
                "_index": index_name,
                "_type": doc_type_name,
                "_id": global_count,
                "_source": query
            }
        else:
            continue


if __name__ == "__main__":

    index_name_list = ['percolate_bigram', 'percolate_ngram', 'percolate_bigram']
    doc_type = 'alert'

    for idx, index_name in enumerate(index_name_list):
        split_value = idx
        fn_index_data(index_name=index_name, doc_type=doc_type, input_file=input_data, fn_set_your_data=set_your_data, mapping=mapping)

How do I pass split_value into set_your_data (the local_fn) and then pass that through fn_index_data (the global_fn)? I hope this code gives the question good and reasonable context.

Is it doable with **kwargs or something? Any comments would be helpful.
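Roughly, the **kwargs idea I have in mind looks like the toy sketch below. These are placeholder functions standing in for set_your_data, load and fn_index_data, not the real Elasticsearch pipeline, just to show an extra keyword argument being forwarded through two layers:

# Toy sketch only: placeholder names, not the actual pipeline above.

def local_fn(data, split_value=1):            # stands in for set_your_data
    return [x for x in data if len(x.split()) == split_value]

def middle_fn(fn, data, **kwargs):            # stands in for load()
    return fn(data, **kwargs)                 # forwards whatever it was given

def global_fn(fn, data, **kwargs):            # stands in for fn_index_data
    return middle_fn(fn, data, **kwargs)      # passes **kwargs straight along

if __name__ == "__main__":
    data = ['python', 'machine learning', 'elastic search helpers']
    for idx in range(1, 4):
        print global_fn(local_fn, data, split_value=idx)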


That depends entirely on 'global_fn'. What does it look like? Does it accept any other arguments that could be passed through? –


Does your 'global_fn' accept parameters? If so, you need to return a value from 'local_fn'. –


It might be worth looking at 'functools.partial', but your description is too vague to say for sure. –

Answer

def set_your_data(split_value=1):
    def set_your_data_inner(input_file, index_name, doc_type_name):
        global global_count  # assumed to be defined and incremented elsewhere
        for skill_, items_ in input_file.iteritems():

            main_item = items_['main_item'].strip()
            main_item_split = main_item.split()
            if len(main_item_split) == split_value:

                query = {'item': main_item}

                yield {
                    "_index": index_name,
                    "_type": doc_type_name,
                    "_id": global_count,
                    "_source": query
                }
            else:
                continue
    return set_your_data_inner

if __name__ == "__main__":

    index_name_list = ['percolate_bigram', 'percolate_ngram', 'percolate_bigram']
    doc_type = 'alert'

    for idx, index_name in enumerate(index_name_list):
        fn_index_data(index_name=index_name, doc_type=doc_type, input_file=input_data, fn_set_your_data=set_your_data(idx), mapping=mapping)
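The inner generator closes over split_value, so nothing else in the call chain has to know about it: load keeps forwarding only index_name and doc_type_name through **kwargs. If you would rather not nest functions, functools.partial (as suggested in the comments) achieves the same thing while keeping set_your_data in its original four-argument form from the question rather than the nested version above. A rough sketch of that variant, assuming the rest of the pipeline stays unchanged:

from functools import partial

if __name__ == "__main__":

    index_name_list = ['percolate_bigram', 'percolate_ngram', 'percolate_bigram']
    doc_type = 'alert'

    for idx, index_name in enumerate(index_name_list):
        # partial pre-binds split_value; the resulting callable still takes
        # (input_file, index_name, doc_type_name), exactly like the original
        fn_with_split = partial(set_your_data, split_value=idx)
        fn_index_data(index_name=index_name, doc_type=doc_type, input_file=input_data,
                      fn_set_your_data=fn_with_split, mapping=mapping)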