
How can I make this Python program read a large text file faster? My code takes almost five minutes to read the text file, but I need it to be much faster. I suspect my algorithm is not O(n).

Some sample data (the actual data is 470K+ lines):

Aarika 
Aaron 
aaron 
Aaronic 
aaronic 
Aaronical 
Aaronite 
Aaronitic 
Aaron's-beard 
Aaronsburg 
Aaronson 

My code:

import string
import re


WORDLIST_FILENAME = "words.txt"

def load_words():
    wordlist = []
    print("Loading word list from file...")
    with open(WORDLIST_FILENAME, 'r') as f:
        for line in f:
            wordlist = wordlist + str.split(line)
    print(" ", len(wordlist), "words loaded.")
    return wordlist

def find_words(uletters):
    wordlist = load_words()
    foundList = []

    for word in wordlist:
        wordl = list(word)
        letters = list(uletters)
        count = 0
        if len(word) == 7:
            for letter in wordl[:]:
                if letter in letters:
                    wordl.remove(letter)
                    # print("word left" + str(wordl))
                    letters.remove(letter)
                    # print(letters)
                    count = count + 1
                    # print(count)
                    if count == 7:
                        print("Matched:" + word)
                        foundList = foundList + str.split(word)
    foundList.sort()
    result = ''
    for items in foundList:
        result = result + items + ','
    print(result[:-1])


# Test cases
find_words("eabauea" "iveabdi")  # adjacent string literals concatenate to "eabaueaiveabdi"
# pattern = "asa" " qlocved"
# print("letters to look for: " + pattern)
# find_words(pattern)
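The load time is dominated by 'wordlist = wordlist + str.split(line)', which copies the whole list on every line (see the comments below), but the matching loop can be tightened too: each remove() scans its list, so every candidate check is quadratic in the word length. A minimal sketch of a linear-per-word check using collections.Counter (find_words_fast is a hypothetical name; it reuses load_words from above):

from collections import Counter

def find_words_fast(uletters):
    wordlist = load_words()
    available = Counter(uletters)
    found = []
    for word in wordlist:
        # Counter subtraction keeps only the letters `word` needs beyond
        # what is available; an empty result means every letter is covered.
        if len(word) == 7 and not (Counter(word) - available):
            found.append(word)
    print(','.join(sorted(found)))

find_words_fast("eabauea" "iveabdi")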

This sounds like a better fit for http://codereview.stackexchange.com/. – alecxe


It would also help if you could explain what your program is supposed to do. – MYGz


One thing... 'wordlist = wordlist + str.split(line)' copies the word list for every line. Do 'wordlist.extend(line.strip().split())' instead. Or, if you want to drop duplicates and get faster word lookups, make 'wordlist' a 'set' and use '.update'. – tdelaney
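A minimal sketch of the change tdelaney describes, keeping the shape of the original function (and the same WORDLIST_FILENAME as above):

def load_words():
    wordlist = []
    print("Loading word list from file...")
    with open(WORDLIST_FILENAME, 'r') as f:
        for line in f:
            wordlist.extend(line.strip().split())  # appends in place, no per-line copy
    print(" ", len(wordlist), "words loaded.")
    return wordlist

Using a set instead (wordlist = set() with wordlist.update(line.split())) additionally deduplicates and gives O(1) membership tests, at the cost of losing input order.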

Answer


Read the single-column file into a list with splitlines():

def load_words():
    with open("words.txt", 'r') as f:
        wordlist = f.read().splitlines()
    return wordlist

You can time it with timeit:

from timeit import timeit

setup = "from __main__ import load_words"  # make load_words visible inside timeit

timeit('load_words()', setup=setup, number=3)
# Output: 0.1708553659846075 seconds

As for the implementation, what you describe looks like a fuzzy-matching algorithm, so you could try fuzzywuzzy:

# pip install fuzzywuzzy[speedup] 

from fuzzywuzzy import process 

wordlist = load_words() 
process.extract("eabauea", wordlist, limit=10) 

Output:

[('-a', 90), ('A', 90), ('A.', 90), ('a', 90), ("a'", 90), 
('a-', 90), ('a.', 90), ('AB', 90), ('Ab', 90), ('ab', 90)] 

The results are more interesting if you filter for longer matches:

results = process.extract("eabauea", wordlist, limit=100) 
[x for x in results if len(x[0]) > 4] 

Output:

[('abaue', 83), 
('Ababua', 77), 
('Abatua', 77), 
('Bauera', 77), 
('baulea', 77), 
('abattue', 71), 
('abature', 71), 
('ablaqueate', 71), 
('bauleah', 71), 
('ebauche', 71), 
('habaera', 71), 
('reabuse', 71), 
('Sabaean', 71), 
('sabaean', 71), 
('Zabaean', 71), 
('-acea', 68)] 

But with 470K+ rows it does take a while:

setup = ("from __main__ import load_words\n"
         "from fuzzywuzzy import process\n"
         "wordlist = load_words()")  # build the word list once, outside the timed call

timeit('process.extract("eabauea", wordlist, limit=3)', setup=setup, number=3)
# Output: 384.97334043699084 seconds
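One way to cut that down (an assumption beyond the original answer, not something it measured): since find_words only ever matches 7-letter words, shrink the candidate list before handing it to fuzzywuzzy:

# Hypothetical pre-filter: fuzzy-match only against 7-letter candidates
candidates = [w for w in wordlist if len(w) == 7]
process.extract("eabauea", candidates, limit=3)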