
I'm trying to find the k most common n-grams in a large corpus. I've seen the naive approach suggested in many places: simply scan the entire corpus and keep a dictionary of all n-gram counts. Is there a better way? Is there a more efficient method for finding the most common n-grams?
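For concreteness, here is a minimal sketch of that naive approach (the function name is just illustrative):

from collections import defaultdict

def naive_ngram_counts(tokens, n=2):
    # One pass over the corpus, keeping a dictionary of all n-gram counts.
    counts = defaultdict(int)
    for i in range(len(tokens) - n + 1):
        counts[tuple(tokens[i:i + n])] += 1
    return counts

This is a single pass, but it keeps every distinct n-gram in memory.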


Possible duplicate of http://cs.stackexchange.com/questions/8972/optimal-algorithm-for-finding-all-ngrams-from-a-pre-defined-set-in-a-text – pltrdy


What are you comparing against? How large is the corpus? I think you can count the ngrams of even a huge corpus pretty quickly in C++ without much trouble, and it's reasonably fast even in Python =) – alvas


Do you mean character ngrams or word ngrams? – alvas

Answer


In Python, using NLTK:

$ wget http://norvig.com/big.txt 
$ python 
>>> from collections import Counter 
>>> from nltk import ngrams 
>>> bigtxt = open('big.txt').read() 
>>> ngram_counts = Counter(ngrams(bigtxt.split(), 2)) 
>>> ngram_counts.most_common(10) 
[(('of', 'the'), 12422), (('in', 'the'), 5741), (('to', 'the'), 4333), (('and', 'the'), 3065), (('on', 'the'), 2214), (('at', 'the'), 1915), (('by', 'the'), 1863), (('from', 'the'), 1754), (('of', 'a'), 1700), (('with', 'the'), 1656)] 
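If the corpus is too large to read into memory at once, the same counting can be done incrementally with Counter.update (a sketch, not from the original answer):

from collections import Counter 
from nltk import ngrams 

ngram_counts = Counter() 
with open('big.txt') as fin: 
    for line in fin:  # stream the corpus one line at a time 
        ngram_counts.update(ngrams(line.split(), 2)) 
print(ngram_counts.most_common(10)) 

Note that splitting per line drops any n-gram that spans a line break, which may or may not matter for your corpus.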

In Python, natively (see Fast/Optimize N-gram implementations in python):

>>> def ngrams(text, n=2): 
...  return zip(*[text[i:] for i in range(n)]) 
>>> ngram_counts = Counter(ngrams(bigtxt.split(), 2)) 
>>> ngram_counts.most_common(10) 
[(('of', 'the'), 12422), (('in', 'the'), 5741), (('to', 'the'), 4333), (('and', 'the'), 3065), (('on', 'the'), 2214), (('at', 'the'), 1915), (('by', 'the'), 1863), (('from', 'the'), 1754), (('of', 'a'), 1700), (('with', 'the'), 1656)] 
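Since this pure-Python ngrams works on any sequence, it handles character n-grams (see the comment above) just as well as word n-grams:

>>> list(ngrams('hello world', n=3))[:3]  # character trigrams 
[('h', 'e', 'l'), ('e', 'l', 'l'), ('l', 'l', 'o')] 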

In Julia, see Generate ngrams with Julia:

import StatsBase: countmap 
import Iterators: partition 
# Read the whole corpus as a single string (Julia 0.5-era API). 
bigtxt = readstring(open("big.txt")) 
# partition(tokens, 2, 1) yields overlapping windows of length 2, i.e. bigrams; 
# countmap tallies how often each one occurs. 
ngram_counts = countmap(collect(partition(split(bigtxt), 2, 1))) 

Rough timings:

$ time python ngram-test.py # With NLTK. 

real 0m3.166s 
user 0m2.274s 
sys 0m0.528s 

$ time python ngram-native-test.py 

real 0m1.521s 
user 0m1.317s 
sys 0m0.145s 

$ time julia ngram-test.jl 

real 0m3.573s 
user 0m3.188s 
sys 0m0.306s 
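The timed scripts aren't shown; presumably each one just wraps the corresponding snippet above, e.g. ngram-native-test.py would be roughly (a reconstruction, not the author's actual file):

from collections import Counter 

def ngrams(text, n=2): 
    return zip(*[text[i:] for i in range(n)]) 

bigtxt = open('big.txt').read() 
ngram_counts = Counter(ngrams(bigtxt.split(), 2)) 
print(ngram_counts.most_common(10)) 

As a side note, in CPython Counter.most_common(k) uses a heap (heapq.nlargest) when k is given, so extracting the top k does not require sorting all of the n-gram counts.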