11

Efficient term document matrix with NLTK

I want to create a term-document matrix using NLTK and pandas. I wrote the following function:

def fnDTM_Corpus(xCorpus):
    '''Create a Term Document Matrix from an NLTK corpus.'''
    import pandas as pd
    # one frequency distribution (word -> count) per file in the corpus
    fd_list = []
    for fileid in xCorpus.fileids():
        fd_list.append(nltk.FreqDist(xCorpus.words(fileid)))
    # rows = documents, columns = words; words missing from a document become NaN, then 0
    DTM = pd.DataFrame(fd_list, index=xCorpus.fileids())
    DTM.fillna(0, inplace=True)
    return DTM.T

Running it:

import nltk 
from nltk.corpus import PlaintextCorpusReader 
corpus_root = 'C:/Data/' 

newcorpus = PlaintextCorpusReader(corpus_root, '.*') 

x = fnDTM_Corpus(newcorpus) 

It works fine for a corpus of a few small files, but gives me a MemoryError when I try to run it with a corpus of 4,000 files (each about 2 kB).

Am I missing something?

I am using 32-bit Python (on Windows 7, 64-bit OS, Core Quad CPU, 8 GB RAM). Do I really need 64-bit Python for a corpus of this size?

+1

Have you tried 'gensim' or similar libraries that have already optimized their tf-idf code? http://radimrehurek.com/gensim/ – alvas 2013-04-09 14:43:07

+0

4000 files is a fairly small corpus. What you need is a [sparse](https://en.wikipedia.org/wiki/Sparse_matrix) representation. Pandas has those, and so do Gensim and scikit-learn. – 2013-04-09 15:03:53

+0

I thought 'pd.get_dummies(df_column)' could do the job. Maybe I am missing something about the document-term matrix. – 2015-11-06 05:00:04
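
For reference, the gensim route suggested in the comments above keeps the counts in a sparse bag-of-words form the whole time; a minimal sketch (the toy documents and variable names here are made up for illustration, not taken from the question):

from gensim import corpora, matutils
import pandas as pd

# toy, pre-tokenized documents standing in for corpus.words(fileid)
texts = [['john', 'and', 'bob', 'are', 'brothers'],
         ['john', 'went', 'to', 'the', 'store']]

dictionary = corpora.Dictionary(texts)                # word <-> id mapping
bow = [dictionary.doc2bow(text) for text in texts]    # sparse (id, count) pairs per document
tdm = matutils.corpus2csc(bow)                        # scipy sparse matrix, terms x documents

# only densify for inspection; keep it sparse for large corpora
df = pd.DataFrame(tdm.toarray(),
                  index=[dictionary[i] for i in range(len(dictionary))])
print(df)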

Answers

19

Thanks to Radim and Larsmans. My goal was to have a DTM like the one you get in R's tm package. I decided to use scikit-learn, partly inspired by this blog entry. This is the code I came up with.

I post it here in the hope that someone else will find it useful.

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

def fn_tdm_df(docs, xColNames=None, **kwargs):
    '''Create a term-document matrix as a pandas DataFrame.
    With **kwargs you can pass arguments of CountVectorizer.
    If xColNames is given, the DataFrame gets those column names.'''

    # initialize the vectorizer
    vectorizer = CountVectorizer(**kwargs)
    x1 = vectorizer.fit_transform(docs)
    # create the DataFrame: terms as rows, documents as columns
    df = pd.DataFrame(x1.toarray().transpose(), index=vectorizer.get_feature_names())
    if xColNames is not None:
        df.columns = xColNames

    return df

To use it on a list of the texts in a directory:

DIR = 'C:/Data/'

def fn_CorpusFromDIR(xDIR):
    '''Create a corpus from a directory.
    Input:  a directory path
    Output: a dictionary with
        the file-derived column names ['ColNames']
        the text of the documents     ['docs']'''
    import os
    Res = dict(docs=[open(os.path.join(xDIR, f)).read() for f in os.listdir(xDIR)],
               ColNames=['P_' + f[0:6] for f in os.listdir(xDIR)])  # a list, so it also works on Python 3
    return Res

Create the DataFrame:

d1 = fn_tdm_df(docs=fn_CorpusFromDIR(DIR)['docs'],
               xColNames=fn_CorpusFromDIR(DIR)['ColNames'],
               stop_words=None, charset_error='replace')
# note: charset_error was renamed to decode_error in later scikit-learn releases
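
A note on the original MemoryError: it is the call to `.toarray()` that densifies the matrix, which is what blows up on a large vocabulary. With a recent pandas and scikit-learn (an assumption, not part of the original answer) the counts can stay sparse inside the DataFrame:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

docs = ['john and bob are brothers', 'john went to the store']   # toy documents

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)             # scipy sparse matrix, documents x terms

# build the DataFrame directly from the sparse matrix instead of calling .toarray()
dtm = pd.DataFrame.sparse.from_spmatrix(
    X.T,                                       # transpose: terms as rows, documents as columns
    index=vectorizer.get_feature_names_out()   # use get_feature_names() on older scikit-learn
)
print(dtm)
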
22

I know the OP wanted to create a TDM in NLTK, but the textmining package (pip install textmining) makes it dead simple:

import textmining 

def termdocumentmatrix_example(): 
    # Create some very short sample documents 
    doc1 = 'John and Bob are brothers.' 
    doc2 = 'John went to the store. The store was closed.' 
    doc3 = 'Bob went to the store too.' 
    # Initialize class to create term-document matrix 
    tdm = textmining.TermDocumentMatrix() 
    # Add the documents 
    tdm.add_doc(doc1) 
    tdm.add_doc(doc2) 
    tdm.add_doc(doc3) 
    # Write out the matrix to a csv file. Note that setting cutoff=1 means 
    # that words which appear in 1 or more documents will be included in 
    # the output (i.e. every word will appear in the output). The default 
    # for cutoff is 2, since we usually aren't interested in words which 
    # appear in a single document. For this example we want to see all 
    # words however, hence cutoff=1. 
    tdm.write_csv('matrix.csv', cutoff=1) 
    # Instead of writing out the matrix you can also access its rows directly. 
    # Let's print them to the screen. 
    for row in tdm.rows(cutoff=1): 
      print row 

termdocumentmatrix_example() 

Output:

['and', 'the', 'brothers', 'to', 'are', 'closed', 'bob', 'john', 'was', 'went', 'store', 'too'] 
[1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0] 
[0, 2, 0, 1, 0, 1, 0, 1, 1, 1, 2, 0] 
[0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1] 
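
If the end goal is still a pandas DataFrame, the CSV written above can simply be read back in; a small sketch assuming the matrix.csv produced by the example exists:

import pandas as pd

# the first row of matrix.csv is the vocabulary, the remaining rows are per-document counts
dtm = pd.read_csv('matrix.csv')
print(dtm.T)   # transpose to get terms as rows, documents as columns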

Alternatively, one can use pandas and sklearn [source]:

import pandas as pd 
from sklearn.feature_extraction.text import CountVectorizer 

docs = ['why hello there', 'omg hello pony', 'she went there? omg'] 
vec = CountVectorizer() 
X = vec.fit_transform(docs) 
df = pd.DataFrame(X.toarray(), columns=vec.get_feature_names()) 
print(df) 

Output:

   hello  omg  pony  she  there  went  why
0      1    0     0    0      1     0    1
1      1    1     1    0      0     0    0
2      0    1     0    1      1     1    0
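
If tf-idf weights are wanted instead of raw counts (as the gensim comment on the question hints at), TfidfVectorizer is a drop-in replacement for CountVectorizer; a minimal sketch, not part of the original answer:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['why hello there', 'omg hello pony', 'she went there? omg']

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                    # sparse matrix of tf-idf weights
df = pd.DataFrame(X.toarray(), columns=vec.get_feature_names_out())
print(df.round(2))
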
+1

I get an error when running the code: import stemmer ImportError: No module named 'stemmer'. How can I fix it? I already tried pip install stemmer. – 2017-02-08 10:03:12

+0

What version of Python are you using? It's possible that a stemmer module import was introduced into the textmining package. I just ran 'pip install textmining' and then ran the code above on 2.7.9 and got the expected output. – duhaime 2017-02-08 12:34:44

+0

I'm using Python 3.5, Anaconda, Windows 10. I ran 'pip install textmining', then copied and ran the code above. – 2017-02-08 13:33:44

0

An alternative approach using tokens and a DataFrame:

import nltk
import pandas as pd
# nltk.download('punkt') if the tokenizer data is not installed yet
from urllib import request

url = "http://www.gutenberg.org/files/2554/2554-0.txt"
response = request.urlopen(url)
raw = response.read().decode('utf8')
type(raw)

tokens = nltk.word_tokenize(raw)
type(tokens)

tokens[1:10]
['Project', 
'Gutenberg', 
'EBook', 
'of', 
'Crime', 
'and', 
'Punishment', 
',', 
'by'] 

tokens2=pd.DataFrame(tokens) 
tokens2.columns=['Words'] 
tokens2.head() 


Words 
0 The 
1 Project 
2 Gutenberg 
3 EBook 
4 of 

tokens2.Words.value_counts().head()
,      16178
.       9589
the     7436
and     6284
to      5278
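
To turn a token DataFrame like the one above into an actual term-document matrix across several documents, one option is pd.crosstab; a sketch with made-up toy documents (not part of the original answer):

import nltk
import pandas as pd

# toy documents standing in for several downloaded texts
docs = {'doc1': 'John and Bob are brothers.',
        'doc2': 'John went to the store. The store was closed.'}

rows = [(name, word.lower())
        for name, text in docs.items()
        for word in nltk.word_tokenize(text)]
tokens = pd.DataFrame(rows, columns=['Doc', 'Words'])

# cross-tabulate counts: terms as rows, documents as columns
tdm = pd.crosstab(tokens['Words'], tokens['Doc'])
print(tdm)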