60

I was following a tutorial that was available at Part 1 & Part 2. Unfortunately the author didn't have time for the final section, which involved using cosine similarity to actually find the similarity between two documents. I followed the examples in the article with the help of the following link from stackoverflow: Python: tf-idf-cosine: to find document similarity. I have included the code mentioned in that link just to make life easy for readers.

from sklearn.feature_extraction.text import CountVectorizer 
from sklearn.feature_extraction.text import TfidfTransformer 
from nltk.corpus import stopwords 
import numpy as np 
import numpy.linalg as LA 

train_set = ["The sky is blue.", "The sun is bright."] #Documents 
test_set = ["The sun in the sky is bright."] #Query 
stopWords = stopwords.words('english') 

vectorizer = CountVectorizer(stop_words = stopWords) 
#print vectorizer 
transformer = TfidfTransformer() 
#print transformer 

trainVectorizerArray = vectorizer.fit_transform(train_set).toarray() 
testVectorizerArray = vectorizer.transform(test_set).toarray() 
print 'Fit Vectorizer to train set', trainVectorizerArray 
print 'Transform Vectorizer to test set', testVectorizerArray 

transformer.fit(trainVectorizerArray) 
print 
print transformer.transform(trainVectorizerArray).toarray() 

transformer.fit(testVectorizerArray) 
print 
tfidf = transformer.transform(testVectorizerArray) 
print tfidf.todense() 

As a result of the above code I get the following matrices:

Fit Vectorizer to train set [[1 0 1 0] 
[0 1 0 1]] 
Transform Vectorizer to test set [[0 1 1 1]] 

[[ 0.70710678 0.   0.70710678 0.  ] 
[ 0.   0.70710678 0.   0.70710678]] 

[[ 0.   0.57735027 0.57735027 0.57735027]] 

I am not sure how to use this output to calculate cosine similarity. I know how to implement cosine similarity for two vectors of the same length, but here I am not sure how to identify the two vectors.

+3

For every vector in trainVectorizerArray, you have to find the cosine similarity with the vector in testVectorizerArray. – excray

+0

@excray Thanks a lot. Can you help me work through this? How exactly should I do that? –

+0

@excray But I do have one small question: the tf*idf calculation is then of no use for this, since I'm not using the final results shown in the matrix. –

Answers

13

With the help of @excray's comment, I managed to figure out the answer. What we need to do is actually write a simple for loop to iterate over the two arrays that represent the train data and the test data.

First, implement a simple lambda function to hold the formula for the cosine calculation:

cosine_function = lambda a, b : round(np.inner(a, b)/(LA.norm(a)*LA.norm(b)), 3) 
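Applied, for example, to the first train vector and the test vector from the matrices above, this gives a quick sanity check:

print cosine_function(np.array([1, 0, 1, 0]), np.array([0, 1, 1, 1])) # prints 0.408 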

Then just write a simple for loop to iterate over the vectors; the logic is, as the comment put it: "For every vector in trainVectorizerArray, you have to find the cosine similarity with the vector in testVectorizerArray."

from sklearn.feature_extraction.text import CountVectorizer 
from sklearn.feature_extraction.text import TfidfTransformer 
from nltk.corpus import stopwords 
import numpy as np 
import numpy.linalg as LA 

train_set = ["The sky is blue.", "The sun is bright."] #Documents 
test_set = ["The sun in the sky is bright."] #Query 
stopWords = stopwords.words('english') 

vectorizer = CountVectorizer(stop_words = stopWords) 
#print vectorizer 
transformer = TfidfTransformer() 
#print transformer 

trainVectorizerArray = vectorizer.fit_transform(train_set).toarray() 
testVectorizerArray = vectorizer.transform(test_set).toarray() 
print 'Fit Vectorizer to train set', trainVectorizerArray 
print 'Transform Vectorizer to test set', testVectorizerArray 
cx = lambda a, b : round(np.inner(a, b)/(LA.norm(a)*LA.norm(b)), 3) 

for vector in trainVectorizerArray: 
    print vector 
    for testV in testVectorizerArray: 
     print testV 
     cosine = cx(vector, testV) 
     print cosine 

transformer.fit(trainVectorizerArray) 
print 
print transformer.transform(trainVectorizerArray).toarray() 

transformer.fit(testVectorizerArray) 
print 
tfidf = transformer.transform(testVectorizerArray) 
print tfidf.todense() 

Here is the output:

Fit Vectorizer to train set [[1 0 1 0] 
[0 1 0 1]] 
Transform Vectorizer to test set [[0 1 1 1]] 
[1 0 1 0] 
[0 1 1 1] 
0.408 
[0 1 0 1] 
[0 1 1 1] 
0.816 

[[ 0.70710678 0.   0.70710678 0.  ] 
[ 0.   0.70710678 0.   0.70710678]] 

[[ 0.   0.57735027 0.57735027 0.57735027]] 
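A side note, with a minimal sketch that reuses the transformer and the cx lambda from the code above: applying the cosine to the TF-IDF rows printed above yields the same 0.408 and 0.816, because cosine similarity is scale-invariant and, with only two training documents, every term receives the same IDF weight.

tfidf_train = transformer.fit_transform(trainVectorizerArray).toarray() 
tfidf_test = transformer.transform(testVectorizerArray).toarray() 

for trainV in tfidf_train: 
    for testV in tfidf_test: 
        print cx(trainV, testV) # prints 0.408, then 0.816 -- same as for the raw counts 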
+1

Nice.. I am also learning from the start, and your question and answer are the easiest to follow. I think you can use np.corrcoef() instead of your home-grown method. – wbg

+1

Then again, your method allows any norm... which is cool... – wbg

+1

@spicyramen It rounds the value to 3 decimal places –

113

First up, if you want to extract count features and apply TF-IDF normalization and row-wise Euclidean normalization, you can do it in one operation with TfidfVectorizer:

>>> from sklearn.feature_extraction.text import TfidfVectorizer 
>>> from sklearn.datasets import fetch_20newsgroups 
>>> twenty = fetch_20newsgroups() 

>>> tfidf = TfidfVectorizer().fit_transform(twenty.data) 
>>> tfidf 
<11314x130088 sparse matrix of type '<type 'numpy.float64'>' 
    with 1787553 stored elements in Compressed Sparse Row format> 

To find the cosine distances of one document (e.g. the first in the dataset) and all of the others, you just need to compute the dot products of the first vector with all of the others, since the tfidf vectors are already row-normalized. The scipy sparse matrix API is a bit weird (not as flexible as dense N-dimensional numpy arrays). To get the first vector you need to slice the matrix row-wise to get a submatrix with a single row:

>>> tfidf[0:1] 
<1x130088 sparse matrix of type '<type 'numpy.float64'>' 
    with 89 stored elements in Compressed Sparse Row format> 

scikit-learn already provides pairwise metrics (a.k.a. kernels in machine-learning parlance) that work for both dense and sparse representations of vector collections. In this case we need a dot product, which is also known as the linear kernel:

>>> from sklearn.metrics.pairwise import linear_kernel 
>>> cosine_similarities = linear_kernel(tfidf[0:1], tfidf).flatten() 
>>> cosine_similarities 
array([ 1.  , 0.04405952, 0.11016969, ..., 0.04433602, 
    0.04457106, 0.03293218]) 

Hence, to find the top related documents, we can use argsort and some negative array slicing (the most related documents have the highest cosine similarity values and therefore sit at the end of the sorted-indices array; note the slice below returns four entries: the query itself plus the top three matches):

>>> related_docs_indices = cosine_similarities.argsort()[:-5:-1] 
>>> related_docs_indices 
array([ 0, 958, 10576, 3277]) 
>>> cosine_similarities[related_docs_indices] 
array([ 1.  , 0.54967926, 0.32902194, 0.2825788 ]) 

The first result is a sanity check: we find the query document itself as the most similar document, with a cosine similarity score of 1, and the following text:

>>> print twenty.data[0] 
From: [email protected] (where's my thing) 
Subject: WHAT car is this!? 
Nntp-Posting-Host: rac3.wam.umd.edu 
Organization: University of Maryland, College Park 
Lines: 15 

I was wondering if anyone out there could enlighten me on this car I saw 
the other day. It was a 2-door sports car, looked to be from the late 60s/ 
early 70s. It was called a Bricklin. The doors were really small. In addition, 
the front bumper was separate from the rest of the body. This is 
all I know. If anyone can tellme a model name, engine specs, years 
of production, where this car is made, history, or whatever info you 
have on this funky looking car, please e-mail. 

Thanks, 
- IL 
    ---- brought to you by your neighborhood Lerxst ---- 

The second most similar document is a reply that quotes the original message and hence has many words in common:

>>> print twenty.data[958] 
From: [email protected] (Robert Seymour) 
Subject: Re: WHAT car is this!? 
Article-I.D.: reed.1993Apr21.032905.29286 
Reply-To: [email protected] 
Organization: Reed College, Portland, OR 
Lines: 26 

In article <[email protected]> [email protected] (where's my 
thing) writes: 
> 
> I was wondering if anyone out there could enlighten me on this car I saw 
> the other day. It was a 2-door sports car, looked to be from the late 60s/ 
> early 70s. It was called a Bricklin. The doors were really small. In 
addition, 
> the front bumper was separate from the rest of the body. This is 
> all I know. If anyone can tellme a model name, engine specs, years 
> of production, where this car is made, history, or whatever info you 
> have on this funky looking car, please e-mail. 

Bricklins were manufactured in the 70s with engines from Ford. They are rather 
odd looking with the encased front bumper. There aren't a lot of them around, 
but Hemmings (Motor News) ususally has ten or so listed. Basically, they are a 
performance Ford with new styling slapped on top. 

> ---- brought to you by your neighborhood Lerxst ---- 

Rush fan? 

-- 
Robert Seymour    [email protected] 
Physics and Philosophy, Reed College (NeXTmail accepted) 
Artificial Life Project   Reed College 
Reed Solar Energy Project (SolTrain) Portland, OR 
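As a hedged extension of this answer (the query string is illustrative): by keeping a reference to the fitted vectorizer, you can also project an arbitrary new query string into the same TF-IDF space, instead of only comparing documents already in the corpus:

from sklearn.feature_extraction.text import TfidfVectorizer 
from sklearn.metrics.pairwise import linear_kernel 
from sklearn.datasets import fetch_20newsgroups 

twenty = fetch_20newsgroups() 

# keep the fitted vectorizer so that new text can be transformed later 
vectorizer = TfidfVectorizer() 
tfidf = vectorizer.fit_transform(twenty.data) 

# project an out-of-corpus query into the same TF-IDF space 
query_vec = vectorizer.transform(["what car is this"]) # 1 x n_features 
cosine_similarities = linear_kernel(query_vec, tfidf).flatten() 
print cosine_similarities.argsort()[:-6:-1] # indices of the 5 best matches 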
+3

Excellent answer! Thanks olivier! –

+0

Follow-up question: if I have a huge number of documents, the linear_kernel function in step 2 can be the performance bottleneck, since it is linear in the number of rows. Any thoughts on how to bring it down to sublinear? – Shuo

+0

You can use the "more like this" queries of Elasticsearch and Solr, which should yield approximate answers with a sublinear scalability profile. – ogrisel

15

I know it's an old post, but I tried the http://scikit-learn.sourceforge.net/stable/ package. Here is my code for finding the cosine similarity. The question was how you compute the cosine similarity with this package, and here is my code for that:

from sklearn.feature_extraction.text import CountVectorizer 
from sklearn.metrics.pairwise import cosine_similarity 
from sklearn.feature_extraction.text import TfidfVectorizer 

f = open("/root/Myfolder/scoringDocuments/doc1") 
doc1 = str.decode(f.read(), "UTF-8", "ignore") 
f = open("/root/Myfolder/scoringDocuments/doc2") 
doc2 = str.decode(f.read(), "UTF-8", "ignore") 
f = open("/root/Myfolder/scoringDocuments/doc3") 
doc3 = str.decode(f.read(), "UTF-8", "ignore") 

train_set = ["president of India",doc1, doc2, doc3] 

tfidf_vectorizer = TfidfVectorizer() 
tfidf_matrix_train = tfidf_vectorizer.fit_transform(train_set) #finds the tfidf score with normalization 
print "cosine scores ==> ",cosine_similarity(tfidf_matrix_train[0:1], tfidf_matrix_train) #here the first element of tfidf_matrix_train is matched with other three elements 

The code here assumes that the query is the first element of train_set, and doc1, doc2 and doc3 are the documents I want to rank with the help of cosine similarity; then I can use this code.

Also, the tutorial referred to in the question was very useful. Here are all of its parts: part-I, part-II, part-III.

The output will be as follows:

[[ 1.   0.07102631 0.02731343 0.06348799]] 

Here the 1 shows that the query is matched with itself, and the other three are the scores for matching the query with the respective documents.
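If you want a ranking rather than the raw scores, a small follow-up sketch (building on the variables above) is to argsort the scores in descending order and drop the query itself:

import numpy as np 

scores = cosine_similarity(tfidf_matrix_train[0:1], tfidf_matrix_train).flatten() 
ranking = np.argsort(scores)[::-1][1:] # descending order, skipping index 0 (the query) 
print "ranking ==> ", ranking # [1 3 2] for the scores above 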

+1

cosine_similarity(tfidf_matrix_train[0:1], tfidf_matrix_train) — what if that 1 is changed to many thousands? How do we handle that? – ashim888

8

This should help you.

from sklearn.feature_extraction.text import TfidfVectorizer 
from sklearn.metrics.pairwise import cosine_similarity 

tfidf_vectorizer = TfidfVectorizer() 
tfidf_matrix = tfidf_vectorizer.fit_transform(train_set) # train_set: your list of documents, with the query as its last element 
print tfidf_matrix 
length = len(train_set) # number of documents; the last one is used as the query below 
cosine = cosine_similarity(tfidf_matrix[length-1], tfidf_matrix) 
print cosine 

The output will be:

[[ 0.34949812 0.81649658 1.  ]] 
+3

How do you get length? – spicyramen

12

Let me give you another tutorial written by me. It answers your question, but also explains why we are doing some of the things. I also tried to keep it concise.

So you have a list_of_documents which is just an array of strings, and another document which is just a string. You need to find the document from list_of_documents that is most similar to document.

Let's combine them together: documents = list_of_documents + [document]

Let's start with the dependencies. It will become clear why we use each of them.

from nltk.corpus import stopwords 
import string 
from nltk.tokenize import wordpunct_tokenize as tokenize 
from nltk.stem.porter import PorterStemmer 
from sklearn.feature_extraction.text import TfidfVectorizer 
from scipy.spatial.distance import cosine 

One approach that can be used is a bag-of-words approach, where we treat each word in the document independently of the others and just throw them all together into one big bag. From one point of view this loses a lot of information (such as how the words are connected), but from another point of view it makes the model simple.

In English and in any other human language there are a lot of "useless" words like "a", "the", "in", which are so common that they do not carry much meaning. They are called stop words, and it is a good idea to remove them. Another thing one can notice is that words like "analyze", "analyzer", "analysis" are really similar. They have a common root and can all be converted to just one word. This process is called stemming, and there exist different stemmers that differ in speed, aggressiveness and so on. So we transform each of the documents into a list of stems without stop words. We also discard all the punctuation.

porter = PorterStemmer() 
stop_words = set(stopwords.words('english')) 

modified_arr = [[porter.stem(i.lower()) for i in tokenize(d.translate(None, string.punctuation)) if i.lower() not in stop_words] for d in documents] 

So how does this bag of words help us? Imagine we have three bags: [a, b, c], [a, c, a] and [b, c, d]. We can convert them to vectors in the basis [a, b, c, d], so we end up with the vectors [1, 1, 1, 0], [2, 0, 1, 0] and [0, 1, 1, 1] (a concrete sketch follows the code below). The same thing happens with our documents (only the vectors will be much longer). Now we can see that we removed a lot of words and stemmed others to decrease the dimensionality of the vectors. There is an interesting observation here: longer documents will have far more positive elements than shorter ones, which is why it is nice to normalize the vectors. This is called term frequency (TF); people also use additional information about how often a word is used in other documents, the inverse document frequency (IDF). Together we have the metric TF-IDF, which comes in a couple of flavors. This can be achieved with one line in sklearn :-)

modified_doc = [' '.join(i) for i in modified_arr] # this is only to convert our list of lists to list of strings that vectorizer uses. 
tf_idf = TfidfVectorizer().fit_transform(modified_doc) 
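To make the toy bag example above concrete, here is a minimal sketch (ordinary words stand in for a, b, c, d, because CountVectorizer's default tokenizer drops single-character tokens):

from sklearn.feature_extraction.text import CountVectorizer 

# the three bags [a, b, c], [a, c, a], [b, c, d], with 
# apple/banana/cherry/date standing in for a/b/c/d 
bags = ["apple banana cherry", "apple cherry apple", "banana cherry date"] 
print CountVectorizer().fit_transform(bags).toarray() 
# [[1 1 1 0] 
# [2 0 1 0] 
# [0 1 1 1]] 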

Actually the vectorizer allows you to do a lot of things, like removing stop words and lowercasing. I have done them in a separate step only because sklearn does not have non-English stop words, but nltk does.

So we have all the vectors computed. The final step is to find which one is most similar to the last one. There are various ways to achieve that; one of them is Euclidean distance, which is not so great for the reason discussed here. Another approach is cosine similarity. We iterate over all the documents and calculate the cosine similarity between each document and the last one:

l = len(documents) - 1 
minimum = (1, None) 
for i in xrange(l): 
    # cosine() from scipy is a distance, so the smallest value is the best match; 
    # tf_idf[l] is the query document (the last row of the matrix) 
    minimum = min((cosine(tf_idf[i].todense(), tf_idf[l].todense()), i), minimum) 
print minimum 

Now minimum will hold information about the best document and its score.
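To actually retrieve the best match, a small usage sketch under the same assumptions:

best_score, best_index = minimum # best_score is a cosine *distance*, so smaller is better 
print documents[best_index] # the document most similar to the query 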

+2

Sigh, this is not what the OP asked for: searching for the best document given a query, not the "best document" in a corpus. Please don't do it, people like me will waste time trying to use your example for the OP's task and get dragged into matrix-resizing madness. – minerals

+0

How is it different? The idea is exactly the same. Extract the features, compute the cosine distance between the query and the documents. –

+0

You are computing it on matrices of the same shape. Try a different example where you have a query matrix of a different size, like the OP's train set and test set. I couldn't modify your code so that it would work. – minerals