
Same values but different results? removeSparseTerms (R)

First of all, here is sample data to reproduce the problem I am having; I will try to explain it below: https://drive.google.com/file/d/0B4RCdYlVF8otUll6V2x0cDJORGc/view?usp=sharing

The problem is that I get different results from removeSparseTerms even though I pass it the same value. It seems to defy human logic, or at least mine. I have this function:

generateTDM <- function (Room_name, dest.train, RST){ 
      s.dir <- sprintf("%s/%s", dest.train, Room_name) 
      s.cor <- Corpus(DirSource(directory = s.dir, pattern = "txt", encoding = "UTF-8"))     # Build a corpus from the already-cleaned txt files. 
      s.tdm <- TermDocumentMatrix(s.cor, control = list(bounds = list(local = c(2, Inf)), tokenize = TrigramTokenizer))      # Build a term-document matrix from the corpus, taking unigrams, bigrams and trigrams into account. 
      s.tdm <- removeSparseTerms(s.tdm, RST)               # Keep the terms that appear in (1 - RST)% of the files; remove the rest. 
     } 
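
For reference, here is a minimal sketch of what removeSparseTerms() does with a given sparsity cutoff, on a toy corpus (made-up documents, just for illustration, not my data):

library(tm) 
docs <- c("apple banana", "apple cherry", "apple banana cherry", "banana") 
toy.tdm <- TermDocumentMatrix(Corpus(VectorSource(docs))) 
# A term is kept only if it occurs in more than ncol * (1 - sparse) documents; 
# here, with 4 documents and sparse = 0.5, the cutoff is 2, so "cherry" (2 docs) is dropped. 
inspect(removeSparseTerms(toy.tdm, 0.5)) 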

Then, when I call the function like this:

tdm.train <- lapply(Room_name, generateTDM, dest.train, RST[p]) 

I get different output from the function depending on the other elements of the vector in which the variable RST sits. That is, even though the value is the same, I get different results.

For example:

Case 1:

RST <- seq(0.45, 0.6, 0.05) 
p <- 4 

So I have RST = (0.45, 0.5, 0.55, 0.6), and RST[p] is 0.6.

The results in this case:

> tdm.train 
     [[1]] 
    <<TermDocumentMatrix (terms: 84, documents: 51)>> 
    Non-/sparse entries: 2451/1833 
    Sparsity   : 43% 
    Maximal term length: 10 
    Weighting   : term frequency (tf) 

    [[2]] 
    <<TermDocumentMatrix (terms: 82, documents: 52)>> 
    Non-/sparse entries: 2409/1855 
    Sparsity   : 44% 
    Maximal term length: 11 
    Weighting   : term frequency (tf) 

    [[3]] 
    <<TermDocumentMatrix (terms: 68, documents: 51)>> 
    Non-/sparse entries: 1926/1542 
    Sparsity   : 44% 
    Maximal term length: 13 
    Weighting   : term frequency (tf) 

    [[4]] 
    <<TermDocumentMatrix (terms: 36, documents: 48)>> 
    Non-/sparse entries: 985/743 
    Sparsity   : 43% 
    Maximal term length: 10 
    Weighting   : term frequency (tf) 

    [[5]] 
    <<TermDocumentMatrix (terms: 48, documents: 50)>> 
    Non-/sparse entries: 1295/1105 
    Sparsity   : 46% 
    Maximal term length: 10 
    Weighting   : term frequency (tf) 

    [[6]] 
    <<TermDocumentMatrix (terms: 27, documents: 50)>> 
    Non-/sparse entries: 756/594 
    Sparsity   : 44% 
    Maximal term length: 8 
    Weighting   : term frequency (tf) 

Case 2:

RST <- seq(0.45, 0.8, 0.05) 
p <- 4 

Now I have RST = (0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8), so RST[p] is the same as before (0.6).

So why do I get different results? I cannot understand it.

> tdm.train 
[[1]] 
<<TermDocumentMatrix (terms: 84, documents: 51)>> 
Non-/sparse entries: 2451/1833 
Sparsity   : 43% 
Maximal term length: 10 
Weighting   : term frequency (tf) 

[[2]] 
<<TermDocumentMatrix (terms: 82, documents: 52)>> 
Non-/sparse entries: 2409/1855 
Sparsity   : 44% 
Maximal term length: 11 
Weighting   : term frequency (tf) 

[[3]] 
<<TermDocumentMatrix (terms: 68, documents: 51)>> 
Non-/sparse entries: 1926/1542 
Sparsity   : 44% 
Maximal term length: 13 
Weighting   : term frequency (tf) 

[[4]] 
<<TermDocumentMatrix (terms: 36, documents: 48)>> 
Non-/sparse entries: 985/743 
Sparsity   : 43% 
Maximal term length: 10 
Weighting   : term frequency (tf) 

[[5]] 
<<TermDocumentMatrix (terms: 57, documents: 50)>> 
Non-/sparse entries: 1475/1375 
Sparsity   : 48% 
Maximal term length: 10 
Weighting   : term frequency (tf) 

[[6]] 
<<TermDocumentMatrix (terms: 34, documents: 50)>> 
Non-/sparse entries: 896/804 
Sparsity   : 47% 
Maximal term length: 8 
Weighting   : term frequency (tf) 

It is strange, right? If the value of RST is the same, why are the results of removeSparseTerms for the last two directories different in each case? Please help me; not knowing the cause is killing me.

Thank you very much, and have a nice day.


Reproducible example, based on the update to the OP:

library(tm) 
library(RWeka) 
download.file("https://docs.google.com/uc?authuser=0&id=0B4RCdYlVF8otUll6V2x0cDJORGc&export=download", tf <- tempfile(fileext = ".zip"), mode = "wb") 
unzip(tf, exdir = tempdir()) 
TrigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 1, max = 3)) 
generateTDM <- function (Room_name, dest.train, rst){ 
    s.dir <- sprintf("%s/%s", dest.train, Room_name) 
    s.cor <- Corpus(DirSource(directory = s.dir, pattern = "txt", encoding = "UTF-8"))     # Build a corpus from the already-cleaned txt files. 
    s.tdm <- TermDocumentMatrix(s.cor, control = list(bounds = list(local = c(2, Inf)), tokenize = TrigramTokenizer))      # Build a term-document matrix from the corpus, taking unigrams, bigrams and trigrams into account. 
    t <- table(s.tdm$i) > (s.tdm$ncol * (1 - rst)) # from tm::removeSparseTerms() 
    termIndex <- as.numeric(names(t[t])) 
    return(s.tdm[termIndex, ]) 
} 
dest.train <- file.path(tempdir(), "stackoverflow", "TrainDocs") 
Room_name <- "Venus" 
p <- 4 
RST1 <- seq(0.45, 0.6, 0.05) 
RST2 <- seq(0.45, 0.8, 0.05) 
RST2[p] 
# [1] 0.6 
RST1[p] 
# [1] 0.6 
identical(RST2[p], RST1[p]) 
# [1] FALSE # ?!? 

lapply(Room_name, generateTDM, dest.train, RST1[p]) 
# <<TermDocumentMatrix (terms: 48, documents: 50)>> 

lapply(Room_name, generateTDM, dest.train, RST2[p]) 
# <<TermDocumentMatrix (terms: 57, documents: 50)>> # ?!? 
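
The divergence can be pinned down to the threshold used inside removeSparseTerms(), i.e. the ncol * (1 - rst) comparison shown in the function above. A small check, assuming the 50-document "Venus" corpus from the output above:

# The two "equal" 0.6 values differ by about 1e-16, so the two thresholds 
# differ slightly as well; a term occurring in a number of documents right 
# at the boundary can be kept in one case and dropped in the other. 
print(50 * (1 - RST1[p]), digits = 22) 
print(50 * (1 - RST2[p]), digits = 22) 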

Imho it would be better to highlight the differences and provide sample data for reproduction, instead of stressing "I don't know... I can't understand it" several times. :-) – lukeA


Yes, you're right. I will prepare a zip file with the relevant documents and the part of the script and attach it here as soon as possible. Sorry. –


Done. I attached the sample data and lowered the stress level in the sentences a bit. :) –

Answer


The problem seems to be related to the well-known FAQ 7.31, "Why doesn't R think these numbers are equal?":

"The only numbers that can be represented exactly in R's numeric type are integers and fractions whose denominator is a power of 2. All other numbers are internally rounded to (typically) 53 binary digits accuracy. As a result, two floating point numbers will not reliably be equal unless they have been computed by the same algorithm, and not always even then."

Given

(x <- seq(0.45, 0.6, 0.05)) 
# [1] 0.45 0.50 0.55 0.60 
(y <- seq(0.45, 0.8, 0.05)) 
# [1] 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 

then

x==y 
# [1] TRUE TRUE TRUE FALSE FALSE FALSE FALSE FALSE 
x[4]==y[4] 
# [1] FALSE 
x[4]-y[4] 
# [1] -1.110223e-16 
x[3]-y[3] 
# [1] 0 

And since

MASS::as.fractions(x) 
# [1] 9/20 1/2 11/20 3/5 

I guess the two .5's (the only fractions with a power-of-2 denominator) are the reliable ones here. Hence, your function may produce different results.
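
As an aside (not part of the original answer), a tolerance-based comparison treats the two values as equal, and rounding the cutoff before use is one way to make both calls receive a bit-identical number:

isTRUE(all.equal(x[4], y[4])) 
# [1] TRUE 
identical(round(x[4], 2), round(y[4], 2)) 
# [1] TRUE 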


Thank you very much, lukeA. After your answer I managed to solve the problem by creating the RST vector with integer values: 'RST <- seq(45, 60, 5)', and dividing the RST value by 100 when calling the function: 'tdm.train <- lapply(Room_name, TDM_roomRST, dest.train, RST[p]/100)'. This way the results are exactly the same in each case. Seriously, I really appreciate your help and your hint. I wish you all the best. Regards and thanks. –
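
For completeness, a sketch of that workaround (TDM_roomRST appears to be a renamed version of generateTDM above; generateTDM is used here under that assumption):

RST <- seq(45, 60, 5)   # integer steps are represented exactly 
p <- 4 
tdm.train <- lapply(Room_name, generateTDM, dest.train, RST[p] / 100) 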
