TermDocumentMatrix in R - only creating 1-grams

I just started using the tm package in R and can't seem to solve this problem, even though my tokenizer functions appear to work correctly:
library(tm)    # text mining framework
library(RWeka) # provides NGramTokenizer

uniTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=1, max=1))
biTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=2, max=2))
triTokenizer <- function(x) NGramTokenizer(x, Weka_control(min=3, max=3))
uniTDM <- TermDocumentMatrix(corpus, control=list(tokenize = uniTokenizer))
biTDM <- TermDocumentMatrix(corpus, control=list(tokenize = biTokenizer))
triTDM <- TermDocumentMatrix(corpus, control=list(tokenize = triTokenizer))
But when I try to pull 2-grams from biTDM, only 1-grams come out...
findFreqTerms(biTDM, 50)
[1] "after" "and" "most" "the" "were" "years" "love"
[8] "you" "all" "also" "been" "did" "from" "get"
Meanwhile, the 2-gram tokenizer itself seems to be working fine:
x <- biTokenizer(corpus)
head(x)
[1] "c in" "in the" "the years"
[4] "years thereafter" "thereafter most" "most of"
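One possible explanation, sketched below as an assumption rather than a confirmed diagnosis: recent versions of tm build a `SimpleCorpus` from `Corpus(VectorSource(...))`, and `TermDocumentMatrix` silently ignores a custom `tokenize` function for a `SimpleCorpus`; a `VCorpus` does respect it. On some platforms tm's parallel backend has also been reported to drop custom tokenizers, which `options(mc.cores = 1)` works around. Here `docs` is a placeholder character vector standing in for the original corpus source:

```r
library(tm)
library(RWeka)

# Work around the parallel backend dropping custom tokenizers on some systems
options(mc.cores = 1)

# Use VCorpus, not Corpus/SimpleCorpus, so the custom tokenizer is honored.
# 'docs' is a hypothetical character vector of documents.
corpus <- VCorpus(VectorSource(docs))

biTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
biTDM <- TermDocumentMatrix(corpus, control = list(tokenize = biTokenizer))

# With a VCorpus, the matrix terms should now be 2-grams
findFreqTerms(biTDM, 50)
```

If the terms are still unigrams, inspecting `class(corpus)` is a quick check: it should report `VCorpus`, not `SimpleCorpus`.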
Including a [minimal reproducible example](https://stackoverflow.com/questions/5963269/how-to-make-a-great-r-reproducible-example) in your question will increase your chances of getting an answer. – jsb