
How can I use NLTK's word_tokenize on a pandas DataFrame of Twitter data? This is the code I use for semantic analysis of tweets:

import pandas as pd 
import datetime 
import numpy as np 
import re 
from nltk.tokenize import word_tokenize 
from nltk.corpus import stopwords 
from nltk.stem.wordnet import WordNetLemmatizer 
from nltk.stem.porter import PorterStemmer 

df = pd.read_csv('twitDB.csv', header=None,
                 sep=',', error_bad_lines=False, encoding='utf-8')

# combine the first four columns into a single lower-cased 'tweet' column
hula = df[[0, 1, 2, 3]]
hula = hula.fillna(0)
hula['tweet'] = (hula[0].astype(str) + hula[1].astype(str)
                 + hula[2].astype(str) + hula[3].astype(str))
hula['tweet'] = hula.tweet.str.lower()

ho=hula["tweet"] 
ho = ho.replace('\s+', ' ', regex=True) 
ho=ho.replace('\.+', '.', regex=True) 
special_char_list = [':', ';', '?', '}', ')', '{', '('] 
for special_char in special_char_list: 
ho=ho.replace(special_char, '') 
print(ho) 

# replace URLs with the token 'URL', strip '#' from hashtags,
# and drop quote characters
ho = ho.replace(r'((www\.[^\s]+)|(https?://[^\s]+))', 'URL', regex=True)
ho = ho.replace(r'#([^\s]+)', r'\1', regex=True)
ho = ho.replace(r'[\'"]', '', regex=True)

lem = WordNetLemmatizer()
stem = PorterStemmer()
eng_stopwords = stopwords.words('english')

# flatten the whole Series into one big string (header and index
# included), stem it, then tokenize and drop stopwords
ho = ho.to_frame(name=None)
a = ho.to_string(sparsify=False)
fg = stem.stem(a)
wordList = word_tokenize(fg)
wordList = [word for word in wordList if word not in eng_stopwords]
print(wordList)
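Side note on why the tokens come out one per line: to_string() flattens the whole frame, header and index included, into one string, so word_tokenize sees a single document rather than one tweet per row. A minimal reproduction of that effect, with made-up data:

import pandas as pd
from nltk.tokenize import word_tokenize

demo = pd.DataFrame({'tweet': ['automotive auto ebc', 'new free stock photo']})
flattened = demo.to_string(sparsify=False)   # one big string, not one per row
print(word_tokenize(flattened))
# ['tweet', '0', 'automotive', 'auto', 'ebc', '1', 'new', 'free', 'stock', 'photo']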

The input looks like this:

           tweet 
0  1495596971.6034188::automotive auto ebc greens... 
1  1495596972.330948::new free stock photo of cit... 

The output (wordList) I get is in this format:

tweet 
0 
1495596971.6034188 
: 
:automotive 
auto 

I only want the output row-wise, one tokenized row per tweet. How can I do this? If you have better code for semantic analysis of Twitter data, please share it with me.

Answer

In short:

df['Text'].apply(word_tokenize) 

Or, if you want another column to store the tokenized lists of strings:

df['tokenized_text'] = df['Text'].apply(word_tokenize) 
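A minimal runnable sketch of this, assuming a DataFrame with a 'Text' column (the sample tweets are made up for illustration; word_tokenize needs the 'punkt' model, available via nltk.download('punkt')):

import pandas as pd
from nltk.tokenize import word_tokenize

df = pd.DataFrame({'Text': ['automotive auto ebc greenstuff',
                            'new free stock photo of city']})

# tokenize row by row: each cell becomes a list of tokens,
# so the output stays aligned one row per tweet
df['tokenized_text'] = df['Text'].apply(word_tokenize)
print(df['tokenized_text'])
# 0    [automotive, auto, ebc, greenstuff]
# 1    [new, free, stock, photo, of, city]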

There are tokenizers written specifically for Twitter text; see http://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual

Use nltk.tokenize.TweetTokenizer:

from nltk.tokenize import TweetTokenizer 
tt = TweetTokenizer() 
df['Text'].apply(tt.tokenize) 
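To see why the casual tokenizer is the better fit here, a small comparison on a made-up tweet (output shown approximately; TweetTokenizer keeps hashtags, mentions, and emoticons as single tokens, while word_tokenize splits them apart):

from nltk.tokenize import TweetTokenizer, word_tokenize

tweet = "loving the new photo! :) #citylife @alvas"

print(word_tokenize(tweet))
# ['loving', 'the', 'new', 'photo', '!', ':', ')', '#', 'citylife', '@', 'alvas']

print(TweetTokenizer().tokenize(tweet))
# ['loving', 'the', 'new', 'photo', '!', ':)', '#citylife', '@alvas']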

Comments:

Thanks for your help. It is working now. :) – Vic13

I am glad the answer helped. – alvas

[link](https://stackoverflow.com/questions/44157005/how-can-i-enlarge-the-below-output-in-python-because-want-to-use-it-as-an-input) Do you know the answer to this question? – Vic13