Given a TSV file like this:
$ cat test.tsv
DocID Text WhateverAnnotations
1 Foo bar bar dot dot dot
2 bar bar black sheep dot dot dot dot
$ cut -f2 test.tsv
Text
Foo bar bar
bar bar black sheep
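For reference, the `cut -f2` step above has a direct stdlib equivalent in Python (the csv module with a tab delimiter; the inline string here just reproduces the file contents so the snippet is self-contained):

```python
import csv
import io

# Contents of test.tsv, inlined for a self-contained example.
tsv = (
    "DocID\tText\tWhateverAnnotations\n"
    "1\tFoo bar bar\tdot dot dot\n"
    "2\tbar bar black sheep\tdot dot dot dot\n"
)

# Equivalent of `cut -f2 test.tsv`: take the second tab-separated field of each line.
second_field = [row[1] for row in csv.reader(io.StringIO(tsv), delimiter='\t')]
print(second_field)  # ['Text', 'Foo bar bar', 'bar bar black sheep']
```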
And reading it with pandas:
$ python
>>> import pandas as pd
>>> pd.read_csv('test.tsv', delimiter='\t')
DocID Text WhateverAnnotations
0 1 Foo bar bar dot dot dot
1 2 bar bar black sheep dot dot dot dot
>>> df = pd.read_csv('test.tsv', delimiter='\t')
>>> df['Text']
0 Foo bar bar
1 bar bar black sheep
Name: Text, dtype: object
To use nlp.pipe in spaCy:
>>> import spacy
>>> nlp = spacy.load('en')
>>> for parsed_doc in nlp.pipe(iter(df['Text']), batch_size=1, n_threads=4):
...     print(parsed_doc[0].text, parsed_doc[0].tag_)
...
Foo NNP
bar NN
To use pandas.DataFrame.apply():
>>> df['Parsed'] = df['Text'].apply(nlp)
>>> df['Parsed'].iloc[0]
Foo bar bar
>>> type(df['Parsed'].iloc[0])
<class 'spacy.tokens.doc.Doc'>
>>> df['Parsed'].iloc[0][0].tag_
'NNP'
>>> df['Parsed'].iloc[0][0].text
'Foo'
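The two approaches can also be combined: nlp.pipe yields Docs lazily, and the results can be collected straight into a DataFrame column with something like df['Parsed'] = list(nlp.pipe(df['Text'])). A minimal sketch of that pattern (a trivial tokenizing function stands in for the spaCy nlp object, so the snippet runs without a model installed):

```python
import pandas as pd

def fake_nlp(text):
    # Stand-in for spaCy's nlp: any callable mapping text -> parsed object.
    return text.split()

def pipe(texts, batch_size=1000):
    # Mimics nlp.pipe's interface: lazily yields one parsed doc per input text.
    for text in texts:
        yield fake_nlp(text)

df = pd.DataFrame({'Text': ['Foo bar bar', 'bar bar black sheep']})

# With a real model this would be: df['Parsed'] = list(nlp.pipe(df['Text'], batch_size=1000))
df['Parsed'] = list(pipe(df['Text']))

print(df['Parsed'].iloc[0][0])  # first token of the first row: Foo
```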
To benchmark, first repeat the two data rows a million times each (2 million rows):
$ cat test.tsv
DocID Text WhateverAnnotations
1 Foo bar bar dot dot dot
2 bar bar black sheep dot dot dot dot
$ tail -n 2 test.tsv > rows2
$ perl -ne 'print "$_" x1000000' rows2 > rows2000000
$ cat test.tsv rows2000000 > test-2M.tsv
$ wc -l test-2M.tsv
2000003 test-2M.tsv
$ head test-2M.tsv
DocID Text WhateverAnnotations
1 Foo bar bar dot dot dot
2 bar bar black sheep dot dot dot dot
1 Foo bar bar dot dot dot
1 Foo bar bar dot dot dot
1 Foo bar bar dot dot dot
1 Foo bar bar dot dot dot
1 Foo bar bar dot dot dot
1 Foo bar bar dot dot dot
1 Foo bar bar dot dot dot
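The same 2M-row file can be generated without perl; a stdlib-only sketch that mirrors the transcript above (header + the two original rows, then each data row repeated a million times, matching perl's "$_" x1000000):

```python
HEADER = "DocID\tText\tWhateverAnnotations\n"
ROWS = [
    "1\tFoo bar bar\tdot dot dot\n",
    "2\tbar bar black sheep\tdot dot dot dot\n",
]

def write_big_tsv(path, n_repeats=1_000_000):
    # Writes: header, the two original data rows, then row1 x n_repeats, row2 x n_repeats.
    with open(path, 'w') as fout:
        fout.write(HEADER)
        fout.writelines(ROWS)
        for row in ROWS:
            fout.writelines(row for _ in range(n_repeats))

# write_big_tsv('test-2M.tsv')  # 2,000,003 lines, as in the transcript above
```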
[nlppipe.py]:

import time

import pandas as pd
import spacy

df = pd.read_csv('test-2M.tsv', delimiter='\t')
nlp = spacy.load('en')

start = time.time()
for parsed_doc in nlp.pipe(iter(df['Text']), batch_size=1000, n_threads=4):
    x = parsed_doc[0].tag_
print(time.time() - start)
[dfapply.py]:

import time

import pandas as pd
import spacy

df = pd.read_csv('test-2M.tsv', delimiter='\t')
nlp = spacy.load('en')

start = time.time()
df['Parsed'] = df['Text'].apply(nlp)
for doc in df['Parsed']:
    x = doc[0].tag_
print(time.time() - start)
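Each script measures a single run with time.time(); a small stdlib-only helper makes the comparison slightly more robust by taking the best of several runs, timeit-style (the run_pipe/run_apply wrappers in the comment are hypothetical names for the loops above):

```python
import time

def best_of(fn, repeats=3):
    """Run fn() `repeats` times and return the fastest wall-clock time in seconds."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

# Usage with the two scripts above (hypothetical wrappers around their loops):
#   print('pipe :', best_of(lambda: run_pipe(df, nlp)))
#   print('apply:', best_of(lambda: run_apply(df, nlp)))
```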
However, if you read it into a DataFrame, you can use df.apply() or an equivalent to feed the rows to nlp, instead of iterating over them yourself. – alexis