2016-11-11

Python - CSV reader - read one line at a time

OK, I have a CSV file with many rows (currently over 40k). Because of the large row count, I need to read it line by line and run a series of operations on each line; that is the first problem. The second is: how do I read the CSV file and decode it as UTF-8, as in the csv documentation? Even using the UTF8Recoder class below, my print output comes out as \xe9 s\xf3. Can someone help me with this?

import preprocessing 
import pymongo 
import csv,codecs,cStringIO 
from pymongo import MongoClient 
from unicodedata import normalize 
from preprocessing import PreProcessing 

class UTF8Recoder:
    def __init__(self, f, encoding):
        self.reader = codecs.getreader(encoding)(f)

    def __iter__(self):
        return self

    def next(self):
        return self.reader.next().encode("utf-8")

class UnicodeReader:
    def __init__(self, f, dialect=csv.excel, encoding="utf-8-sig", **kwds):
        f = UTF8Recoder(f, encoding)
        self.reader = csv.reader(f, dialect=dialect, **kwds)

    def next(self):
        '''next() -> unicode

        This function reads and returns the next line as a Unicode string.
        '''
        row = self.reader.next()
        return [unicode(s, "utf-8") for s in row]

    def __iter__(self):
        return self

with open('data/MyCSV.csv', 'rb') as csvfile:
    reader = UnicodeReader(csvfile)
    #writer = UnicodeWriter(fout, quoting=csv.QUOTE_ALL)
    for row in reader:
        print row
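For reference, a minimal sketch assuming Python 3, where the csv module handles Unicode natively and recoder classes like the above become unnecessary (the in-memory buffer here is an illustrative stand-in for a real file opened with `encoding='utf-8'`):

```python
import csv
import io

# In Python 3 the csv module works on text streams directly: open the
# file in text mode with encoding="utf-8" and csv.reader yields lists
# of str (Unicode) with no re-encoding step needed.
buffer = io.StringIO('nome,cidade\nJosé,São Paulo\n')
rows = [row for row in csv.reader(buffer)]
print(rows)  # [['nome', 'cidade'], ['José', 'São Paulo']]
```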

def status_processing(corpus): 

    myCorpus = preprocessing.PreProcessing() 
    myCorpus.text = corpus 

    print "Starting..." 
    myCorpus.initial_processing() 
    print "Done." 
    print "----------------------------" 

EDIT 1: Following Mr. S Ringne's solution. But now I can't run the operations inside my def. Here is the new code:

for csvfile in pd.read_csv('data/AracajuAgoraNoticias_facebook_statuses.csv', encoding='utf-8', sep=',', header='infer', engine='c', chunksize=2):

    def status_processing(csvfile):

        myCorpus = preprocessing.PreProcessing()
        myCorpus.text = csvfile

        print "Fazendo o processo inicial..."
        myCorpus.initial_processing()
        print "Feito."
        print "----------------------------"

and at the end of the script:

def main(): 
    status_processing(csvfile) 

main() 

The output, when I use BeautifulSoup to strip the links, is:

ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). 
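That ValueError is raised whenever a multi-element DataFrame is used where a single boolean is expected, for example in an `if` test on a chunk. A minimal reproduction, independent of the code above:

```python
import pandas as pd

# Each chunk from read_csv(chunksize=...) is a DataFrame; using one in
# a boolean context raises the ambiguity error from the traceback.
df = pd.DataFrame({'text': ['a', 'b']})
try:
    if df:  # ambiguous: which row's truth value is meant?
        pass
except ValueError as e:
    caught = str(e)
print(caught)
```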

Answers


You can load your CSV into pandas and do your further operations there; it will be much faster.

import pandas as pd 
df = pd.read_csv('path_to_file.csv', encoding='utf-8', header='infer', engine='c')
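To get the line-by-line behavior the question asks for, `chunksize` turns `read_csv` into an iterator of small DataFrames. A hedged sketch — the column name `text` and the in-memory buffer are illustrative stand-ins for the real file:

```python
import io
import pandas as pd

# chunksize turns read_csv into an iterator of DataFrames, so only a
# few rows are in memory at once. A buffer stands in for the real file;
# with a real path you would also pass encoding='utf-8'.
csv_text = io.StringIO('text\nfirst status\nsecond status\nthird status\n')
total = 0
for chunk in pd.read_csv(csv_text, chunksize=2):
    # each chunk is a DataFrame of up to 2 rows
    for status in chunk['text']:
        total += 1
print(total)  # 3
```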
Hmm, but how do I read line by line? In that case I read one line, run the operations in `def status_processing`, then go back and read another line. The word-correction step is very expensive if I read everything at once and only then run those operations. –

@LeandroS.Matos use chunksize in pd.read_csv: `for df in pd.read_csv('matrix.txt', sep=',', header=None, chunksize=1):` – Shubham

@LeandroS.Matos: http://stackoverflow.com/questions/29334463/pandas-read-csv-file-line-by-line – Shubham


Here is a simple pattern to read line by line as UTF-8:

import csv

with open(filename, 'r', encoding="utf-8") as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in spamreader:
        # your operations go here
        pass
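A runnable version of this pattern, assuming Python 3 (the temp file and its contents are illustrative; `newline=''` is what the csv docs recommend when opening files for `csv.reader`):

```python
import csv
import os
import tempfile

# Write a small UTF-8 CSV, then read it back row by row.
path = os.path.join(tempfile.mkdtemp(), 'sample.csv')
with open(path, 'w', encoding='utf-8', newline='') as f:
    csv.writer(f).writerows([['id', 'texto'], ['1', 'ação'], ['2', 'coração']])

processed = []
with open(path, 'r', encoding='utf-8', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in spamreader:
        processed.append(row)  # stand-in for per-row processing
print(processed)
```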