
Problem pausing and resuming a Scrapy spider

Out of respect for the site's guidance on web scraping, I'm running a very slow crawl of a medium-sized website. That means I need to be able to pause and resume my spider. So far, I've enabled persistence when launching the spider from the command line:

scrapy crawl ngamedallions -s JOBDIR=pass1 -o items.csv
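For reference, the same persistence can also be configured in the project's settings.py rather than on the command line; a minimal sketch, where the directory name simply mirrors the -s JOBDIR=pass1 flag above:

# settings.py -- persist the scheduler's pending requests and the
# duplicates filter between runs; JOBDIR is a standard Scrapy setting.
JOBDIR = 'pass1'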

Last night, this seemed to be doing the trick. I tested my spider and found that when I shut it down cleanly, I could start it again and the crawl would pick up where I left off. Today, however, the spider starts over from the beginning. I've checked the contents of my pass1 directory, and my requests.seen file does have some content in it, although 1600 lines seems a little light for the roughly 3000 pages I crawled last night.
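For anyone checking the same thing, requests.seen stores one request fingerprint per line, so the persisted count can be read off directly (a minimal sketch, assuming the pass1 job directory from the command above):

# Count the request fingerprints the dupefilter has persisted so far.
with open('pass1/requests.seen') as f:
    print(sum(1 for _ in f), 'request fingerprints persisted')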

In any case, does anyone know where I'm going wrong in trying to resume my spider?

Update

I went ahead and manually set my spider going again to continue yesterday's crawl. When I then paused and resumed the spider with the same command (see above), it worked. The beginning of my log reflects the spider recognizing that the crawl was being resumed.

2016-05-11 10:59:36 [scrapy] INFO: Scrapy 1.0.5.post4+g4b324a8 started (bot: ngamedallions) 
2016-05-11 10:59:36 [scrapy] INFO: Optional features available: ssl, http11 
2016-05-11 10:59:36 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'ngamedallions.spiders', 'FEED_URI': 'items.csv', 'SPIDER_MODULES': ['ngamedallions.spiders'], 'BOT_NAME': 'ngamedallions', 'USER_AGENT': 'ngamedallions', 'FEED_FORMAT': 'csv', 'DOWNLOAD_DELAY': 10} 
2016-05-11 10:59:36 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState 
2016-05-11 10:59:36 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2016-05-11 10:59:36 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2016-05-11 10:59:36 [scrapy] INFO: Enabled item pipelines: NgamedallionsCsvPipeline, NgamedallionsImagesPipeline 
2016-05-11 10:59:36 [scrapy] INFO: Spider opened 
2016-05-11 10:59:36 [scrapy] INFO: Resuming crawl (3 requests scheduled) 

When I tried to resume the spider after a second graceful shutdown (pause - resume - pause - resume), however, it started the crawl over from scratch. The beginning of the log from that run appears below; the main point is that the spider does not report the crawl as being resumed.

2016-05-11 11:19:10 [scrapy] INFO: Scrapy 1.0.5.post4+g4b324a8 started (bot: ngamedallions) 
2016-05-11 11:19:10 [scrapy] INFO: Optional features available: ssl, http11 
2016-05-11 11:19:10 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'ngamedallions.spiders', 'FEED_URI': 'items.csv', 'SPIDER_MODULES': ['ngamedallions.spiders'], 'BOT_NAME': 'ngamedallions', 'USER_AGENT': 'ngamedallions', 'FEED_FORMAT': 'csv', 'DOWNLOAD_DELAY': 10} 
2016-05-11 11:19:11 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState 
2016-05-11 11:19:11 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2016-05-11 11:19:11 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2016-05-11 11:19:11 [scrapy] INFO: Enabled item pipelines: NgamedallionsCsvPipeline, NgamedallionsImagesPipeline 
2016-05-11 11:19:11 [scrapy] INFO: Spider opened 
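One thing worth inspecting between runs is what actually survives in the job directory: the scheduler persists pending requests under requests.queue, the dupefilter writes its fingerprints to requests.seen, and the SpiderState extension saves to spider.state. A small diagnostic sketch, again assuming the pass1 directory:

import os

# List what Scrapy persisted in the job directory between runs.
for name in sorted(os.listdir('pass1')):
    path = os.path.join('pass1', name)
    size = os.path.getsize(path) if os.path.isfile(path) else '<dir>'
    print(name, size)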

Answer


Scrapy avoids crawling duplicate URLs; you can find more information about that here and here.

dont_filter (boolean) – indicates that this request should not be filtered by the scheduler. This is used when you want to perform an identical request multiple times, to ignore the duplicates filter. Use it with care, or you will get into crawling loops. Default to False.
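To illustrate the flag the docs describe, here is a minimal, hypothetical spider (not the asker's code) that deliberately bypasses the duplicates filter:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # dont_filter=True lets this identical request through the
        # scheduler's duplicates filter; without it, the request
        # would be dropped as already seen.
        yield scrapy.Request(response.url, callback=self.parse_again,
                             dont_filter=True)

    def parse_again(self, response):
        self.logger.info('Fetched %s a second time', response.url)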

Also, take a look at this question.


Isn't avoiding duplicate URL crawls basically the behavior I want? It seems the duplicates filter is currently enabled. My log reports '2016-05-11 10:59:37 [scrapy] DEBUG: Filtered duplicate request: <GET ...> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)' after the first page is crawled. – Tric
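For completeness, the setting mentioned in that log line can be switched on to log every filtered request rather than only the first (a one-line sketch for settings.py):

# settings.py -- log all filtered duplicate requests, not only the first
DUPEFILTER_DEBUG = True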