
I don't know where the problem is; since I'm new to scrapy it's probably super easy to fix. Thanks for your help! Why isn't my scrapy spider scraping anything?

My spider:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.linkextractors import LinkExtractor
from scrapy.item import Item

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
        Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        for sel in response.xpath('//div/article'):
            item = scrapy.Item()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            return item

My items:

import scrapy 

class EconomistItem(scrapy.Item): 
    title = scrapy.Field() 
    link = scrapy.Field() 
    desc = scrapy.Field() 

Part of the log:

INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
Crawled (200) <GET http://www.economist.com/sections/science-technology> (referer: None) 

Edit:

After I added the changes suggested by alecxe, another problem occurred:

Log:

[scrapy] DEBUG: Crawled (200) <GET http://www.economist.com/news/science-and-technology/21688848-stem-cells-are-starting-prove-their-value-medical-treatments-curing-multiple> (referer: http://www.economist.com/sections/science-technology) 
2016-02-04 14:05:01 [scrapy] DEBUG: Crawled (200) <GET http://www.economist.com/news/science-and-technology/21689501-beating-go-champion-machine-learning-computer-says-go> (referer: http://www.economist.com/sections/science-technology) 
2016-02-04 14:05:02 [scrapy] ERROR: Spider error processing <GET http://www.economist.com/news/science-and-technology/21688848-stem-cells-are-starting-prove-their-value-medical-treatments-curing-multiple> (referer: http://www.economist.com/sections/science-technology) 
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
    for x in result:
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python2.7/site-packages/scrapy/spiders/crawl.py", line 67, in _parse_response
    cb_res = callback(response, **cb_kwargs) or ()
  File "/Users/FvH/Desktop/Python/projects/economist/economist/spiders/article.py", line 18, in parse_item
    item = scrapy.Item()
NameError: global name 'scrapy' is not defined

Settings:

BOT_NAME = 'economist'

SPIDER_MODULES = ['economist.spiders']
NEWSPIDER_MODULE = 'economist.spiders'
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36"

When I try to export the data to a CSV file, it is, unsurprisingly, just empty.

Thanks

Answers

2

parse_item is not indented correctly; it should be:

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    rules = [
        Rule(LinkExtractor(allow=r'Items'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        for sel in response.xpath('//div/article'):
            item = scrapy.Item()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            return item

Aside from that, there are two things to fix (a combined sketch follows the list):

  • The link extraction part should be fixed so it actually matches article links:

    Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True), 
    
  • You need to specify the USER_AGENT setting to pretend to be a real browser; otherwise the response will not contain the list of articles:

    USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36" 
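
Putting both fixes together with the missing import, a minimal sketch of the full spider could look like this. It assumes the EconomistItem defined in the question's items.py (the project is named economist, per the settings), and it swaps return for yield so that every matched article is emitted rather than just the first; the XPaths are kept from the question and are untested:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

from economist.items import EconomistItem  # the item defined in items.py

class ArticleSpider(CrawlSpider):
    name = "article"
    allowed_domains = ["economist.com"]
    start_urls = ['http://www.economist.com/sections/science-technology']

    # Follow the links found inside <article> elements on the section page.
    rules = [
        Rule(LinkExtractor(restrict_xpaths='//article'), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        # Yield one item per matched article instead of returning
        # after the first iteration of the loop.
        for sel in response.xpath('//div/article'):
            item = EconomistItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item

The USER_AGENT line from the second point goes into settings.py, as shown in the question.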
    
+0

Thanks alecxe, I added what you suggested, but now there are other errors, so apparently I'm still doing something wrong. Thanks – peter

+0

@peter You just need to have 'import scrapy' inside the spider. Or rather, I think you meant to instantiate the item defined in your project, not 'scrapy.Item()'. – alecxe

0

You only imported Item (not the whole scrapy module):

from scrapy.item import Item 

So instead of using scrapy.Item here:

for sel in response.xpath('//div/article'):
    item = scrapy.Item()
    item['title'] = sel.xpath('a/text()').extract()

you should just use Item:

for sel in response.xpath('//div/article'):
    item = Item()
    item['title'] = sel.xpath('a/text()').extract()

Or import your own item and use it. This should work (don't forget to replace project_name with the name of your project):

from project_name.items import EconomistItem
...
for sel in response.xpath('//div/article'):
    item = EconomistItem()
    item['title'] = sel.xpath('a/text()').extract()
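
With the item import fixed, the empty CSV from the question should fill up as well. Assuming the spider name article from the question, Scrapy's built-in feed export can be invoked from the project directory like this:

scrapy crawl article -o articles.csv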