
I'm trying to figure out how Scrapy works and use it to gather information from a forum, but Scrapy produces no results (it crawls 0 pages).

items.py

import scrapy 


class BodybuildingItem(scrapy.Item): 
    # define the fields for your item here like: 
    title = scrapy.Field() 

spider.py

from scrapy.spider import BaseSpider 
from scrapy.selector import Selector 
from bodybuilding.items import BodybuildingItem 

class BodyBuildingSpider(BaseSpider):
    name = "bodybuilding"
    allowed_domains = ["forum.bodybuilding.nl"]
    start_urls = [
        "https://forum.bodybuilding.nl/fora/supplementen.22/"
    ]

    def parse(self, response):
        responseSelector = Selector(response)
        for sel in responseSelector.css('li.past.line.event-item'):
            item = BodybuildingItem()
            item['title'] = sel.css('a.data-previewUrl::text').extract()
            yield item

The forum I'm trying to get the thread titles from is this one: https://forum.bodybuilding.nl/fora/supplementen.22/

But I keep getting no results:

2017-10-07 00:42:28 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: bodybuilding)
2017-10-07 00:42:28 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'bodybuilding.spiders', 'SPIDER_MODULES': ['bodybuilding.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'bodybuilding'}
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.corestats.CoreStats']
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled item pipelines: []
2017-10-07 00:42:28 [scrapy.core.engine] INFO: Spider opened
2017-10-07 00:42:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-07 00:42:28 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://forum.bodybuilding.nl/robots.txt> (referer: None)
2017-10-07 00:42:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://forum.bodybuilding.nl/fora/supplementen.22/> (referer: None)
2017-10-07 00:42:29 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-07 00:42:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 469, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 22878, 'downloader/response_count': 2, 'downloader/response_status_count/200': 1, 'downloader/response_status_count/404': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2017, 10, 6, 22, 42, 29, 223305), 'log_count/DEBUG': 2, 'log_count/INFO': 7, 'memusage/max': 31735808, 'memusage/startup': 31735808, 'response_received_count': 2, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2017, 10, 6, 22, 42, 28, 816043)}
2017-10-07 00:42:29 [scrapy.core.engine] INFO: Spider closed (finished)

I've been following this guide: http://blog.florian-hopf.de/2014/07/scrapy-and-elasticsearch.html

Update 1:

As someone pointed out, I needed to update my code to the current standards. I did, but it didn't change the result:

from scrapy.spider import BaseSpider 
from scrapy.selector import Selector 
from bodybuilding.items import BodybuildingItem 

class BodyBuildingSpider(BaseSpider):
    name = "bodybuilding"
    allowed_domains = ["forum.bodybuilding.nl"]
    start_urls = [
        "https://forum.bodybuilding.nl/fora/supplementen.22/"
    ]

    def parse(self, response):
        for sel in response.css('li.past.line.event-item'):
            item = BodybuildingItem()
            item['title'] = sel.css('a.data-previewUrl::text').extract_first()
            yield item

Latest update, with the fix

After some great help I finally got it working with this spider:

import scrapy 

class BlogSpider(scrapy.Spider): 
    name = 'bodybuilding' 
    start_urls = ['https://forum.bodybuilding.nl/fora/supplementen.22/'] 

    def parse(self, response):
        for title in response.css('h3.title'):
            yield {'title': title.css('a::text').extract_first()}
        next_page_url = response.xpath("//a[text()='Volgende >']/@href").extract_first()
        if next_page_url:
            yield response.follow(next_page_url, callback=self.parse)
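For anyone reproducing this: note that the final version yields plain dicts, so the BodybuildingItem class from items.py is no longer used. With the file saved in the project's spiders/ directory, the spider can be run with the standard Scrapy CLI; the -o option writes the yielded items to a feed file (the filename here is just an example):

scrapy crawl bodybuilding -o titles.json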

You should use 'response.css('li.past.line.event-item')', and there's no need for 'responseSelector = Selector(response)'. Also, the CSS you're using is no longer valid, so you'll need to update it based on the current version of the page. –


I think I've updated everything now, but I still get no results. See the update. – Nerotix


The problem is that nothing on the page matches 'li.past.line.event-item'. –

Answer


You should use response.css('li.past.line.event-item'), and there is no need for responseSelector = Selector(response).

Also, the li.past.line.event-item CSS you are using is no longer valid, so you first need to update it based on the current version of the page.

Then, to get the next page URL, you can use:

>>> response.css("a.text::attr(href)").extract_first() 
'fora/supplementen.22/page-2' 
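(That output is from an interactive scrapy shell session, e.g. scrapy shell 'https://forum.bodybuilding.nl/fora/supplementen.22/'. Keep in mind that extract_first() returns the first match as a plain string, or None when nothing matches, which is why the snippets below guard with if next_page_url before yielding.)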

You can then use response.follow to follow this relative URL (response.follow resolves relative URLs against the current page, so there is no need to build an absolute URL first).

Edit 2: next page handling, corrected

The previous edit didn't work, because on the next page the first matching URL points back to the page before it, so you need to use the following instead:

next_page_url = response.xpath("//a[text()='Volgende >']/@href").extract_first() 
if next_page_url: 
    yield response.follow(next_page_url, callback=self.parse) 
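As a side note (this variant is not from the original thread, just a defensive alternative): matching the exact text 'Volgende >' breaks if the link text ever gains surrounding whitespace, so a contains() match on the same anchor text is slightly more forgiving:

next_page_url = response.xpath("//a[contains(text(), 'Volgende')]/@href").extract_first()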

Edit 1: next page handling

next_page_url = response.css("a.text::attr(href)").extract_first() 
if next_page_url: 
    yield response.follow(next_page_url, callback=self.parse) 

This is what it looks like now: for next_page in response.css("a.text::attr(href)").extract_first(): yield response.follow(next_page, self.parse). But I get the error "TypeError: 'NoneType' object is not iterable", and it points at line 11, which is the for loop I just showed. – Nerotix


@Nerotix, please check the edit. –


Hmm, something weird happens when I add that... it crawls the first page again, then the second, then the first, but it never gets past page 2. – Nerotix
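A plausible explanation for that back-and-forth, piecing together Edit 2 above: with the a.text selector, the first match on page 2 is the link back to page 1, which gets crawled again because its /page-1 URL differs from the start URL; after that, every candidate URL has already been seen and Scrapy's built-in duplicate request filter drops it, so the crawl never gets past page 2. Matching the 'Volgende >' link text, as in Edit 2 and the final spider, avoids the loop.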