
I wrote a spider whose sole purpose is to extract a single number from http://www.funda.nl/koop/amsterdam/, namely the maximum page number shown in the pager at the bottom of the page (for example, the number 255). However, the Scrapy feed output contains the desired result multiple times instead of once.


I managed to do this with a LinkExtractor whose regex matches the URLs of those pager pages. The spider is shown below:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess
from Funda.items import MaxPageItem

class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    # Link to a page containing thumbnails of several houses, such as http://www.funda.nl/koop/amsterdam/p10/
    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])

    rules = (
        Rule(le_maxpage, callback='get_max_page_number'),
    )

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        page_numbers = []
        for link in links:
            if link.url.count('/') == 6 and link.url.endswith('/'):   # Select only pager pages with a link depth of 3
                page_number = int(link.url.split("/")[-2].strip('p'))  # For example, get 10 out of 'http://www.funda.nl/koop/amsterdam/p10/'
                page_numbers.append(page_number)
        max_page_number = max(page_numbers)
        print("The maximum page number is %s" % max_page_number)
        yield {'max_page_number': max_page_number}

If I run this with feed output by entering scrapy crawl Funda_max_pages -o funda_max_pages.json on the command line, the resulting JSON file looks like this:

[ 
{"max_page_number": 257}, 
{"max_page_number": 257}, 
{"max_page_number": 257}, 
{"max_page_number": 257}, 
{"max_page_number": 257}, 
{"max_page_number": 257}, 
{"max_page_number": 257} 
] 

What I find strange is that the dictionary is output 7 times instead of once. After all, the yield statement is outside the for loop. Can anyone explain this behavior?

Answers

  1. Your spider first visits the start_url.
  2. The LinkExtractor extracts 7 URLs from it.
  3. Each of these 7 URLs is downloaded, and get_max_page_number is called on each response.
  4. For each of those URLs, get_max_page_number returns one dictionary.
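In other words, one dictionary is yielded per downloaded pager page, so seven responses produce seven identical items. A minimal sketch of one way to keep the existing spider but emit the item only once (an illustrative tweak, not part of the original answer; the spider name and the max_page_yielded flag are made up for the example) could look like this:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class FundaMaxPagesOnceSpider(CrawlSpider):
    name = "Funda_max_pages_once"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])
    rules = (Rule(le_maxpage, callback='get_max_page_number'),)

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        page_numbers = [
            int(link.url.split("/")[-2].strip('p'))
            for link in links
            if link.url.count('/') == 6 and link.url.endswith('/')
        ]
        # Yield the item only from the first pager response that is processed;
        # the remaining callbacks still run but emit nothing.
        if not getattr(self, 'max_page_yielded', False):
            self.max_page_yielded = True
            yield {'max_page_number': max(page_numbers)}

The pager pages are still downloaded with this tweak; only the duplicate items are suppressed.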

As a workaround, I write the output to a text file instead of to the JSON feed output:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess

class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    # Link to a page containing thumbnails of several houses, such as http://www.funda.nl/koop/amsterdam/p10/
    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])

    rules = (
        Rule(le_maxpage, callback='get_max_page_number'),
    )

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        max_page_number = 0  # Initialize the maximum page number
        for link in links:
            if link.url.count('/') == 6 and link.url.endswith('/'):   # Select only pager pages with a link depth of 3
                print("The link is %s" % link.url)
                page_number = int(link.url.split("/")[-2].strip('p'))  # For example, get 10 out of 'http://www.funda.nl/koop/amsterdam/p10/'
                if page_number > max_page_number:
                    max_page_number = page_number  # Keep the largest page number seen so far
        print("The maximum page number is %s" % max_page_number)
        place_name = link.url.split("/")[-3]  # For example, "amsterdam" in 'http://www.funda.nl/koop/amsterdam/p10/'
        print("The place name is %s" % place_name)
        filename = str(place_name) + "_max_pages.txt"  # File name prefixed with the place name
        with open(filename, 'w') as f:  # Open in text mode, since a str is written
            f.write('max_page_number = %s' % max_page_number)  # Write the maximum page number to a text file
        yield {'max_page_number': max_page_number}

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(FundaMaxPagesSpider)
process.start()  # the script will block here until the crawling is finished

I also adapted the spider so that it can be run as a script. The script generates a text file amsterdam_max_pages.txt containing the single line max_page_number = 257.


You are still crawling 7 URLs, but you overwrite the same file 7 times with 'max_page_number: 257'... – Granitosaurus
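A sketch of a variant that addresses both points of that comment (not from the original thread; it assumes the pager on the start page itself already links to the last page, and the spider name is made up) would drop the CrawlSpider rules and issue a single request, so the extraction runs once and the file is written exactly once:

import scrapy
from scrapy.linkextractors import LinkExtractor

class FundaMaxPagesSingleRequestSpider(scrapy.Spider):
    name = "Funda_max_pages_single"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])

    def parse(self, response):
        # Only the start page is downloaded, so this callback runs exactly once.
        links = self.le_maxpage.extract_links(response)
        page_numbers = [
            int(link.url.split("/")[-2].strip('p'))
            for link in links
            if link.url.count('/') == 6 and link.url.endswith('/')
        ]
        max_page_number = max(page_numbers)
        place_name = response.url.rstrip('/').split('/')[-1]  # For example, "amsterdam"
        with open(place_name + "_max_pages.txt", 'w') as f:
            f.write('max_page_number = %s' % max_page_number)  # Written a single time
        yield {'max_page_number': max_page_number}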