I am trying to scrape the Library of Congress/THOMAS website. This Python script is meant to access a sample of 40 bills from the site (identifiers #1-40 in the URLs). I want to parse the body of each piece of legislation, search within the body/content, and extract and follow the links to the potentially multiple versions.

Once on a version page, I want to parse the body of each piece of legislation, search within the body/content, and extract and follow the links to the potential sections.

Once on a section page, I want to parse the body of each section of the bill.

I believe the problem lies somewhere in the Rules/LinkExtractor segment of my code. The Python code runs and crawls the start URLs, but does not parse or do any of the subsequent tasks.

Three complications:

  1. Some bills do not have multiple versions (and therefore no links in the body portion of the page).
  2. Some bills do not have linked sections because they are so short, while others are nothing but links to sections (a fallback sketch for these two cases follows this list).
  3. Some section links do not contain only section-specific content; most of their content is redundant inclusion of the preceding or following sections.
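
For what it's worth, here is the kind of fallback I have in mind for cases 1 and 2. This is only a sketch: the XPaths are guesses at the THOMAS page structure, and BillItem is the item class defined in the full script below.

from scrapy.selector import HtmlXPathSelector

def parse_bills(self, response):
    hxs = HtmlXPathSelector(response)
    # Version links, when present, live inside the content div (an assumption).
    version_links = hxs.select('//div[@id="content"]//a/@href').extract()
    if not version_links:
        # Cases 1 and 2: nothing to follow, so emit the page itself
        # as the bill's only text rather than waiting on a rule.
        bill = BillItem()
        bill['title'] = hxs.select('//div[@id="content"]/p/text()').extract()
        bill['body'] = response.body
        return [bill]
    return []  # links exist; the crawl rules will follow them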

My question is: why won't Scrapy crawl or parse?

from scrapy.item import Item, Field 
from scrapy.contrib.spiders import CrawlSpider, Rule 
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor 
from scrapy.selector import HtmlXPathSelector 

class BillItem(Item): 
    title = Field() 
    body = Field() 

class VersionItem(Item): 
    title = Field() 
    body = Field() 

class SectionItem(Item): 
    body = Field() 

class Lrn2CrawlSpider(CrawlSpider): 
    name = "lrn2crawl" 
    allowed_domains = ["thomas.loc.gov"] 
    start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill
                  for bill in xrange(000001, 00040, 00001)]
    # Sample of 40 bills intended; the full range of bills is 1-5767.
    # NB: leading-zero integer literals are octal in Python 2 (00040 == 32),
    # so this xrange actually yields bill numbers 1-31, not 1-40.

rules = (
    # Extract links matching the /query/ fragment (restricted to those inside
    # the content body of the page) and follow the links from them
    # (follow=True must be set explicitly here, since providing a callback
    # makes follow default to False).
    # Desired result: scrape all bill text and, in the event that there are
    # multiple versions, follow them and parse.
    Rule(SgmlLinkExtractor(allow=(r'/query/'), restrict_xpaths=('//div[@id="content"]')),
         callback='parse_bills', follow=True),

    # Extract links in the body of a bill version and follow them.
    # Desired result: scrape all version text and, in the event that there are
    # multiple sections, follow them and parse.
    Rule(SgmlLinkExtractor(restrict_xpaths=('//div/a[2]')),
         callback='parse_versions', follow=True),
)

def parse_bills(self, response):
    hxs = HtmlXPathSelector(response)
    bills = hxs.select('//div[@id="content"]')
    scraped_bills = []
    for bill in bills:
        scraped_bill = BillItem()  # Bill item defined above
        scraped_bill['title'] = bill.select('p/text()').extract()
        scraped_bill['body'] = response.body
        scraped_bills.append(scraped_bill)
    return scraped_bills

def parse_versions(self, response):
    hxs = HtmlXPathSelector(response)
    versions = hxs.select('//div[@id="content"]')
    scraped_versions = []
    for version in versions:
        scraped_version = VersionItem()  # Version item defined above
        scraped_version['title'] = version.select('center/b/text()').extract()
        scraped_version['body'] = response.body
        scraped_versions.append(scraped_version)
    return scraped_versions

def parse_sections(self, response):
    hxs = HtmlXPathSelector(response)
    sections = hxs.select('//div[@id="content"]')
    scraped_sections = []
    for section in sections:
        scraped_section = SectionItem()  # Section item defined above
        scraped_section['body'] = response.body
        scraped_sections.append(scraped_section)
    return scraped_sections

spider = Lrn2CrawlSpider() 

Answers


I just fixed the indentation, removed the spider = Lrn2CrawlSpider() line at the end of the script, and ran the spider via scrapy runspider lrn2crawl.py. It scraped, followed links, and returned items; your rules work.

Here is what I ran:

from scrapy.item import Item, Field 
from scrapy.contrib.spiders import CrawlSpider, Rule 
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor 
from scrapy.selector import HtmlXPathSelector 

class BillItem(Item): 
    title = Field() 
    body = Field() 

class VersionItem(Item): 
    title = Field() 
    body = Field() 

class SectionItem(Item): 
    body = Field() 

class Lrn2CrawlSpider(CrawlSpider): 
    name = "lrn2crawl" 
    allowed_domains = ["thomas.loc.gov"] 
    start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill
                  for bill in xrange(000001, 00040, 00001)]
    # Sample of 40 bills intended; the full range of bills is 1-5767.
    # NB: leading-zero integer literals are octal in Python 2 (00040 == 32),
    # so this xrange actually yields bill numbers 1-31, not 1-40.

    rules = (
        # Extract links matching the /query/ fragment (restricted to those inside
        # the content body of the page) and follow the links from them
        # (follow=True must be set explicitly here, since providing a callback
        # makes follow default to False).
        # Desired result: scrape all bill text and, in the event that there are
        # multiple versions, follow them and parse.
        Rule(SgmlLinkExtractor(allow=(r'/query/'), restrict_xpaths=('//div[@id="content"]')),
             callback='parse_bills', follow=True),

        # Extract links in the body of a bill version and follow them.
        # Desired result: scrape all version text and, in the event that there are
        # multiple sections, follow them and parse.
        Rule(SgmlLinkExtractor(restrict_xpaths=('//div/a[2]')),
             callback='parse_versions', follow=True),
    )

    def parse_bills(self, response):
        hxs = HtmlXPathSelector(response)
        bills = hxs.select('//div[@id="content"]')
        scraped_bills = []
        for bill in bills:
            scraped_bill = BillItem()  # Bill item defined above
            scraped_bill['title'] = bill.select('p/text()').extract()
            scraped_bill['body'] = response.body
            scraped_bills.append(scraped_bill)
        return scraped_bills

    def parse_versions(self, response):
        hxs = HtmlXPathSelector(response)
        versions = hxs.select('//div[@id="content"]')
        scraped_versions = []
        for version in versions:
            scraped_version = VersionItem()  # Version item defined above
            scraped_version['title'] = version.select('center/b/text()').extract()
            scraped_version['body'] = response.body
            scraped_versions.append(scraped_version)
        return scraped_versions

    def parse_sections(self, response):
        hxs = HtmlXPathSelector(response)
        sections = hxs.select('//div[@id="content"]')
        scraped_sections = []
        for section in sections:
            scraped_section = SectionItem()  # Section item defined above
            scraped_section['body'] = response.body
            scraped_sections.append(scraped_section)
        return scraped_sections
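
One thing to note: parse_sections is defined but never referenced by any rule, so section pages are never actually parsed. If you want them handled as well, you would need a third rule inside the rules tuple, along these lines; the restrict_xpaths value is only a guess at the section-page markup, so treat this as a sketch:

        # Hypothetical third rule: route section links to parse_sections.
        # The restrict_xpaths value is an assumption about THOMAS's markup,
        # not something verified against the live site.
        Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@id="content"]//a',)),
             callback='parse_sections', follow=False),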

Hope that helps.


Yes, that does help, and removing the final spider = [...] line did indeed allow the script to run. I'm still puzzled as to why, though. When I ran the script in the debugger it reported a syntax error at rules = ([...], which is why I said I believed the problem was there. I just found it strange that the script would run yet perform none of its tasks; did the debugger point me in the wrong direction? Maybe I was mistaken. In any case, yes, this helped me a great deal.


Just for the record: the problem with your script was that the variable rules was not within the scope of Lrn2CrawlSpider, because it did not share the class body's indentation. When alecxe fixed the indentation, rules became an attribute of the class. The inherited __init__() method then reads that attribute, compiles the rules, and enforces them:

def __init__(self, *a, **kw): 
    super(CrawlSpider, self).__init__(*a, **kw) 
    self._compile_rules() 
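
The effect is easy to reproduce outside Scrapy. A minimal sketch (the names are made up for illustration):

class Foo(object):
    inside = "shares the class body's indentation, so it becomes a class attribute"

outside = "dedented to module level, so Foo never sees it"

print(hasattr(Foo, 'inside'))   # True
print(hasattr(Foo, 'outside'))  # False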

Removing the last line had nothing to do with it.