I'm trying to scrape the Library of Congress/THOMAS website. This Python script is intended to access a sample of 40 bills from their site (identifiers #1-40 in the URLs). I want to parse the body of each piece of legislation, search within the body/content, extract links to the potentially multiple versions & follow them. Why won't Scrapy crawl or parse?
Once on a version page, I want to parse the body of each piece of legislation, search the body/content & extract links to the potential sections & follow them.
Once on a section page, I want to parse the body of each section of the bill.
I believe there is some problem with the Rules/LinkExtractor segment of my code. The Python code executes and crawls the start URLs, but does not parse or do any of the subsequent tasks.
Three issues:
- Some bills do not have multiple versions (and ergo no links in the body portion of the URL).
- Some bills do not have linked sections because they are so short that the body portion has no links, while the body of others is nothing but links to sections.
- Some section links do not contain just the content of that particular section; much of it is redundant inclusion of content from preceding or following sections.
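On that third point, one way to cope with section pages that repeat neighboring material would be to trim the scraped body down to the target section before storing it. A minimal sketch — the `isolate_section` helper and the `SEC. n.` heading format are assumptions for illustration; the real THOMAS markup may differ:

```python
import re

def isolate_section(body, number):
    """Return only the text of section `number`, assuming sections are
    introduced by headings like 'SEC. 2.' (a hypothetical format)."""
    # Split the body at each line that begins a new 'SEC. n.' heading,
    # keeping the heading with the text that follows it (lookahead split).
    parts = re.split(r'(?m)^(?=SEC\. \d+\.)', body)
    for part in parts:
        if part.startswith('SEC. %d.' % number):
            return part.strip()
    return None  # section not present on this page

text = ("SEC. 1. Short title.\nThis Act may be cited as...\n"
        "SEC. 2. Findings.\nCongress finds the following...")
print(isolate_section(text, 2))  # only the 'SEC. 2.' block
```

The same idea could run inside a parse callback so the stored item holds one section instead of the whole page.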
Again, my question is: why is Scrapy not crawling or parsing?
from scrapy.item import Item, Field
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class BillItem(Item):
    title = Field()
    body = Field()

class VersionItem(Item):
    title = Field()
    body = Field()

class SectionItem(Item):
    body = Field()

class Lrn2CrawlSpider(CrawlSpider):
    name = "lrn2crawl"
    allowed_domains = ["thomas.loc.gov"]
    # Sample of 40 bills; the total range of bills is 1-5767.
    # Note: plain integers here -- a leading zero (e.g. 00040) makes a
    # Python 2 literal octal, so xrange(000001, 00040) only covered 1-31.
    start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill
                  for bill in xrange(1, 41)]

    rules = (
        # Extract links matching the /query/ fragment (restricted to those
        # inside the content body of the page) and follow them (follow=True).
        # Desired result: scrape all bill text &, in the event that there are
        # multiple versions, follow them & parse.
        Rule(SgmlLinkExtractor(allow=(r'/query/'), restrict_xpaths=('//div[@id="content"]')),
             callback='parse_bills', follow=True),
        # Extract links in the body of a bill version & follow them.
        # Desired result: scrape all version text &, in the event that there
        # are multiple sections, follow them & parse.
        Rule(SgmlLinkExtractor(restrict_xpaths=('//div/a[2]')),
             callback='parse_versions', follow=True)
    )

    def parse_bills(self, response):
        hxs = HtmlXPathSelector(response)
        bills = hxs.select('//div[@id="content"]')
        scraped_bills = []
        for bill in bills:
            scraped_bill = BillItem()  # Bill item defined above
            scraped_bill['title'] = bill.select('p/text()').extract()
            scraped_bill['body'] = response.body
            scraped_bills.append(scraped_bill)
        return scraped_bills

    def parse_versions(self, response):
        hxs = HtmlXPathSelector(response)
        versions = hxs.select('//div[@id="content"]')
        scraped_versions = []
        for version in versions:
            scraped_version = VersionItem()  # Version item defined above
            scraped_version['title'] = version.select('center/b/text()').extract()
            scraped_version['body'] = response.body
            scraped_versions.append(scraped_version)
        return scraped_versions

    def parse_sections(self, response):
        hxs = HtmlXPathSelector(response)
        sections = hxs.select('//div[@id="content"]')
        scraped_sections = []
        for section in sections:
            scraped_section = SectionItem()  # Section item defined above
            scraped_section['body'] = response.body
            scraped_sections.append(scraped_section)
        return scraped_sections

spider = Lrn2CrawlSpider()
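For reference, what the first rule's `SgmlLinkExtractor(allow=r'/query/')` is meant to do — collect hrefs whose URL matches the pattern — can be sketched with the standard library alone. This is a rough approximation for illustration, not Scrapy's real implementation (Scrapy additionally resolves relative URLs, deduplicates, and honors `restrict_xpaths`):

```python
import re

def extract_links(html, allow=r'/query/'):
    """Return every href in `html` whose URL matches the `allow` pattern
    (a crude stand-in for SgmlLinkExtractor's allow= filtering)."""
    pattern = re.compile(allow)
    hrefs = re.findall(r'href="([^"]+)"', html)
    return [h for h in hrefs if pattern.search(h)]

sample = ('<div id="content">'
          '<a href="/cgi-bin/query/z?c107:H.R.1:">Bill text</a>'
          '<a href="/help/faq.html">Help</a>'
          '</div>')
print(extract_links(sample))  # ['/cgi-bin/query/z?c107:H.R.1:']
```

If a page's content div yields no URLs matching the pattern (as with the bills that have no version links), the rule simply produces no requests, without raising any error — which can make a misconfigured rule look like a silent failure.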
Yes, that does help, and simply deleting the last line, "spider = [...]", does let the script run. I'm still puzzled as to why, though. When I ran the script in the debugger, it reported a syntax error at "rules = ([...]", which is why I said I believed the problem was there. I just find it strange that the script runs but performs none of its tasks; did the debugger point me in the wrong direction? Maybe I was mistaken. In any case, yes, this helps me a great deal. –