Last page not displayed in Scrapy

So my code (pasted below) almost does what I want. However, it covers 29 of the 30 result pages and then skips the last one. Ideally I would even let it go beyond that, but the site has no button for it (the page does work when you manually fill in page=31 in the link). With DEPTH_LIMIT at 29 everything is fine, but at 30 I get the following error in the command prompt:

File "C:\Users\Ewald\Scrapy\OB\OB\spiders\spider_OB.py", line 23, in parse 
next_link = 'https://zoek.officielebekendmakingen.nl/' + s.xpath('//a[@class="volgende"]/@href').extract()[0] 
IndexError: list index out of range 
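For illustration, here is a minimal repro of that traceback, assuming the last results page simply has no "volgende" (next) anchor anymore: extract() then returns an empty list, and indexing it with [0] raises IndexError.

from scrapy.selector import Selector

# Hypothetical last results page: there is no <a class="volgende"> link left
html = '<div class="lijst"><ul><li><a href="/doc1">Doc 1</a></li></ul></div>'
s = Selector(text=html)

hrefs = s.xpath('//a[@class="volgende"]/@href').extract()
print(hrefs)     # [] -- nothing matched
print(hrefs[0])  # IndexError: list index out of range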

I've tried various things, but they all elude me...

# Imports assumed from the project setup (not shown in the original post)
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider

from OB.items import TextPostItem  # project item class; module path assumed from the traceback


class OB_Crawler(CrawlSpider):
    name = 'OB5'
    allowed_domains = ["https://www.officielebekendmakingen.nl/"]
    start_urls = ["https://zoek.officielebekendmakingen.nl/zoeken/resultaat/?zkt=Uitgebreid&pst=Tractatenblad|Staatsblad|Staatscourant|BladGemeenschappelijkeRegeling|ParlementaireDocumenten&vrt=Cybersecurity&zkd=InDeGeheleText&dpr=Alle&sdt=DatumPublicatie&ap=&pnr=18&rpp=10&_page=1&sorttype=1&sortorder=4"]
    custom_settings = {
        'BOT_NAME': 'OB-crawler',
        'DEPTH_LIMIT': 30,
        'DOWNLOAD_DELAY': 0.1
    }

    def parse(self, response):
        s = Selector(response)
        # Raises IndexError on the last results page: there is no "volgende" (next) link,
        # so extract() returns an empty list and [0] is out of range.
        next_link = 'https://zoek.officielebekendmakingen.nl/' + s.xpath('//a[@class="volgende"]/@href').extract()[0]
        if len(next_link):
            yield self.make_requests_from_url(next_link)
        posts = response.selector.xpath('//div[@class = "lijst"]/ul/li')
        for post in posts:
            i = TextPostItem()
            i['title'] = ' '.join(post.xpath('a/@href').extract()).replace(';', '').replace(' ', '').replace('\r\n', '')
            i['link'] = ' '.join(post.xpath('a/text()').extract()).replace(';', '').replace(' ', '').replace('\r\n', '')
            i['info'] = ' '.join(post.xpath('a/em/text()').extract()).replace(';', '').replace(' ', '').replace('\r\n', '').replace(',', '-')
            yield i

Answer


The index-out-of-range error is the result of an incorrect xpath (you end up asking for the first item of an empty list).

Change your "next_link = ..." line to

next_link = 'https://zoek.officielebekendmakingen.nl/' + s.xpath('//a[contains(@class, "volgende")]/@href').extract()[0] 

You need to use contains(), which runs a predicate filter over the class attribute instead of requiring an exact match — which is what you want here.
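A more defensive variant (a sketch, not part of the original answer) combines the contains() predicate with extract_first(), which returns None instead of raising when the link is missing, so the last page is still parsed and the spider simply stops following:

import scrapy

class OBNextPage(scrapy.Spider):
    # Hypothetical, slimmed-down variant of the spider above; item extraction omitted.
    name = 'OB5-nextpage'
    start_urls = ['https://zoek.officielebekendmakingen.nl/zoeken/resultaat/?...']  # same query string as the original spider

    def parse(self, response):
        # ... yield TextPostItem()s here, as in the original parse() ...
        # extract_first() returns None when no "volgende" link exists (i.e. on the last page)
        next_href = response.xpath('//a[contains(@class, "volgende")]/@href').extract_first()
        if next_href:
            yield scrapy.Request(response.urljoin(next_href), callback=self.parse)

With the guard in place, the last page is parsed normally and the crawl ends on its own when there is no further link, instead of crashing with an IndexError.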
