Scrapy ignore noindex

I crawl a large number of URLs and want to know: is it possible to have Scrapy not parse pages that contain <meta name="robots" content="noindex">? Looking at the deny rules listed at http://doc.scrapy.org/en/latest/topics/link-extractors.html, it appears that deny rules only apply to URLs. Can you make Scrapy ignore pages based on an XPath?
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from wallspider.items import Website


class Spider(CrawlSpider):
    name = "browsetest"
    allowed_domains = ["www.mydomain.com"]
    start_urls = ["http://www.mydomain.com"]

    rules = (
        # Scrape every /browse/ page and keep following its links
        Rule(SgmlLinkExtractor(allow=('/browse/',)), callback="parse_items", follow=True),
        # Follow everything else, except URLs matching these noise patterns
        Rule(SgmlLinkExtractor(allow=(), unique=True, deny=(
            '/[1-9]$', '(bti=)[1-9]+(?:\.[1-9]*)?', '(sort_by=)[a-zA-Z]',
            '(sort_by=)[1-9]+(?:\.[1-9]*)?', '(ic=32_)[1-9]+(?:\.[1-9]*)?',
            '(ic=60_)[0-9]+(?:\.[0-9]*)?', '(search_sort=)[1-9]+(?:\.[1-9]*)?',
            'browse-ng.do\?', '/page/', '/ip/', 'out\+value', 'fn=',
            'customer_rating', 'special_offers', 'search_sort=&', 'facet='))),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//html')
        items = []
        for site in sites:
            item = Website()
            item['url'] = response.url
            item['canonical'] = site.select('//head/link[@rel="canonical"]/@href').extract()
            item['robots'] = site.select('//meta[@name="robots"]/@content').extract()
            items.append(item)
        return items
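What I'd like to avoid is having to guard every callback by hand, along these lines (a sketch; the early return is the only change to parse_items above):

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    # Bail out before building any items if the page opts out via noindex
    robots = hxs.select('//meta[@name="robots"]/@content').extract()
    if robots and 'noindex' in robots[0].lower():
        return []
    sites = hxs.select('//html')
    items = []
    for site in sites:
        item = Website()
        item['url'] = response.url
        item['canonical'] = site.select('//head/link[@rel="canonical"]/@href').extract()
        item['robots'] = site.select('//meta[@name="robots"]/@content').extract()
        items.append(item)
    return items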
Do you want to skip retrieving those pages? If so, that is not possible, because in order to find the meta robots tag you have to retrieve the page first. – Rolando
Sorry, I have reworded my question. Is it possible to have it not parse URLs that contain <meta name="robots" content="noindex">? –
Can I deny an XPath? –
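The link extractors have no deny-by-XPath option, but the effect can be approximated with a downloader middleware that drops any response matching a given XPath before it ever reaches the spider. This is a sketch only, using the same contrib-era API as the spider above; DenyXPathMiddleware is a made-up name and would still need to be enabled under DOWNLOADER_MIDDLEWARES in settings.py:

from scrapy.exceptions import IgnoreRequest
from scrapy.selector import HtmlXPathSelector


class DenyXPathMiddleware(object):
    # Hypothetical middleware: discard responses whose body matches this XPath
    deny_xpath = '//meta[@name="robots"][contains(@content, "noindex")]'

    def process_response(self, request, response, spider):
        # Only HTML bodies can be queried with an XPath selector
        if 'text/html' in response.headers.get('Content-Type', ''):
            if HtmlXPathSelector(response).select(self.deny_xpath):
                raise IgnoreRequest('noindex page: %s' % response.url)
        return response

One trade-off to note: dropping the response here also stops the CrawlSpider from extracting further links from noindex pages; if those pages should still be crawled for links, the early-return check inside the callback is the safer place.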