
Scrapy: parse a URL list after logging in

I'm not very familiar with Python, so please bear with me. I have a Scrapy spider that works as it should, but now I need to make a new one, and this time it should crawl inside a logged-in session. My spider uses as start_urls a list of URLs obtained from a sitemap; it should submit a request to the login form and then, once logged in, start parsing my list.

Here is my code so far:

# Imports assumed from context; the original post doesn't show them.
from time import strftime, gmtime
import logging
import os

from scrapy import Spider, log
from scrapy.http import Request, FormRequest
from scrapy.selector import Selector

from products.items import MyPrices  # item class assumed to live here


class StockPricesSpider(Spider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    d = strftime("%Y-%m-%d", gmtime())
    start_urls = ['https://www.example.com/customer/account/login/']

    def parse(self, response):
        # start_urls points at the login page, so the first response
        # is the login form; fill it in and submit
        return [FormRequest.from_response(response,
                formdata={'username': 'myuser', 'password': 'mypass'},
                callback=self.after_login)]

    def after_login(self, response):
        # check login succeeded before going on
        if "Invalid login or password." in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        else:
            logging.log(logging.INFO, 'Logged in and start parsing')
            return Request("http://www.example.com/", callback=self.parse_products)

    def parse_products(self, response):
        f = open("data/sitemaps/urls04102015.txt")
        start_urls = [url.strip() for url in f.readlines()]
        f.close()
        d = strftime("%Y-%m-%d", gmtime())
        if os.path.exists("data/results/stock_" + d + ".csv"):
            os.remove("data/results/stock_" + d + ".csv")

        sel = Selector(response)
        separator = ";"
        items = []

        item = MyPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        if len(sku) > 0:
            item['sku'] = "med_" + sku[0].strip()
            ...
        items.append(item)
        return items

So this doesn't work, because I'm not invoking the parsing correctly. Basically, I get no errors, but the URLs don't get parsed either. The login itself works, I sign in successfully, but after that (after login), how do I get Scrapy to do its job (parse the URL list)?

EDIT: I found a new approach to my problem, but it doesn't work correctly either. Please help me debug this one (or the first approach).

# Same assumed imports as above, plus InitSpider
# (in Scrapy 1.0 it lives in scrapy.spiders.init):
from scrapy.spiders.init import InitSpider

from products.items import StockPrices  # item class assumed


class StockPricesSpiderX(InitSpider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    login_page = 'https://www.example.com/ro/customer/account/login/'
    d = strftime("%Y-%m-%d", gmtime())
    f = open("data/sitemaps/urls04102015.txt")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()
    if os.path.exists("data/results/stock_" + d + ".csv"):
        os.remove("data/results/stock_" + d + ".csv")

    def init_request(self):
        """ Called before crawler starts """
        logging.log(logging.INFO, 'before crawler starts...')
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """ Generate login request """
        logging.log(logging.INFO, 'do login...')
        return FormRequest.from_response(response,
                formdata={'name': 'myuser', 'password': 'mypass'},
                callback=self.check_login_response)

    def check_login_response(self, response):
        """ Check the response returned by login request to see if we are logged in """
        if "Invalid login or password." in response.body:
            logging.log(logging.INFO, '... BAD LOGIN ...')
        else:
            logging.log(logging.INFO, 'GOOD LOGIN... initialize')
            self.initialized()

    def parse_item(self, response):
        sel = Selector(response)
        separator = ";"
        items = []
        item = StockPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        ...
        items.append(item)
        return items

The log from the run shows this:

2015-12-03 14:54:16 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot) 
2015-12-03 14:54:16 [scrapy] INFO: Optional features available: ssl, http11 
2015-12-03 14:54:16 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'products.spiders', 'FEED_URI': 'calinxautomat.csv', 'LOG_LEVEL': 'INFO', 'DUPEFILTER_CLASS': 'scrapy.dupefilter.RFPDupeFilter', 'SPIDER_MODULES': ['products.spiders'], 'DEFAULT_ITEM_CLASS': 'products.items.Subcategories', 'FEED_FORMAT': 'csv'} 
2015-12-03 14:54:21 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState 
2015-12-03 14:54:23 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2015-12-03 14:54:23 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2015-12-03 14:54:23 [scrapy] INFO: Enabled item pipelines: myWriteToCsv 
2015-12-03 14:54:23 [root] INFO: before crawler starts... 
2015-12-03 14:54:23 [scrapy] INFO: Spider opened 
2015-12-03 14:54:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2015-12-03 14:54:25 [root] INFO: do login... 
2015-12-03 14:54:26 [scrapy] INFO: Closing spider (finished) 
2015-12-03 14:54:26 [scrapy] INFO: Dumping Scrapy stats: 

...

This one doesn't seem to get past the login stage... it's as if the callback never returns from the FormRequest... What am I doing wrong?

Answer


The assignment to start_urls in parse_products() creates a variable local to that method, not the class attribute you set at the top of the spider. In any case, I don't think assigning to start_urls would do what you want; Scrapy won't notice the change and go parse them. What you need to do is queue the new URLs for parsing:

for url in f.readlines():
    yield Request(url.strip(), callback=self.parse_products)
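
For context, a minimal sketch of how after_login() from the first spider could queue the sitemap URLs itself, instead of assigning them to a local start_urls (the file path and callback name follow the question's code; treat the details as illustrative):

    def after_login(self, response):
        # check login succeeded before going on
        if "Invalid login or password." in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        # yield one Request per sitemap URL; Scrapy schedules each
        # of them for download and calls parse_products on the result
        with open("data/sitemaps/urls04102015.txt") as f:
            for url in f.readlines():
                yield Request(url.strip(), callback=self.parse_products)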

Update: regarding your edit: Scrapy has a URL filter, so it won't revisit pages it has already requested. See this; tl;dr: set dont_filter=True in the FormRequest.
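
Applied to the InitSpider version from the question, that would look roughly like this (a sketch; dont_filter=True is the only change):

    def login(self, response):
        """ Generate login request """
        logging.log(logging.INFO, 'do login...')
        return FormRequest.from_response(response,
                formdata={'name': 'myuser', 'password': 'mypass'},
                callback=self.check_login_response,
                # the form posts back to the already-visited login URL,
                # so tell the duplicate filter not to drop this request
                dont_filter=True)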


I do get the URL list with that code, I'm sure of it. I'll try your suggestion and get back to you. I've also found a new approach, so please look at my edited question and let me know what you think... – user1137313


Updated my answer – Steve