2016-11-28 25 views

Starting from my URL https://cars.mail.ru/reviews/renault/?year=2010-2016, I need Python to open review pages at URLs like these:

https://cars.mail.ru/reviews/renault/sandero_stepway/2015/143355/ 
https://cars.mail.ru/reviews/renault/sandero/2015/147850/ 
https://cars.mail.ru/reviews/renault/sandero/2012/147529/ 
https://cars.mail.ru/reviews/renault/duster/2014/147433/ 
https://cars.mail.ru/reviews/renault/logan/2011/146991/ 
https://cars.mail.ru/reviews/renault/duster/2015/146645/ 

I need to open all the links on that page (and on the following pages) and open every one of them. How can I do this quickly? This is what I use:

import urllib2
from bs4 import BeautifulSoup

models = ['11', '12', '14', '15', '16', '17', '18', '19', '20', '21', '25', '30', '4', '5', '6', '9',
          'avantime', 'clio', 'clio_rs', 'duster', 'espace', 'estafette', 'express', 'fluence',
          'fuego', 'grand_espace', 'grand_scenic', 'kangoo', 'kaptur', 'koleos', 'laguna', 'latitude',
          'logan', 'mascott', 'master', 'megane', 'megane_rs', 'modus', 'safrane', 'sandero', 'sandero_stepway',
          'scenic', 'symbol', 'trafic', 'twingo', 'vel_satis']
years = ['2010', '2011', '2012', '2013', '2014', '2015', '2016']
pattern = 'https://cars.mail.ru/reviews/renault/'

for model in models:
    for year in years:
        for i in range(143350, 143360):
            res = pattern + model + '/' + year + '/' + str(i)
            try:
                page = urllib2.urlopen(res).read()
                print page
                soup = BeautifulSoup(page, 'html.parser')
            except urllib2.URLError:
                continue

but it takes so much time.

Answer


You are making

len(models) * len(years) * (143360 - 143350) # 3220 

HTTP requests. Even if each one takes only a second, you will be busy for almost an hour. You could try multiprocessing.
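Since the requests are I/O-bound, a thread pool parallelizes them well even under the GIL. A minimal sketch using `concurrent.futures` (Python 3 here; the model/year lists are shortened, and `fetch()` is stubbed so the sketch runs without network access — in real use it would call `urllib.request.urlopen(url).read()`):

```python
from concurrent.futures import ThreadPoolExecutor

models = ['duster', 'logan', 'sandero']   # shortened for the sketch
years = ['2014', '2015']
pattern = 'https://cars.mail.ru/reviews/renault/'

# Build the full list of candidate URLs up front.
urls = [pattern + m + '/' + y + '/' + str(i) + '/'
        for m in models for y in years for i in range(143350, 143360)]

def fetch(url):
    # Real use: return urllib.request.urlopen(url).read()
    return url  # stub: just echo the URL so the sketch is runnable offline

# 20 worker threads issue requests concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=20) as pool:
    pages = list(pool.map(fetch, urls))

print(len(pages))  # 3 models * 2 years * 10 ids = 60
```

This cuts wall-clock time roughly by the number of workers, but it does not change the total request count.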


That was a test example; in the real data I have `for i in range(0, 150000):` –


Even worse. That is 48,300,000 requests. No Python code can fix that; you will have to reduce the number of requests. – 2016-11-28 13:13:19