How to avoid HTTP Error 429 (Too Many Requests) in Python
I am trying to use Python to log in to a website and gather information from several web pages, and I get the following error:
Traceback (most recent call last):
  File "extract_test.py", line 43, in <module>
    response=br.open(v)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in _mech_open
    raise response
mechanize._response.httperror_seek_wrapper: HTTP Error 429: Unknown Response Code
I used time.sleep() and it works, but it does not seem smart or reliable. Is there any other way to dodge this error?
Here is my code:
import mechanize
import cookielib
import re
first=("example.com/page1")
second=("example.com/page2")
third=("example.com/page3")
fourth=("example.com/page4")
## I have seven URLs I want to open
urls_list=[first,second,third,fourth]
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Log in credentials
br.open("example.com")
br.select_form(nr=0)
br["username"] = "username"
br["password"] = "password"
br.submit()
for url in urls_list:
    response = br.open(url)
    print re.findall("Some String", response.read())
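If a plain time.sleep() between every request feels too blunt, one alternative is to sleep only when the server actually answers 429, backing off a little longer on each retry. Below is a minimal sketch of that idea wired into the loop above; the helper name open_with_backoff, the delay values, and the retry limit are illustrative choices of mine, not anything mechanize itself provides (the traceback shows the 429 being raised as a subclass of mechanize.HTTPError, which is what the sketch catches).

import time

def open_with_backoff(browser, url, max_retries=5, base_delay=2):
    # Retry browser.open() with exponential backoff when the server answers 429.
    for attempt in range(max_retries):
        try:
            return browser.open(url)
        except mechanize.HTTPError, e:
            if e.code != 429:
                raise
            # Wait longer after each rejected attempt: 2s, 4s, 8s, ...
            wait = base_delay * (2 ** attempt)
            print "Got 429, sleeping %d seconds before retrying %s" % (wait, url)
            time.sleep(wait)
    raise RuntimeError("Still getting 429 after %d retries for %s" % (max_retries, url))

for url in urls_list:
    response = open_with_backoff(br, url)
    print re.findall("Some String", response.read())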
There's no way around it; this is server-side enforcement that keeps track of how many requests per time unit you make. If you exceed that limit, you get temporarily blocked. Some servers send this information in the headers, but those cases are rare. Check the headers you receive from the server and use whatever information is available. If there is none, check how fast you can hammer the server without getting caught and use a sleep. – Torxed
http://stackoverflow.com/questions/15648272/how-do-you-view-the-request-headers-that-mechanize-is-using – Torxed
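Following Torxed's suggestion to look at the headers: when a server does announce how long to wait, it usually does so in a Retry-After header on the 429 response. A rough sketch of honouring that header with mechanize might look like the snippet below, meant to replace the br.open(url) call inside the loop; it assumes Retry-After carries a number of seconds (it can also be an HTTP date, which this snippet ignores).

import time

try:
    response = br.open(url)
except mechanize.HTTPError, e:
    if e.code != 429:
        raise
    # Some servers say how long to wait in the Retry-After header.
    retry_after = e.info().getheader("Retry-After")
    if retry_after is None:
        raise
    print "Server asked us to wait %s seconds" % retry_after
    time.sleep(int(retry_after))
    response = br.open(url)  # one retry after the requested pause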