Checking that a URL is reachable with urllib2.Request

So I have the following code to verify that certain URLs are valid. I only need a 200 response, so I made a script that works, but it's far too slow:
import urllib2

def my_range(start, end, step):
    while start <= end:
        yield start
        start += step

url = 'http://exemple.com/test/'
y = 1
for x in my_range(1, 5, 1):
    y = y + 1
    url += str(y)
    print url
    req = urllib2.Request(url)
    try:
        resp = urllib2.urlopen(req)
    except urllib2.URLError, e:
        if e.code == 404:
            print "404"
        else:
            print "not 404"
    else:
        print "200"
        body = resp.read()
    url = 'http://exemple.com/test/'
In this example I assume I have the corresponding directories on my localhost, which gives this output:
http://exemple.com/test/2
200
http://exemple.com/test/3
200
http://exemple.com/test/4
404
http://exemple.com/test/5
404
http://exemple.com/test/6
404
So I searched for how to do this faster and I found this code:
import urllib2

request = urllib2.Request('http://www.google.com/')
response = urllib2.urlopen(request)
if response.getcode() == 200:
    print "200"
It seems faster, but when I test it with a 404 such as http://www.google.com/111, it gives me this:
Traceback (most recent call last):
  File "C:\Python27\res.py", line 3, in <module>
    response = urllib2.urlopen(request)
  File "C:\Python27\lib\urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python27\lib\urllib2.py", line 400, in open
    response = meth(req, response)
  File "C:\Python27\lib\urllib2.py", line 513, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python27\lib\urllib2.py", line 438, in error
    return self._call_chain(*args)
  File "C:\Python27\lib\urllib2.py", line 372, in _call_chain
    result = func(*args)
  File "C:\Python27\lib\urllib2.py", line 521, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 404: Not Found
Any ideas, guys? Thanks a lot for your help :)
Why not use a try/except statement? That should do the trick. See also: http://stackoverflow.com/questions/1947133/urllib2-urlopen-vs-urllib-urlopen-urllib2-throws-404-while-urllib-works-w – oliver13
I started learning Python 5 hours ago, lol. I only have a little experience with other languages, so some explanation would help a lot, thanks :) – Ez0r
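To expand on the comment above: `urlopen` raises an `HTTPError` for any non-2xx status, and that exception object itself carries the status code, so catching it lets you read the code without a traceback. Here is a minimal sketch of the pattern, written with Python 3's `urllib.request` (the module that replaced `urllib2`); the `check_url` helper and the throwaway local test server are illustrative, not part of the original question:

```python
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_url(url):
    """Return the HTTP status code of url without raising on 4xx/5xx."""
    try:
        return urllib.request.urlopen(url).getcode()
    except urllib.error.HTTPError as e:
        return e.code  # HTTPError carries the status code

# A tiny local server standing in for exemple.com: /test/2 and /test/3
# exist, everything else is 404.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in ('/test/2', '/test/3'):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'ok')
        else:
            self.send_error(404)
    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(('127.0.0.1', 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = 'http://127.0.0.1:%d/test/' % server.server_address[1]

for n in range(2, 7):
    print(base + str(n), check_url(base + str(n)))
```

If you only care about the status and not the body, sending a HEAD request (`urllib.request.Request(url, method='HEAD')` in Python 3) avoids downloading the response body, which is usually the slow part.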