How do I catch HTTP errors, for example 404 and 403, for a page fetched with urllib(2) in Python?
Is there a quick way to do this, without a big class wrapper?
Added information (stack trace):
Traceback (most recent call last):
File "test.py", line 3, in <module>
page = urllib2.urlopen("http://localhost:4444")
File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
File "/usr/lib/python2.6/urllib2.py", line 391, in open
response = self._open(req, data)
File "/usr/lib/python2.6/urllib2.py", line 409, in _open
'_open', req)
File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
File "/usr/lib/python2.6/urllib2.py", line 1161, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.6/urllib2.py", line 1136, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>
This works for 403, but not for 404: 'urllib2.URLError: <urlopen error [Errno 111] Connection refused>' – Ockonal 2010-07-15 14:41:29
That's because it isn't a 404 error you're seeing. The error message says "Connection refused", not "Page not found". – 2010-07-15 14:50:51
Yes, you're right. But before it there is a long exception traceback, so this is an uncaught exception, right? The main problem is catching the exact error. Can you help? – Ockonal 2010-07-15 14:56:18
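A minimal sketch of the distinction the comments are circling around: urllib2 raises HTTPError (a subclass of URLError) when the server actually answers with a 4xx/5xx status such as 404 or 403, and a plain URLError when no response arrives at all (e.g. connection refused, as in the traceback above). Catching HTTPError first, then URLError, separates the two cases without any class wrapper. The `check` helper name is ours; the import fallback to Python 3's urllib.request is an assumption for running the same sketch on modern interpreters:

```python
try:
    import urllib2 as urlreq                      # Python 2, as in the question
    HTTPError, URLError = urlreq.HTTPError, urlreq.URLError
except ImportError:                               # Python 3: urllib2 was split up
    import urllib.request as urlreq
    from urllib.error import HTTPError, URLError


def check(url):
    """Fetch url and describe what happened, instead of letting the
    exception propagate as an uncaught traceback."""
    try:
        urlreq.urlopen(url)
        return "ok"
    except HTTPError as e:
        # The server responded, but with an error status (404, 403, ...).
        return "http %d" % e.code
    except URLError as e:
        # No HTTP response at all: DNS failure, connection refused, ...
        return "unreachable: %s" % e.reason
```

Order matters: because HTTPError subclasses URLError, the `except HTTPError` clause must come first, or the status-specific branch would never run.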