
I have a program that fetches the content of URLs stored in a database, using BeautifulSoup and urllib2. When I output the results, the program crashes on what looks like a 403 error. How can I keep my program from crashing on 403/404 and similar errors? (python, urllib2, crash on 404 error)

Relevant output:

Traceback (most recent call last):
  File "web_content.py", line 29, in <module>
    grab_text(row)
  File "web_content.py", line 21, in grab_text
    f = urllib2.urlopen(row)
  File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 400, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 513, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 438, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 372, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 521, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
You probably want to use exceptions – Asterisk 2012-04-12 05:29:14

@Asterisk I see. I'm new to Python. Thanks! – yayu 2012-04-12 05:31:15
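As the comment suggests, the usual fix is to wrap the urlopen call in try/except so that a single 403/404 does not abort the whole run. A minimal sketch, assuming grab_text receives the URL string as row (names taken from the traceback); the skip-and-print handling is only illustrative:

import urllib2

def grab_text(row):
    try:
        f = urllib2.urlopen(row)
    except urllib2.HTTPError, e:
        # HTTP status errors such as 403/404 land here;
        # catch HTTPError before URLError (it is a subclass)
        print "Skipping %s: HTTP error %d" % (row, e.code)
        return None
    except urllib2.URLError, e:
        # lower-level failures: DNS errors, refused connections, timeouts
        print "Skipping %s: %s" % (row, e.reason)
        return None
    return f.read()

With this, a bad URL is reported and skipped, and the loop over the database rows keeps running instead of raising an uncaught HTTPError.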

Answer