My code is as follows — Python isn't catching the HTTPError:
import json
import urllib2
from urllib2 import HTTPError

def karma_reddit(user):
    while True:
        try:
            url = "https://www.reddit.com/user/" + str(user) + ".json"
            data = json.load(urllib2.urlopen(url))
        except urllib2.HTTPError as err:
            if err == "Too Many Requests":
                continue
            if err == "Not Found":
                print str(user) + " isn't a valid username."
            else:
                raise
        break
I'm trying to get data from a user's Reddit profile, but HTTPErrors keep occurring. Even though I try to catch them with the except clause, they keep appearing, and the program neither runs another iteration of the loop nor executes the print statement. How do I manage to catch the HTTPErrors? I'm quite new to Python, so this is probably a rookie mistake. Thanks!
Thanks Padraic, that worked! – cpat
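For context, the except clause above is in fact reached, but `err == "Too Many Requests"` compares the exception object itself to a string, which is never true, so control falls through to `raise`. A minimal sketch of the usual fix is to branch on `err.code`, the numeric HTTP status carried by the exception. It is shown here in Python 3 syntax (`urllib.request` / `urllib.error`; the same `.code` attribute exists on `urllib2.HTTPError` in Python 2), and the 2-second backoff before retrying is an arbitrary choice, not something from the original question:

```python
import json
import time
import urllib.request
import urllib.error

def karma_reddit(user):
    """Fetch a Reddit user's profile JSON, retrying on rate limits."""
    url = "https://www.reddit.com/user/" + str(user) + ".json"
    while True:
        try:
            # Success: return the parsed JSON and leave the loop.
            return json.load(urllib.request.urlopen(url))
        except urllib.error.HTTPError as err:
            if err.code == 429:    # Too Many Requests: back off, then retry
                time.sleep(2)
                continue
            if err.code == 404:    # Not Found: invalid username
                print(str(user) + " isn't a valid username.")
                return None
            raise                  # any other status is unexpected
```

Comparing `err.msg` to `"Not Found"` would also work, but the numeric `err.code` is more robust since the reason phrase can vary between servers.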