2017-04-02 34 views
0

Fetching a website with urllib results in an HTTP 405 error

I'm learning BeautifulSoup and am trying to write a small script that looks for houses on a Dutch real-estate website. As soon as I try to fetch the site's contents, I immediately get an HTTP 405 error:

Traceback (most recent call last):
  File "funda.py", line 2, in <module>
    html = urlopen("http://www.funda.nl")
  File "<folders>request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "<folders>request.py", line 532, in open
    response = meth(req, response)
  File "<folders>request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "<folders>request.py", line 570, in error
    return self._call_chain(*args)
  File "<folders>request.py", line 504, in _call_chain
    result = func(*args)
  File "<folders>request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 405: Not Allowed

What I'm trying to run:

from urllib.request import urlopen 
html = urlopen("http://www.funda.nl") 

Any idea why this results in an HTTP 405? I'm only doing a GET request, right?
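For context: `urlopen` does send a plain GET, but it also announces itself through a default `User-Agent` header of the form `Python-urllib/3.x`, which is what many sites key on to reject scripts. A minimal sketch, without touching the network, showing the default header urllib's opener attaches:

```python
import urllib.request

# build_opener() constructs a default OpenerDirector; its addheaders
# list holds the User-Agent that urllib sends with every request.
opener = urllib.request.build_opener()
print(opener.addheaders)  # e.g. [('User-agent', 'Python-urllib/3.11')]
```

Overriding that header, as the answer below does, is what makes the request look like a normal browser visit.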

import urllib 
html = urllib.urlopen("http://www.funda.nl") 

leovp's comment makes sense:

+2

This is definitely a GET request, but you are being detected as a bot, and this particular server sends a 405 status code in that case. Try adjusting the headers to look like a normal browser. – leovp

+0

Related - https://stackoverflow.com/questions/27652543/how-to-use-python-requests-to-fake-a-browser-visit?noredirect=1&lq=1 –

Answers

2

Possible duplicate of HTTPError: HTTP Error 403: Forbidden. You need to pretend to be a regular visitor. That is usually done by sending a common/regular User-Agent HTTP header (which one works varies from site to site).

>>> url = "http://www.funda.nl" 
>>> import urllib.request 
>>> req = urllib.request.Request(
...  url, 
...  data=None, 
...  headers={ 
...   'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36' 
...  } 
...) 
>>> f = urllib.request.urlopen(req) 
>>> f.status, f.msg 
(200, 'OK') 

Or, using the requests library:

>>> import requests 
>>> response = requests.get(
...  url, 
...  headers={ 
...   'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36' 
...  } 
...) 
>>> response.status_code 
200 
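Once the page fetch succeeds, the HTML can be handed to BeautifulSoup, which is what the question set out to do. A minimal sketch, assuming BeautifulSoup 4 is installed and using a made-up HTML snippet (the `search-result` class and listing titles are hypothetical, not funda.nl's real markup):

```python
from bs4 import BeautifulSoup

# Stand-in for response.text; funda.nl's real markup will differ.
html = """
<div class="search-result"><h3>Koopwoning, Amsterdam</h3></div>
<div class="search-result"><h3>Koopwoning, Utrecht</h3></div>
"""

soup = BeautifulSoup(html, "html.parser")
# CSS selector: every <h3> inside an element with class "search-result".
titles = [h3.get_text() for h3 in soup.select(".search-result h3")]
print(titles)  # ['Koopwoning, Amsterdam', 'Koopwoning, Utrecht']
```

In practice you would pass `response.text` (or `f.read()` from the urllib example) to `BeautifulSoup` instead of a literal string.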
-2

This works if you aren't using requests, or with urllib2.