2017-10-20 97 views

I'm scraping an XML sitemap that contains special characters such as é, which causes a Scrapy error on URLs with special characters:

ERROR: Spider error processing <GET [URL with '%C3%A9' instead of 'é']> 

How can I get Scrapy to keep the original URL unchanged, i.e. with its special characters?

Scrapy == 1.3.3

Python == 3.5.2 (I need to stick with these versions)

Update: per https://stackoverflow.com/a/17082272/6170115, I was able to get the URL with the correct characters using unquote.

Example usage:

>>> from urllib.parse import unquote 
>>> unquote('ros%C3%A9') 
'rosé' 

I also tried my own Request subclass with safe_url_string, but I ended up with:

UnicodeEncodeError: 'ascii' codec can't encode character '\xf9' in position 25: ordinal not in range(128) 

Full traceback:

[scrapy.core.scraper] ERROR: Error downloading <GET [URL with characters like ù]> 
Traceback (most recent call last):
  File "/usr/share/anaconda3/lib/python3.5/site-packages/twisted/internet/defer.py", line 1384, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/twisted/python/failure.py", line 393, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
    result = f(*args, **kw)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/__init__.py", line 65, in download_request
    return handler.download_request(request, spider)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/http11.py", line 61, in download_request
    return agent.download_request(request)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/http11.py", line 260, in download_request
    agent = self._get_agent(request, timeout)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/http11.py", line 241, in _get_agent
    scheme = _parse(request.url)[0]
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/webclient.py", line 37, in _parse
    return _parsed_url_args(parsed)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/webclient.py", line 19, in _parsed_url_args
    path = b(path)
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/webclient.py", line 17, in <lambda>
    b = lambda s: to_bytes(s, encoding='ascii')
  File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/utils/python.py", line 120, in to_bytes
    return text.encode(encoding, errors)
UnicodeEncodeError: 'ascii' codec can't encode character '\xf9' in position 25: ordinal not in range(128)
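The last frames show the root cause: Scrapy's webclient converts the URL path to bytes with `encoding='ascii'`, which cannot represent characters like ù. A minimal stdlib reproduction of that failure mode (the path below is a made-up example):

```python
from urllib.parse import quote

# Hypothetical URL path containing non-ASCII characters.
path = '/wines/rosé-and-où'

try:
    # This is effectively what to_bytes(s, encoding='ascii') does in webclient.py.
    path.encode('ascii')
except UnicodeEncodeError as e:
    print(e.reason)  # 'ordinal not in range(128)'

# Percent-encoding the path first keeps it pure ASCII, which is why
# Scrapy normalizes request URLs before downloading.
print(quote(path))  # '/wines/ros%C3%A9-and-o%C3%B9'
```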

Any hints?


Please take a look at my [answer](https://stackoverflow.com/questions/42445087/force-python-scrapy-not-to-encode-url) to a similar question. Maybe you can apply that technique to your use case. –


The real problem is here: https://stackoverflow.com/questions/47563095/json-url-sometimes-returns-a-null-response and is answered here: https://stackoverflow.com/a/47564798/6170115 – happyspace

Answers


I don't think you can do this, because Scrapy uses safe_url_string from the w3lib library before storing a Request's URL. You would somehow have to reverse that.
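To illustrate what that normalization does: for non-ASCII path characters, safe_url_string behaves much like the stdlib quote function, so a round-trip can be sketched with urllib.parse alone (the URL below is a made-up example):

```python
from urllib.parse import quote, unquote

# What the URL normalization effectively does to a non-ASCII path:
original = 'http://example.com/rosé'            # hypothetical URL
stored = 'http://example.com/' + quote('rosé')  # 'http://example.com/ros%C3%A9'

# Reversing it after the fact, e.g. in a spider callback:
print(unquote(stored))  # 'http://example.com/rosé'
```

So a practical workaround is to call unquote(response.url) wherever the original characters are needed, rather than trying to stop Scrapy from encoding the request URL.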