
Aiohttp, asyncio: RuntimeError: Event loop is closed

I have two scripts, scraper.py and db_control.py. In scraper.py I have something like this:

... 
def scrap(category, field, pages, search, use_proxy, proxy_file): 
    ... 
    loop = asyncio.get_event_loop() 

    to_do = [ get_pages(url, params, conngen) for url in urls ] 
    wait_coro = asyncio.wait(to_do) 
    res, _ = loop.run_until_complete(wait_coro) 
    ... 
    loop.close() 

    return [ x.result() for x in res ] 

... 

And in db_control.py:

from scraper import scrap 
... 
while new < 15: 
    data = scrap(category, field, pages, search, use_proxy, proxy_file) 
    ... 
... 

In theory, the scraper should be run an indeterminate number of times, until enough data has been obtained. But when new is not immediately > 15, this error occurs:

File "/usr/lib/python3.4/asyncio/base_events.py", line 293, in run_until_complete 
self._check_closed() 
    File "/usr/lib/python3.4/asyncio/base_events.py", line 265, in _check_closed 
raise RuntimeError('Event loop is closed') 
RuntimeError: Event loop is closed 

However, if I run scrap() only once, the script works just fine. So I guess something goes wrong when the loop is recreated with loop = asyncio.get_event_loop(). I tried this, but nothing changed. How can I fix this? Of course these are only snippets of my code; if you think the problem may be elsewhere, the full code is available here.

Answer


The methods run_until_complete, run_forever, run_in_executor, create_task, and call_at explicitly check the loop and throw an exception if it is closed. From the documentation:

Quote from BaseEventLoop.close:

This is idempotent and irreversible
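
A minimal repro (my own sketch, not from the question) shows both properties: closing the loop twice is harmless, but using it afterwards raises:

import asyncio

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.sleep(0))  # fine: loop is still open
loop.close()
loop.close()                               # idempotent: a second close is a no-op
loop.run_until_complete(asyncio.sleep(0))  # RuntimeError: Event loop is closed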


Unless you have some (good) reason to do otherwise, you can simply omit the closing line:

def scrap(category, field, pages, search, use_proxy, proxy_file): 
    #... 
    loop = asyncio.get_event_loop() 

    to_do = [ get_pages(url, params, conngen) for url in urls ] 
    wait_coro = asyncio.wait(to_do) 
    res, _ = loop.run_until_complete(wait_coro) 
    #... 
    # loop.close() 
    return [ x.result() for x in res ] 
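Leaving the default loop open is usually fine for a script: every later call to scrap() picks up the same loop via asyncio.get_event_loop(), and the loop is discarded when the interpreter exits.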

If you want a brand-new loop each time, you have to create it manually and set it as the default:

def scrap(category, field, pages, search, use_proxy, proxy_file): 
    #... 
    loop = asyncio.new_event_loop() 
    asyncio.set_event_loop(loop)  
    to_do = [ get_pages(url, params, conngen) for url in urls ] 
    wait_coro = asyncio.wait(to_do) 
    res, _ = loop.run_until_complete(wait_coro) 
    #... 
    return [ x.result() for x in res ] 
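
If you do want each call to clean up after itself, a variant of the above (my sketch, not part of the original answer) closes the fresh loop in a finally block; this is safe because the next call creates its own new loop:

def scrap(category, field, pages, search, use_proxy, proxy_file):
    # ... build urls, params and conngen as before ...
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        to_do = [get_pages(url, params, conngen) for url in urls]
        res, _ = loop.run_until_complete(asyncio.wait(to_do))
        return [x.result() for x in res]
    finally:
        loop.close()  # safe to close: the next call makes a fresh loop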

Thanks! Works like a charm now :) –