
Hi, I'm a Python newbie; sorry to ask such a specific question when I don't know what's wrong. Why isn't my news-site crawler working?

I'm trying to crawl news articles from a Korean news site. When I run this code,

import sys
from bs4 import BeautifulSoup
import urllib.request
from urllib.parse import quote

target_url_b4_pn="http://news.donga.com/search?p="
target_url_b4_keyword='&query='
target_url_rest="&check_news1&more=1&sorting1&search_date1&v1=&v2=&range=1"


def get_text(URL, output_file):
    source_code_from_URL=urllib.request.urlopen(URL)
    soup=BeautifulSoup(source_code_from_URL, 'lxml', from_encoding='UTF-8')
    content_of_article=soup.select('div.article')
    for item in content_of_article:
        string_item=str(item.find_all(text=True))
        output_file.write(string_item)

def get_link_from_news_title(page_num, URL, output_file):
    for i in range(page_num):
        current_page_num=1+i*15
        position=URL.index('=')
        URL_with_page_num=URL[:position+1]+str(current_page_num)+URL[position+1:]
        source_code_from_URL=urllib.request.urlopen(URL_with_page_num)
        soup=BeautifulSoup(source_code_from_URL, 'lxml', from_encoding='UTF-8')

        for title in soup.find_all('p','tit'):
            title_link=title.select('a')
            article_URL=title_link[0]['href']
            get_text(article_URL, output_file)

def main():
    keyword="노무현"
    page_num=1
    output_file_name="output.txt"
    target_url=target_url_b4_pn+target_url_b4_keyword+quote(keyword)+target_url_rest
    output_file=open(output_file_name, "w", -1, "utf-8")
    get_link_from_news_title(page_num, target_url, output_file)
    output_file.close()


if __name__=='__main__':
    main()
print(target_url)
print(11111)

the Jupyter notebook stops responding to input, and won't even print output for simple commands at the bottom (nothing prints at all).

I think the code is freezing somehow. Could you tell me where it might be going wrong?

[screenshot: the notebook not responding]

Answers

1. In the first line of the `get_text` function, `urllib.request.urlopen(URL)` opens the URL, but just as with a file you open, you have to `read()` it. So append a `read()` call: `urllib.request.urlopen(URL).read()`. Otherwise BeautifulSoup won't be able to parse it (see the sketch after this list).

2. In your CSS selector `soup.select('div.article')`, there is no such element on the page; I guess what you want is `soup.select('div.article_txt')`, which matches the article's paragraphs.

3. `print(target_url)` should go inside your `main` function, since `target_url` is only defined in `main`.
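To illustrate point 1 on its own, here is a minimal sketch (http://example.com is a stand-in URL, and the variable names are illustrative, not part of the original code):

import urllib.request
from bs4 import BeautifulSoup

# urlopen() returns a response object; read() pulls the raw bytes out of it
html_bytes = urllib.request.urlopen("http://example.com").read()
soup = BeautifulSoup(html_bytes, 'lxml')
print(soup.title)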

Code

import sys
from bs4 import BeautifulSoup
import urllib.request
from urllib.parse import quote

target_url_b4_pn="http://news.donga.com/search?p="
target_url_b4_keyword='&query='
target_url_rest="&check_news1&more=1&sorting1&search_date1&v1=&v2=&range=1"


def get_text(URL, output_file):
    # read the response body, as described in point 1
    source_code_from_URL=urllib.request.urlopen(URL).read()
    soup=BeautifulSoup(source_code_from_URL, 'lxml', from_encoding='UTF-8')
    # change the css selector so it matches an element that exists on the page
    content_of_article=soup.select('div.article_txt')
    for item in content_of_article:
        string_item=item.find_all(text=True)
        # join the strings and write them to the file
        output_file.write(" ".join(string_item))

def get_link_from_news_title(page_num, URL, output_file):
    for i in range(page_num):
        current_page_num=1+i*15
        position=URL.index('=')
        URL_with_page_num=URL[:position+1]+str(current_page_num)+URL[position+1:]
        source_code_from_URL=urllib.request.urlopen(URL_with_page_num).read()
        soup=BeautifulSoup(source_code_from_URL, 'lxml', from_encoding='UTF-8')

        for title in soup.find_all('p','tit'):
            title_link=title.select('a')
            article_URL=title_link[0]['href']
            get_text(article_URL, output_file)

def main():
    keyword="노무현"
    page_num=1
    output_file_name="output.txt"
    target_url=target_url_b4_pn+target_url_b4_keyword+quote(keyword)+target_url_rest
    # `print(target_url)` moved here, where target_url is defined
    print(target_url)

    output_file=open(output_file_name, "w", -1, "utf-8")
    get_link_from_news_title(page_num, target_url, output_file)
    output_file.close()


if __name__=='__main__':
    main()
    print(11111)

Thank you so much! You're so kind! I changed the code, but it still hasn't produced any results after an hour... It would be great if you could suggest any improvements. –


The results will be in 'output.txt' in your current directory. Did you check that file? – Aaron
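If the script really is hanging rather than just writing slowly, one common culprit is a request that never returns. urllib.request.urlopen accepts a timeout argument, so a quick way to check is to fail fast and print progress as each page is fetched. A minimal sketch (the helper name, timeout value, and print call are illustrative additions, not part of the answer above):

import urllib.request

def fetch(url, timeout_seconds=10):
    # print progress so a hang is visible in the notebook output
    print("fetching:", url)
    # raise an error instead of blocking forever if the server stalls
    return urllib.request.urlopen(url, timeout=timeout_seconds).read()

Each urlopen(...).read() call in the script could then be replaced by fetch(...) to see which request, if any, is stalling.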