Beautiful Soup parser can't find links

I'm trying to parse an HTML document and find the links in it with Beautiful Soup, and I'm seeing some strange behavior. The page is http://people.csail.mit.edu/gjtucker/. Here is my code:

from bs4 import BeautifulSoup 
import requests 

user_agent = {'User-agent': 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17'} 

url = 'http://people.csail.mit.edu/gjtucker/'
t = requests.get(url, headers=user_agent).text

soup=BeautifulSoup(t, 'html.parser') 
for link in soup.find_all('a'):
    print(link['href'])

This prints only two links: http://www.amazon.jobs/team/speech-amazon and https://scholar.google.com/citations?user=-gJkPHIAAAAJ&hl=en, even though the page clearly has many more.

Can anyone reproduce this? Is there a specific reason it happens with this URL? Several other URLs work just fine.

Answer

The page's HTML is malformed, so you should use a more lenient parser, such as html5lib:

soup = BeautifulSoup(t, 'html5lib') 
for link in soup.find_all('a'): 
    print(link['href']) 

This prints:

http://www.amazon.jobs/team/speech-amazon 
https://scholar.google.com/citations?user=-gJkPHIAAAAJ&hl=en 
http://www.linkedin.com/pub/george-tucker/6/608/3ba 
... 
http://www.hsph.harvard.edu/alkes-price/ 
... 
http://www.nature.com/ng/journal/v47/n3/full/ng.3190.html 
http://www.biomedcentral.com/1471-2105/14/299 
pdfs/journal.pone.0029095.pdf 
pdfs/es201187u.pdf 
pdfs/sigtrans.pdf
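
To see the difference side by side, a minimal sketch like the one below (assuming requests is available and the html5lib package is installed separately, e.g. via pip install html5lib) counts the anchor tags each parser recovers from the same markup; html.parser should report only the two links from the question, while html5lib should report the full list.

from bs4 import BeautifulSoup
import requests

url = 'http://people.csail.mit.edu/gjtucker/'
html = requests.get(url).text

# Compare how many <a> tags each parser recovers from the same HTML.
for parser in ('html.parser', 'html5lib'):
    count = len(BeautifulSoup(html, parser).find_all('a'))
    print(parser, count)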