2012-07-02

I want to parse HTML pages like the one below using BeautifulSoup (I have a large number of pages to parse). What is the best way in Python to parse HTML pages whose fields do not appear in a fixed order?

I need to save all the fields from each page, but they can change dynamically (appear in a different order on different pages).

Here is an example of one page - Page 1 - and a page where the fields appear in a different order - Page 2

I have written the following code to parse the pages.

import requests 
from bs4 import BeautifulSoup 

PTiD = 7680560 

url = "http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=" + str(PTiD) + ".PN.&OS=PN/" + str(PTiD) + "&RS=PN/" + str(PTiD) 

res = requests.get(url) 

raw_html = res.content 

print "Parser Started.. " 

bs_html = BeautifulSoup(raw_html, "lxml") 

#Initialize all the Search Lists 
fonts = bs_html.find_all('font') 
para = bs_html.find_all('p') 
bs_text = bs_html.find_all(text=True) 
onlytext = [x for x in bs_text if x != '\n' and x != ' '] 

#Initialize the Indexes 
AppNumIndex = onlytext.index('Appl. No.:\n') 
FiledIndex = onlytext.index('Filed:\n ') 
InventorsIndex = onlytext.index('Inventors: ') 
AssigneeIndex = onlytext.index('Assignee:') 
ClaimsIndex = onlytext.index('Claims') 
DescriptionIndex = onlytext.index(' Description') 
CurrentUSClassIndex = onlytext.index('Current U.S. Class:') 
CurrentIntClassIndex = onlytext.index('Current International Class: ') 
PrimaryExaminerIndex = onlytext.index('Primary Examiner:') 
AttorneyOrAgentIndex = onlytext.index('Attorney, Agent or Firm:') 
RefByIndex = onlytext.index('[Referenced By]') 

#~~Title~~ 
for a in fonts: 
    if a.has_attr('size') and a['size'] == '+1': 
        d_title = a.string 
print "title: " + d_title 

#~~Abstract~~~ 
d_abstract = para[0].string 
print "abstract: " + d_abstract 

#~~Assignee Name~~ 
d_assigneeName = onlytext[AssigneeIndex +1] 
print "as name: " + d_assigneeName 

#~~Application number~~ 
d_appNum = onlytext[AppNumIndex + 1] 
print "ap num: " + d_appNum 

#~~Application date~~ 
d_appDate = onlytext[FiledIndex + 1] 
print "ap date: " + d_appDate 

#~~ Patent Number~~ 
d_PatNum = onlytext[0].split(':')[1].strip() 
print "patnum: " + d_PatNum 

#~~Issue Date~~ 
d_IssueDate = onlytext[10].strip('\n') 
print "issue date: " + d_IssueDate 

#~~Inventors Name~~ 
d_InventorsName = '' 
for x in range(InventorsIndex+1, AssigneeIndex, 2): 
    d_InventorsName += onlytext[x] 
print "inv name: " + d_InventorsName 

#~~Inventors City~~ 
d_InventorsCity = '' 

for x in range(InventorsIndex+2, AssigneeIndex, 2): 
    d_InventorsCity += onlytext[x].split(',')[0].strip().strip('(') 

d_InventorsCity = d_InventorsCity.strip(',').strip().strip(')') 
print "inv city: " + d_InventorsCity 

#~~Inventors State~~ 
d_InventorsState = '' 
for x in range(InventorsIndex+2, AssigneeIndex, 2): 
    d_InventorsState += onlytext[x].split(',')[1].strip(')').strip() + ',' 

d_InventorsState = d_InventorsState.strip(',').strip() 
print "inv state: " + d_InventorsState 

#~~ Asignee City ~~ 
d_AssigneeCity = onlytext[AssigneeIndex + 2].split(',')[1].strip().strip('\n').strip(')') 
print "asign city: " + d_AssigneeCity 

#~~ Assignee State~~ 
d_AssigneeState = onlytext[AssigneeIndex + 2].split(',')[0].strip('\n').strip().strip('(') 
print "asign state: " + d_AssigneeState 

#~~Current US Class~~ 
d_CurrentUSClass = '' 

for x in range(CurrentUSClassIndex + 1, CurrentIntClassIndex): 
    d_CurrentUSClass += onlytext[x] 
print "cur us class: " + d_CurrentUSClass 

#~~ Current Int Class~~ 
d_CurrentIntlClass = onlytext[CurrentIntClassIndex +1] 
print "cur intl class: " + d_CurrentIntlClass 

#~~~Primary Examiner~~~ 
d_PrimaryExaminer = onlytext[PrimaryExaminerIndex +1] 
print "prim ex: " + d_PrimaryExaminer 

#~~d_AttorneyOrAgent~~ 
d_AttorneyOrAgent = onlytext[AttorneyOrAgentIndex +1] 
print "agent: " + d_AttorneyOrAgent 

#~~ Referenced by ~~ 
d_ReferencedBy = '' 

for x in range(RefByIndex + 2, RefByIndex + 400): 
    if ('Foreign' in onlytext[x]) or ('Primary' in onlytext[x]): 
        break 
    else: 
        d_ReferencedBy += onlytext[x] 
print "ref by: " + d_ReferencedBy 

#~~Claims~~ 
d_Claims = '' 

for x in range(ClaimsIndex , DescriptionIndex): 
    d_Claims += onlytext[x] 
print "claims: " + d_Claims 

I put all the text from the page into a list (using BeautifulSoup's find_all(text=True)). Then I try to find the index of each field name, walk the list from that position, and save the members into a string until I reach the index of the next field.

When I tried the code on several different pages, I noticed that the structure of the members changes and I cannot find their indexes in the list. For example, I search for the index of '123', and on some pages it shows up in the list as '12', '3'.

Can you think of any other way to parse these pages generically?

Thanks.

For the pattern, I have updated my post – pinkdawn

Answer

If you use BeautifulSoup and have the DOM <p>123</p>, find_all(text=True) will give you ['123'].

But if you have the DOM <p>12<b>3</b></p>, which has the same semantics as before, BeautifulSoup will give you ['12', '3'].
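This behaviour is easy to reproduce in isolation (a minimal sketch using the built-in html.parser backend):

```python
from bs4 import BeautifulSoup

# A text node interrupted by an inline tag comes back as two strings...
soup = BeautifulSoup('<p>12<b>3</b></p>', 'html.parser')
print(soup.find_all(text=True))   # ['12', '3']

# ...but get_text() on the enclosing tag re-joins them.
print(soup.p.get_text())          # '123'
```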

Maybe you just need to find the tag that breaks up the ['123'], and ignore/strip that tag first.

How to eliminate the <b> tags:

import re 
html = '<p>12<b>3</b></p>' 
reExp = r'</?b[^<>]*?>' 
print re.sub(reExp, '', html) 
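Alternatively, without a regex, BeautifulSoup itself can do the cleanup - a sketch using the standard unwrap() and smooth() Tag methods (smooth() needs bs4 >= 4.8):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>12<b>3</b></p>', 'html.parser')
for b in soup.find_all('b'):
    b.unwrap()          # drop the <b> tag but keep its text
soup.smooth()           # merge the now-adjacent text nodes
print(soup.find_all(text=True))   # ['123']
```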

For the pattern, as some mock code, you could do something like this:

import re 
patterns = r'<TD align=center>(?P<VALUES_TO_FIND>.*?)</TD>' 
print re.findall(patterns, your_html) 
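On a made-up row of table cells (the HTML below is an invented sample, not taken from the real page), the pattern pulls out each cell's contents; note that with a single group, re.findall returns just that group:

```python
import re

html = '<TD align=center>5,123,456</TD><TD align=center>June 1992</TD>'
patterns = r'<TD align=center>(?P<VALUES_TO_FIND>.*?)</TD>'
print(re.findall(patterns, html))   # ['5,123,456', 'June 1992']
```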
And the pattern? What if I want to find content by searching for what comes before and after it? For example, if I have the HTML code: Reissue of: **VALUES_TO_FIND**, and I know for sure that the code before and after **VALUES_TO_FIND** is always the same - how can I find it using re? Thanks. – Rgo
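If the markup before and after the value really is fixed, one way is to put that fixed text on both sides of a capture group (a sketch; the snippet and the `<b>` markers are invented for illustration):

```python
import re

html = 'Reissue of: <b>07/123,456</b>'          # invented sample snippet
m = re.search(r'Reissue of: <b>(.*?)</b>', html)
if m:
    print(m.group(1))   # '07/123,456'
```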

@Rgo I have updated the main post for your reference – pinkdawn
