2017-02-19

I've been learning Python as my first language for a few months, and I'm trying to build a web scraper that, instead of relying on URLs I hand it, scrapes the site to collect the URLs for me, generating a list of URLs to feed into the scraper.

I've identified which parts of the site contain the URLs I need, and I know (or think) that I need two lists to do what I want.

The first is a list of city URLs, and the second is a list of unit URLs found on those city pages. It's the unit URLs that I ultimately want to iterate over and pull data from. So far I have the following code:

import urllib.request
from bs4 import BeautifulSoup

def get_cities():
    city_sauce = urllib.request.urlopen('the_url')
    city_soup = BeautifulSoup(city_sauce, 'html.parser')
    the_city_links = []
    for city in city_soup.findAll('div', class_="city-location-menu"):
        for a in city.findAll('a', href=True, text=True):
            the_city_links.append('first_half_of_url' + a['href'])
    return the_city_links

When I print this out it shows all the URLs I need, so I think I've successfully created the list of links?

The second part is as follows:

def get_units():
    for theLinks in get_cities():
        unit_sauce = urllib.request.urlopen(theLinks)
        unit_soup = BeautifulSoup(unit_sauce, 'html.parser')
        the_unit_links = []
        for unit in unit_soup.findAll('div', class_="btn white-green icon-right-open-big"):
            for aa in unit.findAll('a', href=True, text=True):
                the_unit_links.append(aa)
        return the_unit_links

When printed, this simply returns []. I'm not sure where I've gone wrong; any help would be appreciated!

Part 2, revised:

def get_units():
    for the_city_links in get_cities():
        unit_sauce = urllib.request.urlopen(the_city_links)
        unit_soup = BeautifulSoup(unit_sauce, 'html.parser')
        the_unit_links = []
        for unit in unit_soup.findAll('div', class_="btn white-green icon-right-open-big"):
            for aa in unit.findAll('a', href=True, text=True):
                the_unit_links.append(aa)
        return the_unit_links
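One way to narrow down why this returns [] is to check which tag actually carries the button class: searching for a 'div' with a class that really sits on an 'a' tag matches nothing. A minimal sketch using only the standard library; the SAMPLE markup here is a hypothetical stand-in for the real page, not taken from the site:

```python
from html.parser import HTMLParser

# Hypothetical markup standing in for part of a city page; the real HTML is unknown.
SAMPLE = (
    '<div class="city-location-menu"><a href="/city/1">City 1</a></div>'
    '<a class="btn white-green icon-right-open-big" href="/unit/1">View unit</a>'
)

class TagClassCounter(HTMLParser):
    """Counts how many times each (tag, class) pair appears in the HTML."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get('class', '')
        key = (tag, cls)
        self.counts[key] = self.counts.get(key, 0) + 1

counter = TagClassCounter()
counter.feed(SAMPLE)
print(counter.counts)
# If the button class lives on <a> tags rather than <div>s, then
# findAll('div', class_="btn white-green icon-right-open-big") matches nothing,
# which would explain the empty list.
```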

Can you show which links you're trying to get? Maybe you're missing something, or maybe you've picked the wrong class. –


I put the URL in 'city_sauce'. I want 'unit_sauce' to store those links in a list, parse them into 'unit_soup', then go into each link and grab the hrefs from the 'div' with class_="btn white-green icon-right-open-big", and then add them to the 'the_unit_links' list to iterate over in my scraper. Any ideas? @PiyushS.Wanare I've slightly revised the second part, see the revision. – Maverick


It would be better if you put the data in a single function. –

Answers

# Crawls main site to get a list of city URLs
def getCityLinks():
    city_sauce = urllib.request.urlopen('the_url')
    city_soup = BeautifulSoup(city_sauce, 'html.parser')
    the_city_links = []

    for city in city_soup.findAll('div', class_="city-location-menu"):
        for a in city.findAll('a', href=True, text=True):
            the_city_links.append('the_url' + a['href'])
    #print(the_city_links)
    return the_city_links

# Crawls each of the city web pages to get a list of unit URLs
def getUnitLinks():
    for the_city_link in getCityLinks():
        unit_sauce = urllib.request.urlopen(the_city_link)
        unit_soup = BeautifulSoup(unit_sauce, 'html.parser')
        the_unit_links = []
        for unit_href in unit_soup.findAll('a', class_="btn white-green icon-right-open-big", href=True):
            the_unit_links.append('the_url' + unit_href['href'])
        yield the_unit_links
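Because this version uses yield, getUnitLinks() returns a generator: no crawling happens until it is iterated, and each iteration produces one city's list of unit links. A sketch of consuming such a generator, with the scraping replaced by a made-up lookup table so the control flow is visible without network access:

```python
def get_unit_links(city_links, pages):
    """Yields one list of unit URLs per city, mirroring the generator above.
    `pages` is a stand-in for the urlopen + BeautifulSoup work."""
    for link in city_links:
        yield pages.get(link, [])

# Made-up crawl data, for illustration only.
pages = {'city/a': ['unit/1', 'unit/2'], 'city/b': ['unit/3']}

# Flatten the per-city lists into one list of unit URLs.
all_units = [u for units in get_unit_links(['city/a', 'city/b'], pages)
             for u in units]
print(all_units)  # ['unit/1', 'unit/2', 'unit/3']
```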

Assuming I understand how you're using this: your function will return after the first link from get_cities(), which may happen to have no units. I think you need to set the_unit_links = [] at the start of the function, and then move the function's return line out one level of indentation, so it only returns once all the links from get_cities have been scraped.
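The fix suggested here (initialize the_unit_links once, return only after the outer loop finishes) can be sketched as follows; the fetching and parsing are stubbed out, so treat this as an outline of the control flow rather than the working scraper:

```python
def get_units(city_links, fetch):
    the_unit_links = []               # created once, before the loop
    for link in city_links:
        # `fetch` stands in for urlopen + BeautifulSoup + findAll.
        the_unit_links.extend(fetch(link))
    return the_unit_links             # returned once, after every city

# Stub fetcher: the first city has no units, which is exactly the case
# that made the original early return produce [].
stub = {'a': [], 'b': ['b/unit1'], 'c': ['c/unit1', 'c/unit2']}
print(get_units(['a', 'b', 'c'], stub.get))
# ['b/unit1', 'c/unit1', 'c/unit2']
```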


Thanks for the suggestion, but unfortunately that returns [] as well! – Maverick

def getLinks():
    city_sauce = urllib.request.urlopen('the_url')
    city_soup = BeautifulSoup(city_sauce, 'html.parser')
    the_city_links = []

    for city in city_soup.findAll('div', class_="city-location-menu"):
        for a in city.findAll('a', href=True, text=True):
            the_city_links.append('first_half_of_url' + a['href'])
    #return the_city_links

    # print the_city_links

    for the_city_link in the_city_links:
        unit_sauce = urllib.request.urlopen(the_city_link)
        unit_soup = BeautifulSoup(unit_sauce, 'html.parser')
        the_unit_links = []
        for unit in unit_soup.findAll('div', class_="btn white-green icon-right-open-big"):
            for aa in unit.findAll('a', href=True, text=True):
                the_unit_links.append(aa)
        return the_unit_links

Note: print the_city_links and check that you get the expected output, then run the second loop to fetch the corresponding unit_links.
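Putting that note into practice, the two stages can be kept separate so each is checkable on its own. A sketch with the network layer stubbed out; in the real code, urlopen and BeautifulSoup would take the place of `fetch`:

```python
def get_city_links(fetch):
    """Stage 1: collect the city URLs from the main page."""
    return fetch('the_url')

def get_unit_links(city_links, fetch):
    """Stage 2: visit each city URL and collect every unit URL."""
    units = []
    for link in city_links:
        units.extend(fetch(link))
    return units

# Stubbed site so each stage can be printed and checked independently.
pages = {'the_url': ['c1', 'c2'], 'c1': ['u1'], 'c2': ['u2', 'u3']}
cities = get_city_links(pages.get)
print(cities)                             # check stage 1 first: ['c1', 'c2']
print(get_unit_links(cities, pages.get))  # then stage 2: ['u1', 'u2', 'u3']
```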