2017-01-01 82 views

Href of listings with Python scraping: I recently posted a question about scraping Yellow Pages listings, and @alecxe's help showed me some new ways to extract data, but I'm stuck again. I want to scrape the href for every business link in the Yellow Pages search results, so that I can reach each business's Yellow Pages page, which has more data. I want to add a variable called "url" and grab the business's href, not the business's actual website, but its Yellow Pages profile page. I've tried all sorts of things, but nothing seems to work. The href sits under `class="business-name"`.

import csv
import requests
from bs4 import BeautifulSoup


with open('cities_louisiana.csv', 'r') as cities:
    lines = cities.read().splitlines()

for city in lines:
    print(city)

base_url = "http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page="

for city in lines:
    for x in range(0, 50):
        print(base_url + str(x))
        page = requests.get(base_url + str(x))
        soup = BeautifulSoup(page.text, "html.parser")
        for result in soup.select(".search-results .result"):
            try:
                name = result.select_one(".business-name").get_text(strip=True, separator=" ")
            except AttributeError:
                pass
            try:
                streetAddress = result.select_one(".street-address").get_text(strip=True, separator=" ")
            except AttributeError:
                pass
            try:
                city = result.select_one(".locality").get_text(strip=True, separator=" ")
                city = city.replace(",", "")
                state = "LA"
                zip = result.select_one('span[itemprop$="postalCode"]').get_text(strip=True, separator=" ")
            except AttributeError:
                pass

            try:
                telephone = result.select_one(".phones").get_text(strip=True, separator=" ")
            except AttributeError:
                telephone = "No Telephone"
            try:
                categories = result.select_one(".categories").get_text(strip=True, separator=" ")
            except AttributeError:
                categories = "No Categories"
            completeData = name, streetAddress, city, state, zip, telephone, categories
            print(completeData)
            with open("yellowpages_businesses_louisiana.csv", "a", newline="") as out:
                csv.writer(out).writerow(completeData)
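For the specific question of grabbing the href, BeautifulSoup tags support dictionary-style attribute access, and `urljoin()` turns the relative path into an absolute URL. A minimal sketch against a hypothetical markup snippet (the HTML below is an illustration mirroring the selectors used above, not actual Yellow Pages output):

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup

# Hypothetical HTML snippet mirroring the result markup used above.
html = """
<div class="search-results organic">
  <div class="result">
    <a class="business-name" href="/baton-rouge-la/mip/some-business-12345">
      <span>Some Business</span>
    </a>
  </div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for result in soup.select(".search-results .result"):
    link_element = result.select_one("a.business-name")
    # BeautifulSoup tags support dictionary-style attribute access
    relative_url = link_element["href"]
    # urljoin makes the relative path absolute against the site root
    url = urljoin("http://www.yellowpages.com", relative_url)
    print(url)
```

With a real results page, `page.url` from the `requests` response would be the natural base for `urljoin` instead of a hard-coded host.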

Answer

Multiple things you should implement:

  • extract the href attribute of the business link element with the business-name class; in BeautifulSoup this can be done by treating the element like a dictionary
  • make the link absolute using urljoin()
  • make a request to the business page while maintaining a web-scraping session
  • parse the business page with BeautifulSoup as well and extract the desired information
  • add a time delay to avoid hitting the site too often

Complete working example that prints business names from the search results page and business descriptions from the business profile pages:

from urllib.parse import urljoin 

import requests 
import time 
from bs4 import BeautifulSoup 


url = "http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page=1" 


with requests.Session() as session: 
    session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'} 

    page = session.get(url) 
    soup = BeautifulSoup(page.text, "html.parser") 
    for result in soup.select(".search-results .result"):
        business_name_element = result.select_one(".business-name")
        name = business_name_element.get_text(strip=True, separator=" ")

        link = urljoin(page.url, business_name_element["href"])

        # extract additional business information from the profile page
        business_page = session.get(link)
        business_soup = BeautifulSoup(business_page.text, "html.parser")
        description = business_soup.select_one("dd.description").text

        print(name, description)

        time.sleep(1)  # time delay to not hit the site too often
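To persist the scraped fields the way the question does, the output CSV can be opened once before the loop instead of being reopened for every row. A sketch reusing the question's filename; the sample row here is hypothetical, standing in for values produced by the loop above:

```python
import csv

# Hypothetical rows standing in for (name, link, description) tuples
# produced by the scraping loop.
rows = [
    ("Some Business",
     "http://www.yellowpages.com/baton-rouge-la/mip/some-business-12345",
     "A hypothetical description."),
]

# Open the file once and append one row per business.
with open("yellowpages_businesses_louisiana.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for name, link, description in rows:
        writer.writerow((name, link, description))
```

Opening the file once avoids the per-row open/close overhead, and the `with` block closes it automatically, so no explicit `close()` call is needed.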
+1

Very nice! I'm still quite new to Python and programming in general. Your solution worked great, and I only made some small changes by adding `business_name_element = result.select_one(".business-name")` and `link = urljoin(page.url, business_name_element["href"])` to my own script. As I read your code I'll reverse-engineer it so it all makes sense. Thanks for your support!