Scraping hidden elements with BeautifulSoup

I am trying to scrape data from a website for my project. The problem is that the tags I can see in the developer-tools panel do not show up in my output. Below is a snapshot of the part of the DOM I want to scrape:

<div class="bigContainer"> 
     <!-- ngIf: products.grid_layout.length > 0 --><div ng-if="products.grid_layout.length > 0"> 
     <div class="fl"> 
      <!-- ngRepeat: product in products.grid_layout --><!-- ngIf: $index%3==0 --> 
      <div ng-repeat="product in products.grid_layout" ng-if="$index%3==0" class="GridItems"> 
      <grid-item product="product" gakey="ga_key" idx="$index" ancestors="products.ancestors" is-search-item="isSearchItem" is-filter="isFilter"> 
       <a ng-href="/shop/p/nokia-lumia-930-black-MOBNOKIA-LUMIA-SRI-673652FB190B4?psearch=organic|undefined|lumia 930|grid" ng-click="searchProductTrack(product, idx+1)" tabindex="0" href="/shop/p/nokia-lumia-930-black-MOBNOKIA-LUMIA-SRI-673652FB190B4?psearch=organic|undefined|lumia 930|grid" class="" style=""> 
      </grid-item> 

I am able to get the div tag with class "bigContainer", but I cannot scrape the tags nested inside it. For example, if I try to fetch the grid-item tags, I get an empty list, which means no such tag was found. Why is this happening? Please help!!

Please share the code you have written so far. – JRodDynamite

r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") plink = soup.find_all("div", {"class": "f1"})[0].find_all("grid-item")[0] – Nain

Check the HTML that is passed to 'BeautifulSoup' (i.e. 'r.content'). It may differ from the HTML shown by the developer toolbar. If it is missing the '<grid-item>' tag, JavaScript is probably being used to insert the content into the page. If that is the case, you will need a [JavaScript-capable browser such as Selenium](http://stackoverflow.com/q/17436014/190597) to fetch the content. – unutbu
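
A quick way to run that check (a minimal sketch; the URL is a hypothetical stand-in for the search page in the question) is to look for the tag in the raw response:

import requests
from bs4 import BeautifulSoup

# Hypothetical stand-in for the search page from the question.
url = "https://paytm.com/shop/search?q=lumia%20930"

r = requests.get(url)

# Does the raw HTML handed to BeautifulSoup contain the tag at all?
print("grid-item" in r.text)          # likely False: the tag is inserted by JavaScript

soup = BeautifulSoup(r.content, "html.parser")
print(soup.find("grid-item"))         # None, for the same reason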

Answers

The grid items are rendered by the AngularJS JavaScript framework, so the HTML is not static; you can use the underlying web API to extract the grid-item details instead.

One way to get the data is to use Selenium, but it is much simpler to identify the web API with the browser's developer tools.
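
For completeness, a Selenium-based version could look roughly like the sketch below (assuming a Firefox WebDriver is installed; the search URL is a hypothetical placeholder):

import time

from bs4 import BeautifulSoup
from selenium import webdriver

# Hypothetical search URL; substitute the page you are scraping.
url = "https://paytm.com/shop/search?q=lumia%20930"

driver = webdriver.Firefox()
driver.get(url)
time.sleep(5)                      # crude wait for AngularJS to render; WebDriverWait is more robust
html = driver.page_source          # HTML after JavaScript has run
driver.quit()

soup = BeautifulSoup(html, "html.parser")
for item in soup.find_all("grid-item"):
    link = item.find("a")
    if link is not None:
        print(link.get("href"))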

Edit: I used the Firebug add-on with Firefox and looked at the requests made by the page from the "Net" tab.

[Screenshot: Firebug Net panel showing the page's requests]

The GET request made by the page is:

https://catalog.paytm.com/v1//g/electronics/mobile-accessories/mobiles/smart-phones?page_count=1&items_per_page=30&resolution=960x720&quality=high&sort_popular=1&cat_tree=1&callback=angular.callbacks._3&channel=web&version=2
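
For readability, here is the same request expressed with a requests params dict; every value is copied from the URL above:

import requests

# Same endpoint as above, with the query string broken out into a params dict.
url = "https://catalog.paytm.com/v1//g/electronics/mobile-accessories/mobiles/smart-phones"
params = {
    "page_count": 1,
    "items_per_page": 30,
    "resolution": "960x720",
    "quality": "high",
    "sort_popular": 1,
    "cat_tree": 1,
    "callback": "angular.callbacks._3",
    "channel": "web",
    "version": 2,
}
response = requests.get(url, params=params)
print(response.status_code)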

The request returns a JSONP response (a JavaScript callback) whose payload is almost entirely JSON data.

The JSON it returns contains the details of the grid items; each grid item is described by a JSON object like the one below:

{ 
     "product_id": 23491960, 
     "complex_product_id": 7287171, 
     "name": "Samsung Galaxy Z1 (Black)", 
     "short_desc": "", 
     "bullet_points": { 
      "salient_feature": ["Screen: 10.16 cm (4\")", "Camera: 3.1 MP Rear/VGA Front", "RAM: 768 MB", "ROM: 4 GB", "Dual-core 1.2 GHz Cortex-A7", "Battery: 1500 mAh/Li-Ion"] 
     }, 
     "url": "https://catalog.paytm.com/v1/p/samsung-z1-black-MOBSAMSUNG-Z1-BSMAR2320696B3C745", 
     "seourl": "https://catalog.paytm.com/v1/p/samsung-z1-black-MOBSAMSUNG-Z1-BSMAR2320696B3C745", 
     "url_type": "product", 
     "promo_text": null, 
     "image_url": "https://assetscdn.paytm.com/images/catalog/product/M/MO/MOBSAMSUNG-Z1-BSMAR2320696B3C745/2.jpg", 
     "vertical_id": 18, 
     "vertical_label": "Mobile", 
     "offer_price": 5090, 
     "actual_price": 5799, 
     "merchant_name": "SMARTBUY", 
     "authorised_merchant": false, 
     "stock": true, 
     "brand": "Samsung", 
     "tag": "+5% Cashback", 
     "product_tag": "+5% Cashback", 
     "shippable": true, 
     "created_at": "2015-09-17T08:28:25.000Z", 
     "updated_at": "2015-12-29T05:55:29.000Z", 
     "img_width": 400, 
     "img_height": 400, 
     "discount": "12" 
    } 

So you can get the details in the following way, without even using BeautifulSoup:

import requests 
import json 

response = requests.get("https://catalog.paytm.com/v1//g/electronics/mobile-accessories/mobiles/smart-phones?page_count=1&items_per_page=30&resolution=960x720&quality=high&sort_popular=1&cat_tree=1&callback=angular.callbacks._3&channel=web&version=2") 
jsonResponse = ((response.text.split('angular.callbacks._3('))[1].split(');')[0]) 
data = json.loads(jsonResponse) 
print(data["grid_layout"]) 
grid_data = data["grid_layout"] 

for grid_item in grid_data: 
    print("Brand:", grid_item["brand"]) 
    print("Product Name:", grid_item["name"]) 
    print("Current Price: Rs", grid_item["offer_price"]) 
    print("==================") 

You will get output like the following:

Brand: Samsung 
Product Name: Samsung Galaxy Z1 (Black) 
Current Price: Rs 4990 
================== 
Brand: Samsung 
Product Name: Samsung Galaxy A7 (Gold) 
Current Price: Rs 22947 
================== 

Hope this helps.

You can use a "User-Agent" header to get the full data. Try something like this:

Document doc = Jsoup.connect(url).userAgent("Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0").timeout(10*1000).get();
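
If you are using Python's requests instead of Jsoup, the equivalent idea (a sketch; the URL is a placeholder) is to send the same User-Agent header explicitly:

import requests

# Placeholder URL; substitute the page you are fetching.
url = "https://paytm.com/shop/search?q=lumia%20930"

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0"
}
r = requests.get(url, headers=headers, timeout=10)
print(r.status_code)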