Any help would be appreciated, as I'm new to Python. I created the web crawler below, but it only scrapes 2 pages instead of all of them. What changes does it need to crawl every page? My BeautifulSoup spider stops after 2 pages.
See the loop in def trade_spider(max_pages); at the bottom I call trade_spider(18), which should loop through all the pages.
Thanks for your help.
import csv
import requests
from bs4 import BeautifulSoup

f = open('dataoutput.csv', 'w', newline='')
writer = csv.writer(f)

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        # The pn query parameter selects the results page.
        url = ('http://www.zoopla.co.uk/for-sale/property/nottingham/'
               '?price_max=200000&identifier=nottingham&q=Nottingham'
               '&search_source=home&radius=0&pn=' + str(page) + '&page_size=100')
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, 'html.parser')
        # Each listing link on the results page carries this class.
        for link in soup.findAll('a', {'class': 'listing-results-price text-price'}):
            href = "http://www.zoopla.co.uk" + link.get('href')
            title = link.string
            get_single_item_data(href)
        page += 1

def get_single_item_data(item_url):
    source_code = requests.get(item_url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text, 'html.parser')
    # Pull the street address from the listing's detail page.
    for item_name in soup.findAll('h2', {'itemprop': 'streetAddress'}):
        address = item_name.get_text(strip=True)
        writer.writerow([address])

trade_spider(18)
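One way to narrow the problem down is to check what URL each iteration of the while loop actually requests, without fetching anything. The sketch below is an assumption-laden refactor of the question's code: it pulls the URL construction (the pn page-number parameter and the base query string, both taken verbatim from the code above) into a helper so each page's URL can be printed and pasted into a browser to confirm that, say, page 3 really returns different listings than page 2.

```python
# Sketch: isolate URL construction so pagination can be inspected
# without any network access. build_url is a hypothetical helper,
# not part of the original code; the base URL and the `pn` parameter
# come from the question's trade_spider function.
BASE = ('http://www.zoopla.co.uk/for-sale/property/nottingham/'
        '?price_max=200000&identifier=nottingham&q=Nottingham'
        '&search_source=home&radius=0&page_size=100')

def build_url(page):
    """Return the search-results URL for a given page number."""
    return BASE + '&pn=' + str(page)

# Print what the first few loop iterations would request, so each URL
# can be checked by hand before the spider runs.
for page in range(1, 4):
    print(build_url(page))
```

If every generated URL is correct but the spider still stops early, the next thing to log inside the loop would be `source_code.status_code` and the number of links found per page, which distinguishes "the site returned no more results" from "the request was blocked".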
Does an error occur, or does it exit cleanly? Does the 'page' variable reach 18, or only 2? –