2017-01-22 76 views

I have a list of links, and each link contains multiple pages. I have already found the number of pages in each subcategory, but now I want to write a loop that goes through all the pages of each sub-link. So the first link would have 8 pages, the second link 6 pages, and so on. How do I feed this list into a for loop?

lists = [8, 6, 5, 13, 10, 16, 13, 15, 4, 4, 5, 7, 2, 6, 6, 8, 9, 8, 3, 8, 8, 1, 6, 3, 2, 15, 5, 4, 2, 12, 18, 5, 2] 

import bs4 as bs 
import urllib.request 
import pandas as pd 
import urllib.parse 
import re 


#source = urllib.request.urlopen('https://messageboards.webmd.com/').read() 
source = urllib.request.urlopen('https://messageboards.webmd.com').read() 
soup = bs.BeautifulSoup(source,'lxml') 


df = pd.DataFrame(columns = ['link'],data=[url.a.get('href') for url in soup.find_all('div',class_="link")]) 
lists =[] 
lists2=[] 
lists3=[] 
page_links = [] 


# request each board link and scrape its total item count
for i in range(0, 33): 
    link = df.link.iloc[i] 
    req = urllib.request.Request(link) 
    resp = urllib.request.urlopen(req) 
    respData = resp.read() 
    temp1 = re.findall(r'Filter by</span>(.*?)data-pagedcontenturl', str(respData)) 
    temp1 = re.findall(r'data-totalitems=(.*?)data-pagekey', str(temp1))[0] 
    # roughly 10 items per page, so divide the total to estimate the page count
    pagenum = round(int(re.sub("[^0-9]", "", temp1)) / 10) 
    lists.append(pagenum) 


for j in lists: 
    for y in range(1, j + 1): 
        url_pages = link + '#pi157388622=' + str(j) 
        page_links.append(url_pages) 

So for each 'j' you want to iterate over 'range(1, j + 1)'? – jonrsharpe

Answer


Use a nested loop:

for i in lists: # [8, 6, 5, etc] 
    # now use i for the inner loop 
    for j in range(1, i + 1): # [1-8], [1-6], [1-5], etc 
        url_pages = link + '#pi157388622=' + str(j) 
        # do something with url_pages here, or it will just be
        # overwritten on each iteration
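Putting the pieces together, a minimal sketch of collecting every page URL might look like this. It assumes the page counts line up one-to-one with the links (as in the question's setup); the placeholder links and counts here are hypothetical, not the real WebMD boards:

```python
# Hypothetical links and per-link page counts, standing in for
# df.link and the scraped `lists` from the question.
links = ['https://example.com/board-a', 'https://example.com/board-b']
counts = [3, 2]

page_links = []
# zip pairs each link with its own page count
for link, n in zip(links, counts):
    for j in range(1, n + 1):
        page_links.append(link + '#pi157388622=' + str(j))

print(page_links)  # 3 URLs for board-a, then 2 for board-b
```

Note that `link` is rebound on each outer iteration, so every inner URL is built from the matching link rather than from whatever `link` last held.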

Could you explain a bit more what you mean by your last comment? Do you mean feeding them into another list? – Data1234


@Data1234 Well, the statement 'url_pages = foo' only assigns a value to the local variable 'url_pages' and does nothing else. On the next loop iteration that value is lost, because 'url_pages' gets reassigned a new value. So you should do something with it, e.g. print it or add it to a list... – schwobaseggl
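The difference the comment above describes can be shown with a tiny, self-contained sketch (the 'page-' strings are just illustrative stand-ins for the real URLs):

```python
# Overwriting: only the value from the last iteration survives.
for j in range(1, 4):
    url_pages = 'page-' + str(j)
print(url_pages)  # 'page-3'

# Appending: every value is kept for later use.
pages = []
for j in range(1, 4):
    pages.append('page-' + str(j))
print(pages)  # ['page-1', 'page-2', 'page-3']
```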


It doesn't work, although I may not have explained myself properly. – Data1234