
Download all CSV files from a URL: I want to download all of the CSV files linked from this page. Any idea how I can do that?

from bs4 import BeautifulSoup 
import requests 
url = requests.get('http://www.football-data.co.uk/englandm.php').text 
soup = BeautifulSoup(url) 
for link in soup.findAll("a"): 
    print link.get("href") 

Do you mean that you want to download all of the CSV files that are linked from one page? I don't think it's a bad idea to iterate over all of the links and check the file extension. – martijnn2008

Answers


Something like this should work:

from bs4 import BeautifulSoup
from time import sleep
import requests


if __name__ == '__main__':
    url = requests.get('http://www.football-data.co.uk/englandm.php').text
    soup = BeautifulSoup(url, "html.parser")
    for link in soup.findAll("a"):
        current_link = link.get("href")
        # Skip anchors without an href and keep only links ending in csv.
        if current_link and current_link.endswith('csv'):
            print('Found CSV: ' + current_link)
            print('Downloading %s' % current_link)
            sleep(10)  # courtesy pause between downloads
            response = requests.get('http://www.football-data.co.uk/%s' % current_link, stream=True)
            # Relative links look like "mmz4281/1617/E0.csv"; build a flat
            # local filename such as "mmz4281_1617_E0.csv".
            fn = current_link.split('/')[0] + '_' + current_link.split('/')[1] + '_' + current_link.split('/')[2]
            with open(fn, "wb") as handle:
                for data in response.iter_content():
                    handle.write(data)
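
The filename line assumes that each relative link has exactly three path segments, e.g. mmz4281/1617/E0.csv, which matches how the links on that page are structured. If you want to be defensive about links of a different depth, a small hedged variant (not part of the original answer) is:

# Join every non-empty path segment, so a link of any depth still yields
# a flat local filename instead of raising an IndexError.
fn = '_'.join(part for part in current_link.split('/') if part)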

You just need to filter the hrefs, which you can do with the CSS selector a[href$='.csv']; it finds every href ending in .csv. Then join each one to the base URL, request it, and finally write out the content:

from bs4 import BeautifulSoup
import requests
from urlparse import urljoin
from os.path import basename

base = "http://www.football-data.co.uk/"
url = requests.get('http://www.football-data.co.uk/englandm.php').text
soup = BeautifulSoup(url, "html.parser")
# Select only anchors whose href ends in ".csv", join each relative link to
# the base URL, then download it and save it under its original filename.
for link in (urljoin(base, a["href"]) for a in soup.select("a[href$='.csv']")):
    with open(basename(link), "wb") as f:
        f.write(requests.get(link).content)

That will give you five files, E0.csv, E1.csv, E2.csv, E3.csv and E4.csv, with all the data inside.
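
Note that urlparse only exists on Python 2; on Python 3 it was moved to urllib.parse. A minimal sketch of the same approach for Python 3 (the URL and the CSS selector come from the answer above, the rest is just the straightforward port):

from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin  # urlparse was renamed in Python 3
from os.path import basename

base = "http://www.football-data.co.uk/"
html = requests.get(base + "englandm.php").text
soup = BeautifulSoup(html, "html.parser")

# Same idea: select the .csv links, resolve them against the base URL,
# then save each one under its original filename as raw bytes.
for link in (urljoin(base, a["href"]) for a in soup.select("a[href$='.csv']")):
    with open(basename(link), "wb") as f:
        f.write(requests.get(link).content)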