
Python, mechanize - request disallowed by robots.txt even after set_handle_robots and add_headers

I have made a web crawler which gets every link down to the first level of a page, and from each of those links it gets all links and text, plus image links and alt text. It works until the crawler reaches a facebook link, which it cannot read. Here is the whole code:

import urllib 
import re 
import time 
from threading import Thread 
import MySQLdb 
import mechanize 
import readability 
from bs4 import BeautifulSoup 
from readability.readability import Document 
import urlparse 

url = ["http://sparkbrowser.com"]

i = 0

while i < len(url):

    counterArray = [0]

    levelLinks = []
    linkText = ["homepage"]
    levelLinks = []

    def scraper(root, steps):
        urls = [root]
        visited = [root]
        counter = 0
        while counter < steps:
            step_url = scrapeStep(urls)
            urls = []
            for u in step_url:
                if u not in visited:
                    urls.append(u)
                    visited.append(u)
                    counterArray.append(counter + 1)
            counter += 1
        levelLinks.append(visited)
        return visited

    def scrapeStep(root):
        result_urls = []
        br = mechanize.Browser()
        br.set_handle_robots(False)
        br.set_handle_equiv(False)
        br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]

        for url in root:
            try:
                br.open(url)

                for link in br.links():
                    newurl = urlparse.urljoin(link.base_url, link.url)
                    result_urls.append(newurl)
                    #levelLinks.append(newurl)
            except:
                print "error"
        return result_urls


    scraperOut = scraper(url[i], 1)

    for sl, ca in zip(scraperOut, counterArray):
        print "\n\n", sl, " Level - ", ca, "\n"

        #Mechanize
        br = mechanize.Browser()
        page = br.open(sl)
        br.set_handle_robots(False)
        br.set_handle_equiv(False)
        br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
        #BeautifulSoup
        htmlcontent = page.read()
        soup = BeautifulSoup(htmlcontent)


        for linkins in br.links(text_regex=re.compile('^((?!IMG).)*$')):
            newesturl = urlparse.urljoin(linkins.base_url, linkins.url)
            linkTxt = linkins.text
            print newesturl, linkTxt

        for linkwimg in soup.find_all('a', attrs={'href': re.compile("^http://")}):
            imgSource = linkwimg.find('img')
            if linkwimg.find('img', alt=True):
                imgLink = linkwimg['href']
                #imageLinks.append(imgLink)
                imgAlt = linkwimg.img['alt']
                #imageAlt.append(imgAlt)
                print imgLink, imgAlt
            elif linkwimg.find('img', alt=False):
                imgLink = linkwimg['href']
                #imageLinks.append(imgLink)
                imgAlt = ['No Alt']
                #imageAlt.append(imgAlt)
                print imgLink, imgAlt

    i += 1

Everything works great, but it gives me this error:

httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

for line 68, which is: page = br.open(sl)

and I don't know why, because as you can see I have set mechanize's set_handle_robots and add_headers options.
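For reference, here is a stripped-down sketch of the order of calls I use in scrapeStep, where br.open does not hit this error; make_browser is just a hypothetical helper name for this sketch, and the point is that set_handle_robots(False) and addheaders are applied before the first br.open call:

import mechanize

def make_browser():
    # hypothetical helper: configure the browser fully before any open() call
    br = mechanize.Browser()
    br.set_handle_robots(False)  # do not fetch or obey robots.txt
    br.set_handle_equiv(False)
    br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
    return br

br = make_browser()
page = br.open("http://sparkbrowser.com")  # open only after the handlers and headers are set
html = page.read()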

I don't know why that is, but I noticed that I get the error for facebook links, in this case facebook.com/sparkbrowser, and for Google.
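To at least keep the crawl going past those pages, a minimal guard around the open call would look roughly like this (a sketch, assuming the httperror_seek_wrapper raised here is caught by mechanize.HTTPError; the URL list is just an example):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.set_handle_equiv(False)

for sl in ["http://sparkbrowser.com", "http://facebook.com/sparkbrowser"]:  # example URLs
    try:
        page = br.open(sl)
    except mechanize.HTTPError, e:
        # skip pages that still answer with 403 (or any other HTTP error)
        print "skipping", sl, "-", e.code
        continue
    print sl, "opened,", len(page.read()), "bytes"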

Any help or advice is welcome.

Cheers


Possible duplicate of [Why is mechanize throwing a HTTP 403 error?](http://stackoverflow.com/questions/17938366/why-is-mechanize-throwing-a-http-403-error) – andrean


I already have ('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'), ('Accept', 'text/('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*; ('Accept-Encoding', 'none'), ('Accept-Language', 'en-US,en;q=0.8'), ('Connection', 'keep-alive'), but I get the same error – dzordz
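For what it's worth, a sketch of how header tuples like the ones listed above get attached in mechanize (only the headers whose values are intact in the comment are included, since the Accept and Accept-Charset values are cut off there):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
# addheaders is a list of (name, value) tuples sent with every request
br.addheaders = [
    ('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'),
    ('Accept-Encoding', 'none'),
    ('Accept-Language', 'en-US,en;q=0.8'),
    ('Connection', 'keep-alive'),
]
response = br.open("http://sparkbrowser.com")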


No... that could be the problem... how do I set it? – dzordz

Answer