2013-03-28 84 views
-2

Here is the code. Why is the exception being thrown at my `throw` statement?

catch (WebException ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       return csFiles; 
      } 
      catch (Exception ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       throw; 
      } 

The exception occurs at the `throw`; the exception message is: NullReferenceException: Object reference not set to an instance of an object.

System.NullReferenceException was unhandled by user code 
    HResult=-2147467261 
    Message=Object reference not set to an instance of an object. 
    Source=GatherLinks 
    StackTrace: 
     at GatherLinks.TimeOut.getHtmlDocumentWebClient(String url, Boolean useProxy, String proxyIp, Int32 proxyPort, String usename, String password) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\TimeOut.cs:line 55 
     at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151 
     at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151 
     at GatherLinks.WebCrawler.webCrawler(String mainUrl, Int32 levels) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\WebCrawler.cs:line 151 
     at GatherLinks.BackgroundWebCrawling.secondryBackGroundWorker_DoWork(Object sender, DoWorkEventArgs e) in d:\C-Sharp\GatherLinks\GatherLinks-2\GatherLinks\GatherLinks\BackgroundWebCrawling.cs:line 82 
     at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e) 
     at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) 
    InnerException: 

Here is the `try` code, inside the webCrawler function:

public List<string> webCrawler(string mainUrl, int levels) 
     { 

      busy.WaitOne(); 


      HtmlWeb hw = new HtmlWeb(); 
      List<string> webSites; 
      List<string> csFiles = new List<string>(); 

      csFiles.Add("temp string to know that something is happening in level = " + levels.ToString()); 
      csFiles.Add("current site name in this level is : " + mainUrl); 

      try 
      { 
       HtmlAgilityPack.HtmlDocument doc = TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", ""); 
       done = true; 

       Object[] temp_arr = new Object[8]; 
       temp_arr[0] = csFiles; 
       temp_arr[1] = mainUrl; 
       temp_arr[2] = levels; 
       temp_arr[3] = currentCrawlingSite; 
       temp_arr[4] = sitesToCrawl; 
       temp_arr[5] = done; 
       temp_arr[6] = wccfg.failedUrls; 
       temp_arr[7] = failed; 

       OnProgressEvent(temp_arr); 


       currentCrawlingSite.Add(mainUrl); 
       webSites = getLinks(doc); 
       removeDupes(webSites); 
       removeDuplicates(webSites, currentCrawlingSite); 
       removeDuplicates(webSites, sitesToCrawl); 
       if (wccfg.removeext == true) 
       { 
        for (int i = 0; i < webSites.Count; i++) 
        { 
         webSites.Remove(removeExternals(webSites,mainUrl,wccfg.localy)); 
        } 
       } 
       if (wccfg.downloadcontent == true) 
       { 
        retwebcontent.retrieveImages(mainUrl); 
       } 

       if (levels > 0) 
        sitesToCrawl.AddRange(webSites); 



       if (levels == 0) 
       { 
        return csFiles; 
       } 
       else 
       { 


        for (int i = 0; i < webSites.Count(); i++) 
        { 


         if (wccfg.toCancel == true) 
         { 
          return new List<string>(); 
         } 
         string t = webSites[i]; 
         if ((t.StartsWith("http://") == true) || (t.StartsWith("https://") == true)) 
         { 
          csFiles.AddRange(webCrawler(t, levels - 1)); 
         } 

        } 
        return csFiles; 
       } 



      } 

      catch (WebException ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       return csFiles; 
      } 
      catch (Exception ex) 
      { 
       failed = true; 
       wccfg.failedUrls++; 
       throw; 
      } 
     } 

This is how I'm using wccfg at the top of the class:

private System.Threading.ManualResetEvent busy; 
     WebcrawlerConfiguration wccfg; 
     List<string> currentCrawlingSite; 
     List<string> sitesToCrawl; 
     RetrieveWebContent retwebcontent; 
     public event EventHandler<WebCrawlerProgressEventHandler> ProgressEvent; 
     public bool done; 
     public bool failed; 

     public WebCrawler(WebcrawlerConfiguration webcralwercfg) 
     { 
      failed = false; 
      done = false; 
      currentCrawlingSite = new List<string>(); 
      sitesToCrawl = new List<string>(); 
      busy = new System.Threading.ManualResetEvent(true); 
      wccfg = webcralwercfg; 
     } 
+6

Please include the 'try' portion of this code. – Inisheer

+0

Are you sure 'wccfg' isn't null in any of your 'catch' blocks? –

+4

Have you considered that this might be the exception you *caught*? What did you expect *throw* to do, other than rethrow it? –

Answers

3

You are getting the NullReferenceException because something was not initialized before you used it in your try block.

The code then enters your catch (Exception ex) block, which increments the counter, sets failed = true, and then rethrows the NullReferenceException.
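A minimal sketch of what this answer describes (the `LoadDocument` helper is hypothetical): `throw;` rethrows the same exception object after the bookkeeping runs, so the stack trace still points back into the try block where the fault actually originated:

```csharp
using System;

class RethrowDemo
{
    // Hypothetical stand-in for a call like TimeOut.getHtmlDocumentWebClient:
    // it dereferences a null reference, producing a NullReferenceException.
    static string LoadDocument()
    {
        string doc = null;
        return doc.Trim();   // the exception originates here
    }

    static void Main()
    {
        try
        {
            try
            {
                LoadDocument();
            }
            catch (Exception)
            {
                // Bookkeeping first (counter, failed flag), then rethrow
                // the SAME exception with `throw;`.
                Console.WriteLine("counter incremented, failed = true");
                throw;
            }
        }
        catch (NullReferenceException ex)
        {
            // The rethrown exception still carries the LoadDocument frame.
            Console.WriteLine(ex.StackTrace);
        }
    }
}
```

Note the difference from `throw ex;`, which would reset the stack trace to the catch block and hide where the fault came from.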

1

Something is failing inside this function call:

TimeOut.getHtmlDocumentWebClient(mainUrl, false, "", 0, "", "") 

The reason the debugger stops at your throw statement is that you caught the original exception, hiding it from the debugger. Set your debugger options to break on first-chance exceptions; then you will see where the exception really comes from and be able to inspect your variables, etc.

It is usually a good idea to #if away any catch-all exception handlers during debugging, because they swallow a lot of important error information. For what you are doing, it would probably be better to use try/finally anyway.
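A sketch of that suggestion, with hypothetical names: the catch-all is compiled only into release builds, while the try/finally bookkeeping runs in both, so a debug build lets the debugger break at the real fault:

```csharp
using System;
using System.Collections.Generic;

class CrawlSketch
{
    static List<string> Crawl(string url)
    {
        var results = new List<string>();
        bool failed = false;
        try
        {
            // ... fetch and parse `url` here ...
            results.Add(url);
            return results;
        }
#if !DEBUG
        catch (Exception)   // release builds only: record the failure and move on
        {
            failed = true;
            return results;
        }
#endif
        finally
        {
            // Per-URL bookkeeping always runs, whether or not the fetch threw.
            Console.WriteLine($"{url}: failed = {failed}");
        }
    }

    static void Main() => Crawl("http://example.com");
}
```

In a DEBUG build the catch clause is compiled out entirely, so any exception escapes to the debugger unswallowed; in a release build it is counted and suppressed as before.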
