I want to crawl through a list of URLs and check whether certain specific links exist on those pages. I wrote the program below and it works, but I am stuck on two points:
- Instead of hard-coding the links in a list, how can I read them from a text file? (See the sketch after this list.)
- The crawler takes 4 minutes to crawl 100 pages. Is there a way to make it faster?
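For the first point, a minimal sketch, assuming a file named links.txt with one URL per line (the filename and one-URL-per-line format are just assumptions):

# Read the pages to crawl from a text file instead of a hard-coded list.
# Assumes "links.txt" contains one URL per line; blank lines are skipped.
with open("links.txt") as f:
    url_list = [line.strip() for line in f if line.strip()]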
from bs4 import BeautifulSoup
import urllib2
import threading
import time

start = time.time()

# Links I want to find
target_urls = ["example.com/one", "example.com/two", "example.com/three"]

# Links I want to find the above links in...
# (note: urllib2.urlopen needs full URLs including the scheme, e.g. "http://...")
url_list = ["example.com/1000", "example.com/1001", "example.com/1002",
            "example.com/1003", "example.com/1004"]

print_lock = threading.Lock()

#with open("links.txt") as f:
#    url_list = [line.strip() for line in f if line.strip()]

def fetch_url(page_url):
    # Each thread handles exactly one page; looping over the whole
    # url_list here would make every thread repeat the same work.
    with print_lock:
        print "Crawled " + page_url
    try:
        html_page = urllib2.urlopen(page_url)
        soup = BeautifulSoup(html_page)
        links = soup.findAll(href=True)
    except urllib2.HTTPError:
        return  # skip pages that fail instead of falling through with stale data
    for link in links:
        href = link.get("href")
        for target in target_urls:
            if target in href:
                with print_lock:
                    print "Found " + target + " in " + page_url

# One thread per page to crawl
threads = [threading.Thread(target=fetch_url, args=(page_url,))
           for page_url in url_list]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print "Entire job took:", time.time() - start
Thanks to Corey's example. Always struggling with that. –
I have edited the code based on several examples. The program is much faster now, but the output prints the same answer multiple times, and sometimes the output is wrong. I used the Lock() to prevent that... it is not working. I have not gotten the hang of multithreading yet. Any help here is greatly appreciated. Thanks in advance. –
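One way to get both speed and one-fetch-per-page is a thread pool instead of manual threads. A sketch using multiprocessing.dummy from the standard library, reusing fetch_url from above; the pool size of 10 is an arbitrary starting point, not something from the original post:

# Thread-pool variant: map() hands each URL to exactly one worker,
# so no page is crawled twice and at most 10 fetches run at a time.
from multiprocessing.dummy import Pool  # threads, despite the module name

pool = Pool(10)
pool.map(fetch_url, url_list)  # blocks until every page is processed
pool.close()
pool.join()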