Python: getting only the href link content from Google search results

How do I get only the list of LINKS as output? I have tried other solutions, both with BeautifulSoup and with other parsers, but they still give me a result very similar to what I'm currently getting, which is the href of the link plus the anchor text. I tried using urlparse as some older answers suggested, but that module doesn't seem to be in use any more, and I'm confused by the whole thing. Here is my code, which currently outputs the links and the anchor text, which is not what I want:
import re

import requests
from bs4 import BeautifulSoup

headers = {'User-agent': 'Mozilla/5.0'}
page = requests.get('https://www.google.com/search?q=Tesla', headers=headers)
soup = BeautifulSoup(page.content, 'lxml')

serpUrls = []
# Collect every <a> whose href matches Google's /url?q=... redirect pattern
for link in soup.find_all("a", href=re.compile(r"(?<=/url\?q=)(htt.*://.*)")):
    # print(re.split(":(?=http)", link["href"].replace("/url?q=", "")))
    serpUrls.append(link)

print(serpUrls[0:2])
# Attempt to pull bare URLs out of the raw tag text with a catch-all URL regex
xmasRegex = re.compile(r"""((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|(([^\s()<>]+|(([^\s()<>]+)))*))+(?:(([^\s()<>]+|(([^\s()<>]+)))*)|[^\s`!()\[\]{};:'".,<>?«»“”‘’]))""", re.DOTALL)
mo = xmasRegex.findall('[<a href="/url?q=https://www.teslamotors.com/&sa=U&ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQFggUMAA&usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg"><b>Tesla</b> Motors | Premium Electric Vehicles</a>, <a class="_Zkb" href="/url?q=http://webcache.googleusercontent.com/search%3Fq%3Dcache:rzPQodkDKYYJ:https://www.teslamotors.com/%252BTesla%26gws_rd%3Dcr%26hl%3Des%26%26ct%3Dclnk&sa=U&ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQIAgXMAA&usg=AFQjCNEZ40VWO_fFDjXH09GakUOgODNlHg">En caché</a>]')
print(mo)
I only want "http://urloflink.com", not the whole line of code. Any way to do this? Thanks!

The output looks like this:
[<a href="/url?q=https://www.teslamotors.com/&sa=U&ved=0ahUKEwjI39vl2_TKAhXFWxoKHRX-CFgQFggUMAA&usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg"><b>Tesla</b> Motors | Premium Electric Vehicles</a>, <a class="_Zkb" href="/url?q=http://webcache.googleusercontent.com/search%3Fq%3Dcache:rzPQodkDKYYJ:https://www.teslamotors.com/%252BTesla%26gws_rd%3Dcr%26hl%3Des%26%26ct%3Dclnk&sa=U&ved=0ahUKEwjI39vl2_TKAhXFWxoKHRX-CFgQIAgXMAA&usg=AFQjCNEZ40VWO_fFDjXH09GakUOgODNlHg">En caché</a>]
[('https://www.teslamotors.com/&sa=U&ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQFggUMAA&usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg"', '', '', '', '', '', '', '', ''), ('http://webcache.googleusercontent.com/search%3Fq%3Dcache:rzPQodkDKYYJ:https://www.teslamotors.com/%252BTesla%26gws_rd%3Dcr%26hl%3Des%26%26ct%3Dclnk&sa=U&ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQIAgXMAA&usg=AFQjCNEZ40VWO_fFDjXH09GakUOgODNlHg"', '', '', '', '', '', '', '', '')]
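(A note on the urlparse confusion above: in Python 3 the old `urlparse` module was folded into the standard library as `urllib.parse`, so it is still available. As a minimal sketch, not a full answer, this is how `urllib.parse` could strip Google's `/url?q=...` wrapper from one of the hrefs shown in the output above:)

```python
from urllib.parse import parse_qs, urlparse

def target_url(href):
    """Return the value of the q= parameter from a /url?q=... redirect href,
    or None if the href has no q= parameter."""
    return parse_qs(urlparse(href).query).get("q", [None])[0]

href = ("/url?q=https://www.teslamotors.com/&sa=U"
        "&ved=0ahUKEwjvzrTyxvTKAhXHWRoKHUjlBxwQFggUMAA"
        "&usg=AFQjCNG1nvN_Z0knKTtEah3whTIObUAhcg")
print(target_url(href))  # https://www.teslamotors.com/
```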
Are you really using [regex to parse HTML?](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454) – zondo
I'm a newbie, so I'm using what I guessed was the best solution, but I suspect it isn't, which is why I'm asking. I'm sure there is a better way or some module that makes this easier. I tried installing the GoogleScraper module, but for some reason neither PyCharm nor pip could install it on my machine. – skeitel
I also tried this, and it didn't get me what I need either: `results = driver.find_elements_by_css_selector('div.g')`, `link = results[0].find_element_by_tag_name("a")`, `href = link.get_attribute("href")` – skeitel