Yahoo fetches some of its data with the request in the script below. If you look at the Network tab of your browser's developer tools and refresh the page for the NOA stock, you should see a request like 'NOA?formatt ...'. Click on it and inspect the response object to see some of the data. You will need the requests module for the script below to work: pip install requests
# get_mean_recs.py
import csv
import sys
from datetime import datetime

import requests

# UTC date stamp for each output row.
get_date = lambda: datetime.utcnow().strftime('%d-%m-%Y')

lhs_url = 'https://query2.finance.yahoo.com/v10/finance/quoteSummary/'
rhs_url = '?formatted=true&crumb=swg7qs5y9UP&lang=en-US&region=US&' \
          'modules=upgradeDowngradeHistory,recommendationTrend,' \
          'financialData,earningsHistory,earningsTrend,industryTrend&' \
          'corsDomain=finance.yahoo.com'


def get_mean_rec(ticker):
    # Fetch the analyst recommendation mean for a ticker, or -1 on failure.
    url = lhs_url + ticker + rhs_url
    r = requests.get(url)
    if not r.ok:
        return -1
    result = r.json()['quoteSummary']['result'][0]
    return result['financialData']['recommendationMean']['fmt']


def read_from_csv(fn):
    # Yield every ticker symbol in the input CSV, row by row.
    with open(fn, 'r') as f:
        reader = csv.reader(f)
        for line in reader:
            for ticker in line:
                yield ticker


def write_to_csv(fn, data):
    # Append one row per ticker; note that no header row is written.
    with open(fn, 'a') as f:
        fieldnames = data[0].keys()
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        for item in data:
            writer.writerow(item)


def assemble_dict(ticker):
    return {
        'ticker': ticker,
        'mean_rec': get_mean_rec(ticker),
        'utc_date': get_date()
    }


def main():
    in_fn = sys.argv[1]
    out_fn = sys.argv[2]
    data = [assemble_dict(ticker) for ticker in read_from_csv(in_fn)]
    write_to_csv(out_fn, data)


if __name__ == '__main__':
    main()
Usage:
python get_mean_recs.py input.csv output.csv
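For illustration, here is the JSON traversal that get_mean_rec relies on, exercised against a hand-made stand-in for Yahoo's quoteSummary payload (the values are invented; only the keys the script actually reads are included):

```python
# A mocked fragment of the quoteSummary response structure.
sample = {
    'quoteSummary': {
        'result': [
            {
                'financialData': {
                    'recommendationMean': {'raw': 2.1, 'fmt': '2.10'}
                }
            }
        ]
    }
}

def extract_mean_rec(payload):
    # Same traversal as get_mean_rec, minus the HTTP call.
    result = payload['quoteSummary']['result'][0]
    return result['financialData']['recommendationMean']['fmt']

print(extract_mean_rec(sample))  # 2.10
```

If Yahoo changes the shape of this payload, the script will raise a KeyError at the corresponding lookup, which is the first place to check when it stops working.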
Source
2017-02-14 23:05:01
Jay
What have you tried? Have you tried [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/)? – 9000
I am using lxml; would you recommend BeautifulSoup? –
Doesn't Yahoo have an API? Using an API is more reliable than scraping, because APIs are meant for use by automated systems while websites usually are not. You can build a great scraper only to find yourself blocked by Yahoo. – halfer