I just wrote a small script for this. It may not work for large repositories, since it does not handle GitHub's rate limiting. It also requires the Python requests package.
#!/usr/bin/env python3
import requests

GITHUB_API_BRANCHES = 'https://%(token)[email protected]/repos/%(namespace)s/%(repository)s/branches'
GITHUB_API_COMMITS = 'https://%(token)[email protected]/repos/%(namespace)s/%(repository)s/commits?sha=%(sha)s&page=%(page)i'


def github_commit_counter(namespace, repository, access_token=''):
    commit_store = list()

    branches = requests.get(GITHUB_API_BRANCHES % {
        'token': access_token,
        'namespace': namespace,
        'repository': repository,
    }).json()

    print('Branch'.ljust(47), 'Commits')
    print('-' * 55)

    for branch in branches:
        page = 1
        branch_commits = 0
        while True:
            # page through the branch's commits until an empty page comes back
            commits = requests.get(GITHUB_API_COMMITS % {
                'token': access_token,
                'namespace': namespace,
                'repository': repository,
                'sha': branch['name'],
                'page': page,
            }).json()
            page_commits = len(commits)
            for commit in commits:
                commit_store.append(commit['sha'])
            branch_commits += page_commits
            if page_commits == 0:
                break
            page += 1
        print(branch['name'].ljust(45), str(branch_commits).rjust(9))

    # deduplicate SHAs so commits shared between branches are counted once
    commit_store = set(commit_store)
    print('-' * 55)
    print('Total'.ljust(42), str(len(commit_store)).rjust(12))


# for private repositories, get your own token from
# https://github.com/settings/tokens
# github_commit_counter('github', 'gitignore', access_token='fnkr:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
github_commit_counter('github', 'gitignore')
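Since the script does not handle rate limiting, one hedged way to bolt it on is a small helper (a sketch, assuming GitHub's documented `X-RateLimit-Remaining` and `X-RateLimit-Reset` response headers) that sleeps until the quota window resets when no requests remain:

```python
import time


def wait_if_rate_limited(headers, now=None, sleep=time.sleep):
    """Pause until the rate-limit window resets when the quota is exhausted.

    `headers` is a response-header mapping such as `response.headers`.
    GitHub reports the remaining quota in X-RateLimit-Remaining and the
    reset time (Unix epoch seconds) in X-RateLimit-Reset.
    Returns the number of seconds waited.
    """
    remaining = int(headers.get('X-RateLimit-Remaining', 1))
    if remaining > 0:
        return 0.0
    reset_at = int(headers.get('X-RateLimit-Reset', 0))
    now = time.time() if now is None else now
    delay = max(reset_at - now, 0.0)
    sleep(delay)
    return delay
```

It could be called after each `requests.get(...)` with the response's headers before requesting the next page; `now` and `sleep` are parameters only so the helper can be tested without real waiting.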
Possible duplicate of [github api: How do I efficiently find the number of commits for a repository?](http://stackoverflow.com/questions/15919539/) –
Not the same question. Thanks though! – SteveCoffman