Is there an API call, or any script I have failed to turn up, that will pull all of my Gists from GitHub into an external git repo, or just return a list of their names? I know each one is a separate git repo, so I assume the best I can do is get the latter, then script pulling all of them down to my local box.
EDIT 1: I know about pulling and pushing git repos from one service to another; I am specifically looking for the 411 on collecting an authoritative list of all the Gists I own, private and public. I also thought this might be useful to others. It is not so much about migration as a backup strategy . . . of sorts.
EDIT 2: So, it seems this may not be possible. I apparently did not Google hard enough to find the updated GitHub/Gist API. Other API calls work with simple curl commands, but not the v1 API for Gists. Still, the API docs say TBD for listing all private and public gists, so I think that puts the whole thing on hold unless some enlightened soul hooks a brotha up.
$ curl http://github.com/api/v2/json/repos/show/alharaka
{"repositories":[{"url":"https://github.com/alharaka/babushka","has_wiki":true,"homepage":"http:
... # tons of more output
echo $?
0
$
This one was not so hot.
$ curl https://gist.github.com/api/v1/:format/gists/:alharaka
$ echo $?
0
$
EDIT 3: Before anyone asks, I noticed differences between the API versions; this "wonderful hack" did not help either. Still pretty cool, though.
$ curl https://gist.github.com/api/v2/:format/gists/:alharaka # Notice v2 instead of v1
$ echo $?
0
$
Answer 0 (score: 19)
Version 3 of the GitHub API allows this in a pretty simple way:
https://api.github.com/users/koraktor/gists
gets you a list of all of the user's gists, and that list provides various URLs, including the API URL of each individual gist, such as
https://api.github.com/gists/921286
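For illustration, the listing call above can be wrapped in a small Python sketch. The helper names here are my own; `git_pull_url` is the per-gist clone-URL field the v3 API returns:

```python
import json
from urllib.request import Request, urlopen

def fetch_gists(user):
    """Fetch the public gist list for a user from the v3 API."""
    req = Request("https://api.github.com/users/%s/gists" % user,
                  headers={"User-Agent": "gist-backup"})
    with urlopen(req) as resp:
        return json.load(resp)

def gist_clone_urls(gists):
    """Pull each gist's git clone URL out of the API's JSON payload."""
    return [g["git_pull_url"] for g in gists]
```

Something like `gist_clone_urls(fetch_gists("koraktor"))` then yields the URLs you would feed to `git clone`.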
Answer 1 (score: 13)
An adaptation to API v3 of nicerobot's script, which was originally written for API v1:
#!/usr/bin/env python
# Clone or update all a user's gists
# curl -ks https://raw.github.com/gist/5466075/gist-backup.py | USER=fedir python
# USER=fedir python gist-backup.py

import json
import urllib
from subprocess import call
from urllib import urlopen
import os
import math

USER = os.environ['USER']

perpage = 30.0
userurl = urlopen('https://api.github.com/users/' + USER)
public_gists = json.load(userurl)
gistcount = public_gists['public_gists']
print "Found gists : " + str(gistcount)
pages = int(math.ceil(float(gistcount) / perpage))
print "Found pages : " + str(pages)

f = open('./contents.txt', 'w+')

for page in range(pages):
    pageNumber = str(page + 1)
    print "Processing page number " + pageNumber
    pageUrl = 'https://api.github.com/users/' + USER + '/gists?page=' + pageNumber + '&per_page=' + str(int(perpage))
    u = urlopen(pageUrl)
    gists = json.load(u)
    startd = os.getcwd()
    for gist in gists:
        gistd = gist['id']
        gistUrl = 'git://gist.github.com/' + gistd + '.git'
        if os.path.isdir(gistd):
            os.chdir(gistd)
            call(['git', 'pull', gistUrl])
            os.chdir(startd)
        else:
            call(['git', 'clone', gistUrl])
        if gist['description'] == None:
            description = ''
        else:
            description = gist['description'].encode('utf8').replace("\r", ' ').replace("\n", ' ')
        print >> f, gist['id'], gistUrl, description
Answer 2 (score: 5)
A version of @Fedir's script that accounts for GitHub's pagination (useful if you have several hundred gists):
#!/usr/bin/env python
# Clone or update all a user's gists
# curl -ks https://raw.github.com/gist/5466075/gist-backup.py | USER=fedir python
# USER=fedir python gist-backup.py

import json
import urllib
from subprocess import call
from urllib import urlopen
import os
import math

USER = os.environ['USER']

perpage = 30.0
userurl = urlopen('https://api.github.com/users/' + USER)
public_gists = json.load(userurl)
gistcount = public_gists['public_gists']
print "Found gists : " + str(gistcount)
pages = int(math.ceil(float(gistcount) / perpage))
print "Found pages : " + str(pages)

f = open('./contents.txt', 'w+')

for page in range(pages):
    pageNumber = str(page + 1)
    print "Processing page number " + pageNumber
    pageUrl = 'https://api.github.com/users/' + USER + '/gists?page=' + pageNumber + '&per_page=' + str(int(perpage))
    u = urlopen(pageUrl)
    gists = json.load(u)
    startd = os.getcwd()
    for gist in gists:
        gistd = gist['id']
        gistUrl = 'git://gist.github.com/' + gistd + '.git'
        if os.path.isdir(gistd):
            os.chdir(gistd)
            call(['git', 'pull', gistUrl])
            os.chdir(startd)
        else:
            call(['git', 'clone', gistUrl])
Answer 3 (score: 3)
Based on the hint in this answer, I wrote this simple Python script, which does the trick for me.
It is very minimal code, with hardly any error checking, and it clones all of the user's gists into the current directory.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Clone all gists of GitHub username given on the command line."""

import subprocess
import sys

import requests

if len(sys.argv) > 1:
    gh_user = sys.argv[1]
else:
    print("Usage: clone-gists.py <GitHub username>")
    sys.exit(1)

req = requests.get('https://api.github.com/users/%s/gists' % gh_user)

for gist in req.json():
    ret = subprocess.call(['git', 'clone', gist['git_pull_url']])
    if ret != 0:
        print("ERROR cloning gist %s. Please check output." % gist['id'])
See https://gist.github.com/SpotlightKid/042491a9a2987af04a5a for a version that handles updates as well.
Answer 4 (score: 2)
I wrote a quick node.js script as an exercise. It downloads all gists and saves them with the same filenames as the originals, in folders that match each gist's description. https://gist.github.com/thomastraum/5227541
var request = require('request')
    , path = require('path')
    , fs = require('fs')
    , url = "https://api.github.com/users/thomastraum/gists"
    , savepath = './gists';

request(url, function (error, response, body) {
    if (!error && response.statusCode == 200) {
        gists = JSON.parse( body );
        gists.forEach( function(gist) {
            console.log( "description: ", gist.description );
            var dir = savepath + '/' + gist.description;
            fs.mkdir( dir, function(err){
                for(var file in gist.files){
                    var raw_url = gist.files[file].raw_url;
                    var filename = gist.files[file].filename;
                    console.log( "downloading... " + filename );
                    request(raw_url).pipe(fs.createWriteStream( dir + '/' + filename ));
                }
            });
        });
    }
});
Answer 5 (score: 2)
In addition to Thomas Traum's answer: it seems a User-Agent header is now a must: http://developer.github.com/v3/#user-agent-required.
So I made my own take at https://github.com/sanusart/gists-backup. It is also aware of pagination, duplicate descriptions, and missing descriptions.
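As an aside, setting that required header from Python's standard library might look like the sketch below; the agent string itself is arbitrary, any descriptive value works:

```python
from urllib.request import Request, urlopen

# GitHub's v3 API rejects requests that lack a User-Agent header,
# so attach one explicitly before opening the URL.
req = Request("https://api.github.com/users/octocat/gists",
              headers={"User-Agent": "my-gist-backup-script"})
# urlopen(req) would then perform the request with the header set.
```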
Answer 6 (score: 1)
This ruby gem seems to help with your problem. I haven't tried it yet, but it looks promising.
First, install it:
gem install gisty
Then you need to add
export GISTY_DIR="$HOME/dev/gists"
to your .bashrc or .zshrc. This directory is where your gists are saved.
You also need to run
git config --global github.user your_id
git config --global github.token your_token
to add these settings to your .gitconfig.
Usage
gisty post file1 file2 ...
posts file1 and file2 to your gists
gisty private_post file1 file2 ...
posts file1 and file2 privately
gisty sync
syncs with all of your gists
gisty pull_all
pulls them to the local repo
gisty list
lists the cloned local gist repos
Answer 7 (score: 0)
If all you need is to download all gists from a particular user, then this simple Python script will help.
The gist information for a particular user is exposed via the API at
"https://api.github.com/users/" + username + "/gists"
You can simply loop through the JSON exposed by the API, get the list of gists, and either clone them or download each gist via its raw URL. The simple script below loops through the JSON, pulls out the file name and raw URL, downloads each gist, and saves it to a local folder.
import requests

# Replace username with correct username
url = "https://api.github.com/users/" + username + "/gists"

resp = requests.get(url)
gists = resp.json()

for gist in gists:
    for file in gist["files"]:
        fname = gist["files"][file]["filename"]
        furl = gist["files"][file]["raw_url"]
        print("{}:{}".format(fname, furl))  # This lists out all gists

        # Use this to download all gists
        pyresp = requests.get(furl)
        with open("../folder/" + fname, "wb") as pyfile:
            for chunk in pyresp.iter_content(chunk_size=1024):
                if chunk:
                    pyfile.write(chunk)
        print("{} downloaded successfully".format(fname))
Answer 8 (score: 0)
Update, March 2021 (Python 3)
This works nicely even when a user has lots of gists with the same file name.
import requests, json, time, uuid

headers = {"content-type": "application/json"}
url = 'https://api.github.com/users/ChangeToYourTargetUser/gists?per_page=100&page='

for page in range(1, 100):  # do pages start at 1 or 0?
    print('page: ' + str(page))
    r = requests.get(url + str(page), headers=headers)
    metadata_file = './data/my_gist_list.json'

    # Getting metadata
    prettyJson = json.dumps(r.json(), indent=4, sort_keys=True)
    f = open(metadata_file, 'w')
    f.write(prettyJson)
    print('Metadata obtained as {}'.format(metadata_file))

    # Downloading files
    data = r.json()
    counter = 0
    for i in data:
        time.sleep(1.1)
        files_node = i['files']
        file_name = [k for k in files_node][0]
        r = requests.get(files_node[file_name]['raw_url'])
        f = open('./data/{}'.format(str(uuid.uuid4())), 'w')
        f.write(r.text)
        f.close()
        print('Download' + str(i))
        counter += 1

    print('{} files successfully downloaded.'.format(counter))
Answer 9 (score: 0)
I use this, and it works like a charm!
# first: mkdir user && cd user && cp /path/to/get_gists.py .
# python3 get_gists.py user
import requests
import sys
from subprocess import call

user = sys.argv[1]

r = requests.get('https://api.github.com/users/{0}/gists'.format(user))

for i in r.json():
    call(['git', 'clone', i['git_pull_url']])
    description_file = './{0}/description.txt'.format(i['id'])
    with open(description_file, 'w') as f:
        f.write('{0}\n'.format(i['description']))
Answer 10 (score: 0)
How about the GitHub CLI?
brew install gh
gh auth login
gh gist list [flags]
Options:
-L, --limit int Maximum number of gists to fetch (default 10)
--public Show only public gists
--secret Show only secret gists
gh gist clone <gist> [<directory>] [-- <gitflags>...]
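The two commands above can be combined into a small backup loop. This is a sketch, assuming `gh` is installed and `gh auth login` has been run, and that the gist id is the first tab-separated column of `gh gist list` output:

```shell
# Clone every gist reported by `gh gist list` into the current directory.
clone_all_gists() {
    gh gist list --limit 1000 | cut -f1 | while read -r id; do
        gh gist clone "$id"
    done
}
```

Running `clone_all_gists` in an empty backup directory then mirrors each gist as its own local git repo.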