For some reason I want to dump a table from a database (sqlite3) in the form of a csv file. I'm using a Python script with elixir (based on SQLAlchemy) to modify the database. I was wondering whether there is a way to dump the tables I'm using to csv.
I've seen the SQLAlchemy serializer, but it doesn't seem to be what I want. Am I doing it wrong? Should I call the sqlite3 Python module after closing my SQLAlchemy session, to dump to a file instead? Or should I roll something of my own?
Answer 0 (score: 30)
Slight modification of Peter Hansen's answer below, to use SQLAlchemy instead of raw database access
import csv
outfile = open('mydump.csv', 'w', newline='')  # use 'wb' on Python 2
outcsv = csv.writer(outfile)
records = session.query(MyModel).all()
# write one row per mapped object, using the mapper to enumerate the columns
for curr in records:
    outcsv.writerow([getattr(curr, column.name) for column in MyModel.__mapper__.columns])
# (outcsv.writerows(records) would only work if the instances were plain sequences)
outfile.close()
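If you also want a header row, the same mapper metadata can supply it just before the data loop. A minimal sketch, assuming the MyModel mapping and the outcsv writer from the snippet above:
# hedged addition: emit the column names once, before writing the data rows
outcsv.writerow([column.name for column in MyModel.__mapper__.columns])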
Answer 1 (score: 22)
There are numerous ways to achieve this, including a simple os.system() call to the sqlite3 utility if that's installed, but here's roughly how I'd do it from Python:
import sqlite3
import csv
con = sqlite3.connect('mydatabase.db')
outfile = open('mydump.csv', 'w', newline='')  # use 'wb' on Python 2
outcsv = csv.writer(outfile)
cursor = con.execute('select * from mytable')
# dump column titles (optional)
outcsv.writerow(x[0] for x in cursor.description)
# dump rows
outcsv.writerows(cursor.fetchall())
outfile.close()
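For comparison, the shell route mentioned above could look roughly like the sketch below, using subprocess rather than a raw os.system() string; it assumes the sqlite3 command-line tool is installed and on the PATH:
import subprocess
# hedged sketch: let the sqlite3 CLI do the CSV formatting and redirect its output to a file
with open('mydump.csv', 'w') as outfile:
    subprocess.run(
        ['sqlite3', '-header', '-csv', 'mydatabase.db', 'select * from mytable;'],
        stdout=outfile,
        check=True,
    )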
Answer 2 (score: 17)
I adapted the above examples to my sqlalchemy-based code like this:
import csv
import sqlalchemy as sqAl
metadata = sqAl.MetaData()
engine = sqAl.create_engine('sqlite:///%s' % 'data.db')
metadata.bind = engine
mytable = sqAl.Table('sometable', metadata, autoload=True)
db_connection = engine.connect()
select = sqAl.sql.select([mytable])
result = db_connection.execute(select)
fh = open('data.csv', 'wb')  # use 'w', newline='' on Python 3
outcsv = csv.writer(fh)
outcsv.writerow(result.keys())
outcsv.writerows(result)
fh.close()
This works with sqlalchemy 0.7.9. I suppose it will work with all sqlalchemy table and result objects.
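On newer SQLAlchemy releases (1.4/2.0) a few of these calls have changed: reflection uses autoload_with, and select() no longer takes a list. A rough sketch of the same dump against the same data.db/sometable names, not a drop-in replacement for the 0.7.9 code above:
import csv
import sqlalchemy as sqAl
# hedged sketch for SQLAlchemy 1.4+/2.0
engine = sqAl.create_engine('sqlite:///data.db')
metadata = sqAl.MetaData()
mytable = sqAl.Table('sometable', metadata, autoload_with=engine)
with engine.connect() as conn, open('data.csv', 'w', newline='') as fh:
    result = conn.execute(sqAl.select(mytable))
    outcsv = csv.writer(fh)
    outcsv.writerow(result.keys())   # column names
    outcsv.writerows(result)         # row tuples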
Answer 3 (score: 3)
import csv

with open('dump.csv', 'w', newline='') as f:  # use 'wb' on Python 2
    out = csv.writer(f)
    out.writerow(['id', 'description'])
    for item in session.query(Queue).all():
        out.writerow([item.id, item.description])
I found this useful if you don't mind hand-crafting the column labels.
Answer 4 (score: 1)
import csv
f = open('ratings.csv', 'w', newline='')
out = csv.writer(f)
out.writerow(['id', 'user_id', 'movie_id', 'rating'])
# adjust the attribute names below to match your mapped model's columns
for item in db.query.all():
    out.writerow([item.id, item.user_id, item.movie_id, item.rating])
f.close()
Answer 5 (score: 0)
In a modular way: an example using sqlalchemy with automap and mysql.
database.py:
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine
Base = automap_base()
engine = create_engine('mysql://user:pass@localhost:3306/database_name', echo=True)
Base.prepare(engine, reflect=True)
# Map the tables
State = Base.classes.states
session = Session(engine, autoflush=False)
export_to_csv.py:
from database import *
import csv

def export():
    q = session.query(State)
    file = './data/states.csv'
    with open(file, 'w', newline='') as csvfile:
        outcsv = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        header = State.__table__.columns.keys()
        outcsv.writerow(header)
        for record in q.all():
            outcsv.writerow([getattr(record, c) for c in header])

if __name__ == "__main__":
    export()
Result:
name,abv,country,is_state,is_lower48,slug,latitude,longitude,population,area
Alaska,AK,US,Y,N,Alaska,61.370716,-152.404419,710231,571951.25
Alabama,AL,US,Y,Y,Alabama,32.806671,-86.79113,4779736,50744.0
Arkansas,AR,US,Y,Y,Arkansas,34.969704,-92.373123,2915918,52068.17
Arizona,AZ,US,Y,Y,Arizona,33.729759,-111.431221,6392017,113634.57
California,CA,US,Y,Y,California,36.116203,-119.681564,37253956,155939.52
Colorado,CO,US,Y,Y,Colorado,39.059811,-105.311104,5029196,103717.53
Connecticut,CT,US,Y,Y,Connecticut,41.597782,-72.755371,3574097,4844.8
District of Columbia,DC,US,n,n,District of Columbia,38.897438,-77.026817,601723,68.34
Delaware,DE,US,Y,Y,Delaware,39.318523,-75.507141,897934,1953.56
Florida,FL,US,Y,Y,Florida,27.766279,-81.686783,18801310,53926.82
Georgia,GA,US,Y,Y,Georgia,33.040619,-83.643074,9687653,57906.14
Answer 6 (score: 0)
I know this is old, but I just ran into this problem, and this is how I solved it
import os
import pandas as pd
from sqlalchemy import create_engine

basedir = os.path.abspath(os.path.dirname(__file__))
sql_engine = create_engine('sqlite:///' + os.path.join(basedir, 'single_file_app.db'), echo=False)
results = pd.read_sql_query('select * from users', sql_engine)
results.to_csv(os.path.join(basedir, 'mydump2.csv'), index=False, sep=";")
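Building on the same idea, every table in the database could be dumped in one pass by asking SQLAlchemy's inspector for the table names first. A rough sketch only; the one-CSV-per-table naming is an assumption, and it reuses sql_engine and basedir from the snippet above:
from sqlalchemy import inspect
# hedged sketch: write each table the inspector reports to its own CSV file
for table_name in inspect(sql_engine).get_table_names():
    df = pd.read_sql_query('select * from "%s"' % table_name, sql_engine)
    df.to_csv(os.path.join(basedir, table_name + '.csv'), index=False)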
Answer 7 (score: 0)
I spent a lot of time searching for a solution to this problem and finally created something like this:
import csv
from sqlalchemy import inspect

with open(file_to_write, 'w', newline='') as file:
    out_csv = csv.writer(file, lineterminator='\n')
    # all mapped column names except the first ('id') column
    columns = [column.name for column in inspect(Movies).columns][1:]
    out_csv.writerow(columns)
    session_3 = session_maker()
    extract_query = [getattr(Movies, col) for col in columns]
    for mov in session_3.query(*extract_query):
        out_csv.writerow(mov)
    session_3.close()
It creates a CSV file with the column names and a dump of the whole "movies" table, without the "id" primary column.