I have the following script for splitting a 606 MB CSV file into chunks of less than 100 MB each. I am reusing the code from this source: https://gist.github.com/jrivero/1085501
I do get multiple files as output, but each one contains only 1 record. What might I be doing wrong?
It is called like this:
csv_splitter.split(open('C:/folder1/Desktop/toolbox//big_file.csv', 'r'));
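For reference, the same function accepts explicit keyword arguments (its signature is shown below), so the chunk size and output location can be pinned down instead of relying on the defaults. A minimal sketch of such a call, assuming the parameter names from the gist and a purely illustrative row_limit value:

from toolbox import csv_splitter

# Illustrative call with explicit arguments; row_limit=500000 is an assumed
# value, not one tuned to guarantee chunks under 100 MB.
with open('C:/folder1/Desktop/toolbox/big_file.csv', 'r', newline='') as source:
    csv_splitter.split(
        source,
        row_limit=500000,                          # rows per output chunk
        output_name_template='output_%s.csv',      # numbered output files
        output_path='C:/folder1/Desktop/toolbox',  # where to write the chunks
        keep_headers=True                          # repeat the header in every chunk
    )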
The code in csv_splitter.py is:
import os


def split(filehandler, delimiter=',', row_limit=10000,
          output_name_template='output_%s.csv', output_path='.', keep_headers=True):
    """
    Splits a CSV file into multiple pieces.

    A quick bastardization of the Python CSV library.

    Arguments:
        `row_limit`: The number of rows you want in each output file. 10,000 by default.
        `output_name_template`: A %s-style template for the numbered output files.
        `output_path`: Where to stick the output files.
        `keep_headers`: Whether or not to print the headers in each output file.

    Example usage:
        >> from toolbox import csv_splitter;
        >> csv_splitter.split(open('/home/ben/input.csv', 'r'));
    """
    import csv
    reader = csv.reader(filehandler, delimiter=delimiter)
    current_piece = 1
    current_out_path = os.path.join(
        output_path,
        output_name_template % current_piece
    )
    current_out_writer = csv.writer(open(current_out_path, 'w'), delimiter=delimiter)
    current_limit = row_limit
    if keep_headers:
        headers = next(reader)
        current_out_writer.writerow(headers)
    for i, row in enumerate(reader):
        if i + 1 > current_limit:
            current_piece += 1
            current_limit = row_limit * current_piece
            current_out_path = os.path.join(
                output_path,
                output_name_template % current_piece
            )
            current_out_writer = csv.writer(open(current_out_path, 'w'), delimiter=delimiter)
            if keep_headers:
                current_out_writer.writerow(headers)
        current_out_writer.writerow(row)
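For reference, the output can then be inspected with a small check like the one below; a minimal sketch, assuming the default output_name_template of 'output_%s.csv' and that the chunks are written to the current working directory:

import glob
import os

# Report how many lines and how many megabytes each generated chunk holds.
for path in sorted(glob.glob('output_*.csv')):
    with open(path, 'r') as chunk:
        line_count = sum(1 for _ in chunk)
    size_mb = os.path.getsize(path) / (1024 * 1024)
    print(f'{path}: {line_count} lines, {size_mb:.1f} MB')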