I have a huge text file (~1 GB) and, sadly, the text editor I use won't read files that large. However, if I can just split it into two or three parts I'll be fine, so, as an exercise, I wanted to write a program in Python to do it.
What I think I want the program to do is to find the size of the file, divide that number into parts, and for each part, read up to that point in chunks, writing to a filename.nnn output file, then read up to the next line break and write that, then close the output file, and so on. Obviously the last output file just copies to the end of the input file.
Can you help me with the key filesystem-related parts: getting the file size, reading and writing in chunks, and reading to a line break?
I'll be writing this code test-first, so there's no need to give me a complete answer, unless it's a one-liner ;-)
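For reference, a minimal sketch of just those filesystem pieces (getting the size, reading a fixed-size block, and reading up to the next newline), using a placeholder file name and block size:

import os

path = "bigfile.txt"            # placeholder input file
size = os.path.getsize(path)    # total size in bytes

with open(path, "rb") as f:
    chunk = f.read(64 * 1024)   # read a fixed-size block of bytes
    rest = f.readline()         # read up to and including the next newline
print(size, len(chunk), len(rest))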
Answer 0 (score: 32)
Linux has a split command:

split -l 100000 file.txt

will split the file into parts of 100,000 lines each.
Answer 1 (score: 15)
Check out os.stat() for the file size and file.readlines([sizehint]). Those two functions should be all you need for the reading part, and hopefully you know how to do the writing :)
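A minimal sketch of how those two calls could be combined (the file name and part count are placeholders; readlines(sizehint) returns whole lines, so the splits land on line boundaries):

import os

infile = "bigfile.txt"                      # placeholder input file
part_size = os.stat(infile).st_size // 3    # aim for roughly three parts

with open(infile) as f:
    part = 0
    while True:
        lines = f.readlines(part_size)      # whole lines totalling about part_size bytes
        if not lines:
            break
        with open("%s.%03d" % (infile, part), "w") as out:
            out.writelines(lines)
        part += 1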
Answer 2 (score: 9)
As an alternative approach, using the logging library:
>>> import logging.handlers
>>> log = logging.getLogger()
>>> fh = logging.handlers.RotatingFileHandler("D://filename.txt",
...     maxBytes=2**20*100, backupCount=100)  # 100 MB each, up to a maximum of 100 files
>>> log.addHandler(fh)
>>> log.setLevel(logging.INFO)
>>> f = open("D://biglog.txt")
>>> while True:
...     line = f.readline()
...     if not line:          # stop at end of file instead of looping forever
...         break
...     log.info(line.strip())
Your files will end up looking like this:
filename.txt (end of file)
filename.txt.1
filename.txt.2
...
filename.txt.10 (start of file)
This is a quick and easy way to make a huge log file match your RotatingFileHandler implementation.
Answer 3 (score: 5)
This generator method is a (slow) way to get a slice of lines without blowing up your memory.
import itertools

def slicefile(filename, start, end):
    lines = open(filename)
    return itertools.islice(lines, start, end)

out = open("/blah.txt", "w")
for line in slicefile("/python27/readme.txt", 10, 15):
    out.write(line)
Answer 4 (score: 4)
You can use wc and split (see the respective manpages) to get the desired effect. In bash:

split -dl$((`wc -l 'filename'|sed 's/ .*$//'` / 3 + 1)) filename filename-chunk.

produces 3 parts with the same number of lines (with a rounding error in the last one, of course), named filename-chunk.00 to filename-chunk.02.
Answer 5 (score: 4)

import mmap

def getSomeChunk(filename, start, length):
    # Map the whole file into memory and return length bytes starting at offset start
    fobj = open(filename, 'r+b')
    m = mmap.mmap(fobj.fileno(), 0)
    return m[start:start + length]
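A possible usage sketch building on that helper (the file name and chunk size are assumptions; note that this splits on byte boundaries, not line boundaries):

import os

CHUNK = 100 * 2**20                       # 100 MB per part (assumed)
size = os.path.getsize("bigfile.txt")     # placeholder file name
for i, start in enumerate(range(0, size, CHUNK)):
    with open("bigfile.txt.%03d" % i, "wb") as out:
        out.write(getSomeChunk("bigfile.txt", start, CHUNK))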
Answer 6 (score: 4)
While Ryan Ginstrom's answer is correct, it does take longer than it should (as he has already noted). Here is a way to get around the multiple calls to itertools.islice by successively iterating over the open file descriptor:
def splitfile(infilepath, chunksize):
    fname, ext = infilepath.rsplit('.', 1)
    i = 0
    written = False
    with open(infilepath) as infile:
        while True:
            outfilepath = "{}{}.{}".format(fname, i, ext)
            with open(outfilepath, 'w') as outfile:
                for line in (infile.readline() for _ in range(chunksize)):
                    outfile.write(line)
                written = bool(line)
            if not written:
                break
            i += 1
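A usage sketch, assuming a hypothetical biglog.txt and 100,000 lines per part; it writes biglog0.txt, biglog1.txt, and so on until the input is exhausted:

splitfile("biglog.txt", 100000)   # ~100,000 lines per output file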
Answer 7 (score: 4)
There is now a pypi module available that can split files of any size into chunks. Check it out.
Answer 8 (score: 2)
I wrote the program and it seems to work fine. Thanks to Kamil Kisiel for getting me started.
(Note: FileSizeParts() is a function not shown here.)
Later I may do a version that does a binary read to see if it's any faster.
import os

def Split(inputFile, numParts, outputName):
    fileSize = os.stat(inputFile).st_size
    parts = FileSizeParts(fileSize, numParts)
    openInputFile = open(inputFile, 'r')
    outPart = 1
    for part in parts:
        if openInputFile.tell() < fileSize:
            fullOutputName = outputName + os.extsep + str(outPart)
            outPart += 1
            openOutputFile = open(fullOutputName, 'w')
            openOutputFile.writelines(openInputFile.readlines(part))
            openOutputFile.close()
    openInputFile.close()
    return outPart - 1
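FileSizeParts() isn't shown in the answer; a plausible version (purely an assumption about what it does) would divide the total size into numParts roughly equal byte counts:

def FileSizeParts(fileSize, numParts):
    # Hypothetical helper: divide fileSize bytes into numParts roughly equal parts
    base = fileSize // numParts
    parts = [base] * numParts
    parts[-1] += fileSize - base * numParts   # last part absorbs the remainder
    return parts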
Answer 9 (score: 2)
usage - split.py filename splitsizeinkb
import os
import sys

def getfilesize(filename):
    with open(filename, "rb") as fr:
        fr.seek(0, 2)  # move to end of the file
        size = fr.tell()
        print("getfilesize: size: %s" % size)
        return size

def splitfile(filename, splitsize):
    # Open original file in read-only mode
    if not os.path.isfile(filename):
        print("No such file as: \"%s\"" % filename)
        return

    filesize = getfilesize(filename)
    with open(filename, "rb") as fr:
        counter = 1
        originalfilename = filename.split(".")
        readlimit = 5000  # read 5 KB at a time
        n_splits = filesize // splitsize
        print("splitfile: No of splits required: %s" % str(n_splits))
        for i in range(n_splits + 1):
            chunks_count = int(splitsize) // int(readlimit)
            data_5kb = fr.read(readlimit)  # read the first chunk of this part
            # Create split files
            print("chunks_count: %d" % chunks_count)
            with open(originalfilename[0] + "_{id}.".format(id=str(counter)) + originalfilename[1], "ab") as fw:
                fw.seek(0)
                fw.truncate()  # truncate any existing file with this name
                while data_5kb:
                    fw.write(data_5kb)
                    if chunks_count:
                        chunks_count -= 1
                        data_5kb = fr.read(readlimit)
                    else:
                        break
            counter += 1

if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("Filename or splitsize not provided: Usage: filesplit.py filename splitsizeinkb ")
    else:
        filesize = int(sys.argv[2]) * 1000  # convert the KB argument to bytes
        filename = sys.argv[1]
        splitfile(filename, filesize)
Answer 10 (score: 1)
This worked for me
import os

fil = "inputfile"
outfil = "outputfile"
f = open(fil, 'r')
numbits = 1000000000
for i in range(0, os.stat(fil).st_size // numbits + 1):
    o = open(outfil + str(i), 'w')
    segment = f.readlines(numbits)
    for c in range(0, len(segment)):
        o.write(segment[c])   # readlines() keeps line endings, so no extra "\n" is needed
    o.close()
Answer 11 (score: 0)
Or, a python version of wc and split:
lines = 0
for l in open(filename): lines += 1
Then some code that reads the first lines/3 into one file, the next lines/3 into another, and so on.
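A minimal sketch of that second step, continuing from the line-counting snippet above and assuming the file should be cut into three parts on line boundaries (the output names are placeholders):

per_part = lines // 3 + 1                 # lines per output file, rounding up
src = open(filename)
for part in range(3):
    with open("%s.%d" % (filename, part), "w") as out:
        for _ in range(per_part):
            line = src.readline()
            if not line:                  # end of input reached
                break
            out.write(line)
src.close()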
Answer 12 (score: 0)
I had a requirement to split csv files for import into Dynamics CRM, since the file size limit for import there is 8 MB and the files we receive are much larger. This program lets the user enter FileNames and LinesPerFile and then splits the specified files into the requested number of lines. I can't believe how fast it works!
# user input FileNames and LinesPerFile
FileCount = 1
FileNames = []
while True:
    FileName = raw_input('File Name ' + str(FileCount) + ' (enter "Done" after last File):')
    FileCount = FileCount + 1
    if FileName == 'Done':
        break
    else:
        FileNames.append(FileName)
LinesPerFile = raw_input('Lines Per File:')
LinesPerFile = int(LinesPerFile)

for FileName in FileNames:
    File = open(FileName)

    # get Header row
    for Line in File:
        Header = Line
        break

    FileCount = 0
    Linecount = 1
    for Line in File:
        # skip Header in File
        if Line == Header:
            continue
        # create NewFile with Header every [LinesPerFile] Lines
        if Linecount % LinesPerFile == 1:
            FileCount = FileCount + 1
            NewFileName = FileName[:FileName.find('.')] + '-Part' + str(FileCount) + FileName[FileName.find('.'):]
            NewFile = open(NewFileName, 'w')
            NewFile.write(Header)
        NewFile.write(Line)
        Linecount = Linecount + 1
    NewFile.close()
Answer 13 (score: 0)
Here is a python script you can use to split large files using subprocess:
"""
Splits the file into the same directory and
deletes the original file
"""

import subprocess
import sys
import os

SPLIT_FILE_CHUNK_SIZE = '5000'
SPLIT_PREFIX_LENGTH = '2'  # subprocess expects a string, i.e. 2 = aa, ab, ac etc..

if __name__ == "__main__":

    file_path = sys.argv[1]
    # i.e. split -a 2 -l 5000 t/some_file.txt ~/tmp/t/
    subprocess.call(["split", "-a", SPLIT_PREFIX_LENGTH, "-l", SPLIT_FILE_CHUNK_SIZE, file_path,
                     os.path.dirname(file_path) + '/'])

    # Remove the original file once done splitting
    try:
        os.remove(file_path)
    except OSError:
        pass
You can call it externally:

import os
fs_result = os.system("python file_splitter.py {}".format(local_file_path))

You can also import subprocess and run it directly in your program.

The issue with this approach is high memory usage: subprocess creates a fork with a memory footprint the same size as your process, and if your process memory is already large, it doubles for the duration of the call. The same thing happens with os.system.

Here is another pure python way of doing this, although I haven't tested it on huge files; it will be slower but leaner on memory:
import unicodecsv

CHUNK_SIZE = 5000

def yield_csv_rows(reader, chunk_size):
    """
    Opens file to ingest, reads each line to return list of rows
    Expects the header is already removed
    Replacement for ingest_csv
    :param reader: dictReader
    :param chunk_size: int, chunk size
    """
    chunk = []
    for i, row in enumerate(reader):
        if i % chunk_size == 0 and i > 0:
            yield chunk
            del chunk[:]
        chunk.append(row)
    yield chunk

with open(local_file_path, 'rb') as f:
    header = f.readline().strip().replace('"', '')
    reader = unicodecsv.DictReader(f, fieldnames=header.split(','), delimiter=',', quotechar='"')
    chunks = yield_csv_rows(reader, CHUNK_SIZE)
    for chunk in chunks:
        if not chunk:
            break
        # Do something with your chunk here
And here is another example using readlines():
"""
Simple example using readlines()
where the 'file' is generated via:
seq 10000 > file
"""

CHUNK_SIZE = 5

def yield_rows(reader, chunk_size):
    """
    Yield row chunks
    """
    chunk = []
    for i, row in enumerate(reader):
        if i % chunk_size == 0 and i > 0:
            yield chunk
            del chunk[:]
        chunk.append(row)
    yield chunk

def batch_operation(data):
    for item in data:
        print(item)

with open('file', 'r') as f:
    chunks = yield_rows(f.readlines(), CHUNK_SIZE)
    for _chunk in chunks:
        batch_operation(_chunk)
Answer 14 (score: 0)
You can split any file into chunks as shown below, where CHUNK_SIZE is 500000 bytes (500 KB) and content is the contents of any file:
def get_chunk(content, size):
    # Yield successive size-byte slices of content
    for i in range(0, len(content), size):
        yield content[i:i + size]

for idx, val in enumerate(get_chunk(content, CHUNK_SIZE)):
    data = val
    index = idx
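A possible way to use this to actually write the parts to disk (the file names are assumptions, and note that it reads the whole input into memory first):

CHUNK_SIZE = 500000                       # 500 KB per part, as in the answer
with open("bigfile.txt", "rb") as f:      # placeholder input file
    content = f.read()                    # loads the entire file into memory
for idx, val in enumerate(get_chunk(content, CHUNK_SIZE)):
    with open("bigfile_part_%d.txt" % idx, "wb") as out:
        out.write(val)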