I have this very simple Groovy code:
sql.eachRow("""SELECT
    LOOP_ID,
    FLD_1,
    ... 20 more fields
    FLD_20
    FROM MY_TABLE ORDER BY LOOP_ID""") { res ->
    if (oldLoopId != res.loop_id) {
        oldLoopId = res.loop_id
        fileToWrite = new File("MYNAME_${type}_${res.loop_id}_${today.format('YYYYmmDDhhMM')}.txt")
        fileToWrite.append("20 fields header\n")
    }
    fileToWrite.append("${res.FLD_1}|${res.FLD_2}| ... |${res.FLD_20}\n")
  }
}
It selects rows from a table and writes them out to files; for each new loop_id it creates a new file. The problem is that writing a 50 MB file takes about 15 minutes.
How can I make it faster?
Answer 0 (score: 1)
Try writing through a BufferedWriter instead of calling append directly:
def writer = null
def oldLoopId = null
sql.eachRow("""SELECT
    LOOP_ID,
    FLD_1,
    ... 20 more fields
    FLD_20
    FROM MY_TABLE ORDER BY LOOP_ID""") { res ->
    if (oldLoopId != res.loop_id) {
        oldLoopId = res.loop_id
        writer?.close()  // finish the previous loop_id's file before starting a new one
        def fileToWrite = new File("MYNAME_${type}_${res.loop_id}_${today.format('YYYYmmDDhhMM')}.txt")
        writer = fileToWrite.newWriter()  // one buffered writer per file, kept open across rows
        writer.append("20 fields header\n")
    }
    writer.append("${res.FLD_1}|${res.FLD_2}| ... |${res.FLD_20}\n")
}
writer?.close()  // flush and close the last file
File::withWriter closes the resource automatically, but to use it here you would need extra round trips to the DB: first fetch all the loop_id values, then fetch the data for each one.
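For illustration, here is a self-contained Java sketch of that two-pass idea (Groovy compiles essentially the same constructs). The DB access is replaced by an in-memory list and all names are hypothetical; try-with-resources plays the role of withWriter, closing and flushing each file automatically:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TwoPassWriter {
    public static void main(String[] args) throws IOException {
        // Stand-in for the DB rows: (loop_id, payload) pairs, already ordered by loop_id.
        List<String[]> rows = List.of(
                new String[]{"1", "a|b"},
                new String[]{"1", "c|d"},
                new String[]{"2", "e|f"});

        // Pass 1: collect the rows per loop_id (in the real case, one query per loop_id).
        Map<String, StringBuilder> byLoopId = new LinkedHashMap<>();
        for (String[] row : rows) {
            byLoopId.computeIfAbsent(row[0], k -> new StringBuilder())
                    .append(row[1]).append('\n');
        }

        // Pass 2: one file per loop_id; try-with-resources closes (and flushes) the writer.
        for (Map.Entry<String, StringBuilder> e : byLoopId.entrySet()) {
            Path file = Path.of("MYNAME_" + e.getKey() + ".txt");
            try (Writer w = Files.newBufferedWriter(file)) {
                w.write("header\n");
                w.write(e.getValue().toString());
            }
        }
    }
}
```

The trade-off is the same as in the Groovy case: cleaner resource handling, at the cost of touching the data twice.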
The following script:
f=new File("b.txt")
f.write ""
(10 * 1024 * 1024).times { f.append "b" }
Execution:
$ time groovy Appends.groovy
real 1m9.217s
user 0m45.375s
sys 0m31.902s
Using a BufferedWriter:
w = new File("/tmp/a.txt").newWriter()
(10 * 1024 * 1024).times { w.write "a" }
w.close()  // flush the buffer, otherwise the tail of the file may be lost
Execution:
$ time groovy Writes.groovy
real 0m1.774s
user 0m1.688s
sys 0m0.872s
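The roughly 40x gap comes from the cost of reopening the file: as far as I can tell from the Groovy GDK source, File.append opens and closes a fresh output stream on every call, while newWriter opens one buffered writer that stays open. A self-contained Java sketch of the two access patterns (file names and the count are mine, scaled down so it finishes instantly):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class AppendVsBuffered {
    // One open/close per byte: roughly what calling File.append in a loop amounts to.
    static void slowAppend(Path p, int n) throws IOException {
        for (int i = 0; i < n; i++) {
            try (OutputStream out = new FileOutputStream(p.toFile(), true)) {
                out.write('b');
            }
        }
    }

    // A single stream opened once, written in chunks: the newWriter-style pattern.
    static void fastWrite(Path p, int n) throws IOException {
        try (OutputStream out = Files.newOutputStream(p)) {
            byte[] buf = new byte[8192];
            Arrays.fill(buf, (byte) 'a');
            int left = n;
            while (left > 0) {
                int chunk = Math.min(left, buf.length);
                out.write(buf, 0, chunk);
                left -= chunk;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Files.deleteIfExists(Path.of("b.txt"));  // appending, so start from an empty file
        slowAppend(Path.of("b.txt"), 1000);
        fastWrite(Path.of("a.txt"), 1000);
    }
}
```

Both produce the same bytes on disk; only the number of open/close system calls differs, and that is what dominates the Groovy timings above.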