I'm trying to read a big text file as quickly as possible.
So this is my code:
BufferedReader br = new BufferedReader(new FileReader("C:\\Users\\Documents\\ais_messages1.3.txt"));
String line, aisLines = "", cvsSplitBy = ",";
try {
    while ((line = br.readLine()) != null) {
        if (line.charAt(0) == '!') {
            String[] cols = line.split(cvsSplitBy);
            if (cols.length >= 8) {
                line = "";
                for (int i = 0; i < cols.length - 1; i++) {
                    if (i == cols.length - 2) {
                        line = line + cols[i];
                    } else {
                        line = line + cols[i] + ",";
                    }
                }
                aisLines += line + "\n";
            } else {
                aisLines += line + "\n";
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
Right now it reads 36,890 lines in 14 seconds. I also tried an InputStreamReader:
InputStreamReader isr = new InputStreamReader(new FileInputStream("C:\\Users\\Documents\\ais_messages1.3.txt"));
BufferedReader br = new BufferedReader(isr);
It took the same amount of time. Is there a faster way to read a big text file (100,000 or 1,000,000 lines)?
Answer 0 (score: 3)
Stop trying to build aisLines up as one big String. Use an ArrayList<String> and append the lines to it. On my machine that takes 0.6% of the time of your method (this code processes 1,000,000 simple lines in 0.75 seconds), and it reduces the work needed to process the data later, since it is already split into lines.
BufferedReader br = new BufferedReader(new FileReader("data.txt"));
List<String> aisLines = new ArrayList<String>();
String line, cvsSplitBy = ",";
try {
    while ((line = br.readLine()) != null) {
        if (line.charAt(0) == '!') {
            String[] cols = line.split(cvsSplitBy);
            if (cols.length >= 8) {
                line = "";
                for (int i = 0; i < cols.length - 1; i++) {
                    if (i == cols.length - 2) {
                        line = line + cols[i];
                    } else {
                        line = line + cols[i] + ",";
                    }
                }
                aisLines.add(line);
            } else {
                aisLines.add(line);
            }
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
If you do want one big String at the end (because you are interfacing with someone else's code, or whatever), it will still be faster to convert the ArrayList back into a single String than to build it up the way you were doing.
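For instance, a minimal sketch of that final conversion, assuming the aisLines list built above (String.join is available from Java 8 onward):

    // Join all parsed lines with newlines in one pass instead of repeated concatenation.
    String result = String.join("\n", aisLines);

    // Equivalent explicit version with a StringBuilder:
    StringBuilder sb = new StringBuilder();
    for (String l : aisLines) {
        sb.append(l).append('\n');
    }
    String result2 = sb.toString();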
Answer 1 (score: 1)
Since the most expensive operation is the IO, the most efficient approach is to split the reading and the parsing across threads:
private static void readFast(String filePath) throws IOException, InterruptedException {
    ExecutorService executor = Executors.newWorkStealingPool();
    BufferedReader br = new BufferedReader(new FileReader(filePath));
    List<String> parsed = Collections.synchronizedList(new ArrayList<>());
    try {
        String line;
        while ((line = br.readLine()) != null) {
            final String l = line;
            executor.submit(() -> {
                if (l.charAt(0) == '!') {
                    parsed.add(parse(l));
                }
            });
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    executor.shutdown();
    executor.awaitTermination(1000, TimeUnit.MINUTES);
    String result = parsed.stream().collect(Collectors.joining("\n"));
}
On my computer this runs in 386 ms, versus 10,787 ms without the thread split.
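The parse helper is not shown in the answer; a minimal sketch of what it could look like, reusing the column-dropping logic from the question (the name and behavior here are assumptions, and it needs java.util.Arrays):

    // Hypothetical parse(): drops the last comma-separated column when there are
    // at least 8 columns, mirroring the loop in the question's code.
    private static String parse(String line) {
        String[] cols = line.split(",");
        if (cols.length >= 8) {
            return String.join(",", Arrays.copyOf(cols, cols.length - 1));
        }
        return line;
    }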
Answer 2 (score: 0)
You can use one single thread to read your big csv file and multiple threads to parse all of its lines. My approach is to use the Producer-Consumer pattern with a BlockingQueue.
Producer
Create one producer thread that is only responsible for reading the lines of your csv file and storing them into the BlockingQueue. The producer side does nothing else.
Consumer
Create multiple consumer threads, and pass the same BlockingQueue object to all of them. Implement the time-consuming work in your consumer thread class; a Java sketch of the overall setup follows.
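Since the question is in Java, here is a minimal Java sketch of this producer-consumer setup (the queue capacity, the POISON_PILL sentinel, and the consumer count are illustrative assumptions, not part of the original answer):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumerDemo {
        // Sentinel value telling a consumer to stop; one is queued per consumer.
        private static final String POISON_PILL = "\u0000EOF";

        public static void main(String[] args) throws Exception {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(2000);
            int consumers = 4;

            // Producer: only reads lines and puts them on the queue.
            Thread producer = new Thread(() -> {
                try (BufferedReader br = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = br.readLine()) != null) {
                        queue.put(line);
                    }
                    for (int i = 0; i < consumers; i++) {
                        queue.put(POISON_PILL);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            producer.start();

            // Consumers: take lines off the queue and do the expensive parsing.
            for (int i = 0; i < consumers; i++) {
                new Thread(() -> {
                    try {
                        String line;
                        while (!(line = queue.take()).equals(POISON_PILL)) {
                            // time-consuming work goes here, e.g. line.split(",")
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }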
The Python code below gives you an idea of how to solve the problem rather than a complete solution. I implemented it in Python, and it is much faster than doing the same work with a single thread. The language is not Java, but the theory behind it is the same.
import csv
import gzip
import multiprocessing
import sys
import Queue

QUEUE_SIZE = 2000

# SDP_DELIMITER, process_sdp_row, and file1/file2/file3 below are defined
# elsewhere in the original code.

def produce(file_queue, row_queue):
    # Producer: reads gzipped csv files and pushes processed rows onto the queue.
    while not file_queue.empty():
        src_file = file_queue.get()
        zip_reader = gzip.open(src_file, 'rb')
        try:
            csv_reader = csv.reader(zip_reader, delimiter=SDP_DELIMITER)
            for row in csv_reader:
                new_row = process_sdp_row(row)
                if new_row:
                    row_queue.put(new_row)
        finally:
            zip_reader.close()

def consume(row_queue):
    '''processes all rows; once the queue is empty, break the infinite loop'''
    while True:
        try:
            # take a row from the queue and process it
            pass
        except multiprocessing.TimeoutError as toe:
            print "timeout, all rows have been processed, quit."
            break
        except Queue.Empty:
            print "all rows have been processed, quit."
            break
        except Exception as e:
            print "critical error"
            print e
            break

def main(args):
    file_queue = multiprocessing.Queue()
    row_queue = multiprocessing.Queue(QUEUE_SIZE)
    file_queue.put(file1)
    file_queue.put(file2)
    file_queue.put(file3)

    # starts 4 producer processes
    for i in xrange(4):
        producer = multiprocessing.Process(target=produce, args=(file_queue, row_queue))
        producer.start()

    # starts 1 consumer process
    consumer = multiprocessing.Process(target=consume, args=(row_queue,))
    consumer.start()

    # blocks the main thread until the consumer process finishes
    consumer.join()

    # prints statistics results after the consumer is done
    sys.exit(0)

if __name__ == "__main__":
    main(sys.argv[1:])