Linux: remove CSV duplicates with older dates from a large file (over 100,000 records)

Time: 2014-11-20 07:28:53

Tags: linux sorting csv

We have the following CSV file, containing:

DCR_Path,Direction for Translation,Date & Time
data1,Send for Translation To CTM,Sep 30 2014 03:22
data2,Send for Translation To CTM,Sep 30 2014 02:21
data1,Send for Translation To CTM,Sep 30 2014 03:23
data1,Send for Translation To CTM,Sep 30 2013 03:24
data3,Send for Translation To CTM,Sep 30 2014 03:10
data2,Send for Translation To CTM,Sep 30 2014 02:22
data1,Send for Translation To CTM,Sep 30 2014 02:20

I need to keep the latest entry for each and remove the other duplicates; the output should be:

DCR_Path,Direction for Translation,Date & Time
data1,Send for Translation To CTM,Sep 30 2014 03:23
data2,Send for Translation To CTM,Sep 30 2014 02:22
data3,Send for Translation To CTM,Sep 30 2014 03:10

I tried the command below, but for a large number of records it does not correctly remove the rows with the older dates:

awk -F ',' '{ if (Z) { "(date --date=\""$3"\" +\"%s\")" | getline X ; if (Y[$1] < X) {     Y[$1] = X; C[$1] = $0 } } else { Z = $0 } } END { print Z ; for (V in C) { print C[V] } }' < _YOUR_FILE_

It throws the following error:

awk: (FILENAME=merged-2014-11-12.csv FNR=145116) fatal: cannot open pipe `(date --date="Nov 6 2014 02:53 " +"%s")' (Too many open files)

Here is the location of the file I am working with:

https://drive.google.com/file/d/0B-v5SOZ1TWo-TEFGV05ZZFFwcXM/view?usp=sharing

1 Answer:

Answer 0 (score: 0)

You seem to be hitting some limit on open file descriptors, caused by the huge number of date subprocesses. Perl seems a better candidate here, since it can do everything in a single process.
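For reference, the awk attempt itself can be kept from running out of descriptors by close()-ing the command pipe after each getline. A minimal sketch of that repair, assuming the same input layout and keeping the question's _YOUR_FILE_ placeholder; it still forks one date process per row, so it stays slow on 100k+ records:

awk -F ',' '
NR == 1 { print; next }                        # pass the header through
{
    cmd = "date --date=\"" $3 "\" +%s"         # epoch seconds via date(1)
    cmd | getline X
    close(cmd)                                 # release the pipe descriptor
    if (Y[$1] < X) { Y[$1] = X; C[$1] = $0 }   # keep the newest row per key
}
END { for (V in C) print C[V] }
' _YOUR_FILE_

The single-process Perl version avoids the forking entirely: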

#!/usr/bin/perl -nl
# -n wraps the body in a while(<>) loop over input lines; -l handles newlines.
if ($. == 1) { print; next }                      # pass the header line through
my ($key, $action, $date) = split /,/;
my ($mo, $d, $y, $h, $m) = split / |:/, $date;    # e.g. "Sep 30 2014 03:22"
$mo = {Jan=>0,Feb=>1,Mar=>2,Apr=>3,May=>4,Jun=>5,Jul=>6,Aug=>7,Sep=>8,Oct=>9,Nov=>10,Dec=>11}->{$mo};
# Monotonic ordering key in "minutes", pretending every month has 31 days.
my $m_cmp = $m + 60*$h + 24*60*$d + 31*24*60*$mo + 12*31*24*60*$y;
# Remember only the newest row seen so far for each DCR_Path.
$dcr{$key} = [ $action, $date, $m_cmp ] if !$dcr{$key} || $m_cmp > $dcr{$key}->[2];
END {
    print join(",", $_, @{$dcr{$_}}[0,1] ) foreach (sort keys %dcr);
}
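Saved as, say, dedupe.pl (the name is arbitrary), it can be run over the file from the error message as:

perl dedupe.pl merged-2014-11-12.csv > deduped.csv

Note that $m_cmp is only an ordering key: treating every month as 31 days and every year as 12 such months keeps the comparison monotonic without building real calendar dates, which is what lets a single process rank all the rows.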