I have multiple projects containing varying numbers of CSV files, and I'm using Super CSV's CsvBeanReader to perform the mapping and cell validation. I've created a bean for each CSV file and have overridden equals, hashCode and toString for each bean.
I'm looking for advice on the best "all projects" way to implement duplicate-row identification for the CSV rows: reporting (not deleting) the row number and content of the original row, along with the row numbers and content of all duplicate rows found. Some of the files can reach hundreds of thousands of rows and over a GB in size, so I'd like to minimise the number of reads per file, and I was thinking this could be done while CsvBeanReader already has the file open.
Thanks in advance.
Answer 0 (score: 2)
Given the size of your files and the fact that you want the line content of both the original and the duplicates, I think the best you can do is 2 passes over the file.
If you only wanted the latest line content of a duplicate, you could get away with 1 pass. Tracking the line content of the original plus all of the duplicates in 1 pass means you'd have to store the content of every row - you'd probably run out of memory.
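That 1-pass trade-off can be sketched as follows (an illustrative sketch only - plain strings stand in for beans, and Super CSV is not involved): by storing just a hash and the first-seen row number per record, each duplicate row can be reported with its own content as it is read, but the original row's content is no longer available by the time the duplicate is found.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class OnePassDuplicates {

    /**
     * One pass: remember only hash -> first-seen row number. When a hash
     * repeats, the current (duplicate) row's content is in hand, but the
     * original row's content is gone unless every row had been stored.
     */
    static List<String> report(final List<String> rows) {
        final Map<Integer, Integer> firstRowByHash = new HashMap<>();
        final List<String> report = new ArrayList<>();
        for (int i = 0; i < rows.size(); i++) {
            final int rowNumber = i + 1; // 1-based, like CsvBeanReader
            final Integer first =
                firstRowByHash.putIfAbsent(rows.get(i).hashCode(), rowNumber);
            if (first != null) {
                report.add(String.format("row %d duplicates row %d: %s",
                    rowNumber, first, rows.get(i)));
            }
        }
        return report;
    }
}
```

Memory usage here is one map entry per distinct record, rather than one per row, which is what makes the single pass feasible for very large files.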
My solution assumes that two beans with the same hashCode() are duplicates. If you have to use equals() as well, then it gets more complicated.
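For reference, a bean satisfying that assumption might look like the following minimal sketch (the class and its a/b/c fields are hypothetical, chosen to match the sample CSV header used later; this is not the original code). Deriving hashCode() and equals() from the same fields guarantees that equal rows produce equal hash codes:

```java
import java.util.Objects;

// Hypothetical bean for a CSV with header columns a, b, c.
// CsvBeanReader populates beans via a no-arg constructor and setters.
public class Bean {

    private int a;
    private String b;
    private java.util.Date c;

    public Bean() {
    }

    public void setA(int a) { this.a = a; }
    public void setB(String b) { this.b = b; }
    public void setC(java.util.Date c) { this.c = c; }

    // equals() and hashCode() are derived from all three fields, so two
    // rows with identical values map to beans with identical hash codes.
    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof Bean)) {
            return false;
        }
        final Bean other = (Bean) obj;
        return a == other.a && Objects.equals(b, other.b)
            && Objects.equals(c, other.c);
    }

    @Override
    public int hashCode() {
        return Objects.hash(a, b, c);
    }

    @Override
    public String toString() {
        return String.format("Bean [a=%s, b=%s, c=%s]", a, b, c);
    }
}
```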
Pass 1: Identify the duplicates (recording the row numbers for each duplicate hash)
Pass 2: Report the duplicates
Pass 1: Identify the duplicates
/**
 * Finds the row numbers with duplicate records (using the bean's hashCode()
 * method). The key of the returned map is the hashCode and the value is the
 * Set of duplicate row numbers for that hashcode.
 *
 * @param reader
 *            the reader
 * @param preference
 *            the preferences
 * @param beanClass
 *            the bean class
 * @param processors
 *            the cell processors
 * @return the map of duplicate rows (by hashcode)
 * @throws IOException
 */
private static Map<Integer, Set<Integer>> findDuplicates(
        final Reader reader, final CsvPreference preference,
        final Class<?> beanClass, final CellProcessor[] processors)
        throws IOException {

    ICsvBeanReader beanReader = null;
    try {
        beanReader = new CsvBeanReader(reader, preference);
        final String[] header = beanReader.getHeader(true);

        // the hashes of any duplicates
        final Set<Integer> duplicateHashes = new HashSet<Integer>();

        // the hashes for each row
        final Map<Integer, Set<Integer>> rowNumbersByHash =
            new HashMap<Integer, Set<Integer>>();

        Object o;
        while ((o = beanReader.read(beanClass, header, processors)) != null) {
            final Integer hashCode = o.hashCode();

            // get the row no's for the hash (create if required)
            Set<Integer> rowNumbers = rowNumbersByHash.get(hashCode);
            if (rowNumbers == null) {
                rowNumbers = new HashSet<Integer>();
                rowNumbersByHash.put(hashCode, rowNumbers);
            }

            // add the current row number to its hash
            final Integer rowNumber = beanReader.getRowNumber();
            rowNumbers.add(rowNumber);

            if (rowNumbers.size() == 2) {
                duplicateHashes.add(hashCode);
            }
        }

        // create a new map with just the duplicates
        final Map<Integer, Set<Integer>> duplicateRowNumbersByHash =
            new HashMap<Integer, Set<Integer>>();
        for (Integer duplicateHash : duplicateHashes) {
            duplicateRowNumbersByHash.put(duplicateHash,
                rowNumbersByHash.get(duplicateHash));
        }
        return duplicateRowNumbersByHash;

    } finally {
        if (beanReader != null) {
            beanReader.close();
        }
    }
}
As an alternative to this method, you could use CsvListReader and make use of getUntokenizedRow().hashCode() - this would calculate a hash based on the raw CSV string (it would be a lot faster, but subtle differences in your data might mean it doesn't work).
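To illustrate that caveat with a standalone sketch (plain Java, not Super CSV code): two raw rows can differ as text, and therefore as raw-string hashes, even though the values they parse to are identical - for example a zero-padded number or date.

```java
public class RawVsParsedRows {
    public static void main(String[] args) {
        // Both rows parse to the same values (1, "two", 1 Feb 2013),
        // but the raw text differs, so a raw-string hash would treat
        // them as distinct rows and miss the duplicate.
        String raw1 = "1,two,01/02/2013";
        String raw2 = "01,two,1/2/2013";

        System.out.println(raw1.equals(raw2)); // false: distinct as raw text

        // The parsed first cell, however, is the same int in both rows.
        int a1 = Integer.parseInt(raw1.split(",")[0]);
        int a2 = Integer.parseInt(raw2.split(",")[0]);
        System.out.println(a1 == a2); // true: identical once parsed
    }
}
```

Hashing the parsed bean (as in the method above) normalises away such formatting differences; hashing the untokenized row does not.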
Pass 2: Report the duplicates
This method takes the output of the previous method and uses it to quickly identify duplicate records along with the other rows that each one duplicates.
/**
 * Reports the details of duplicate records.
 *
 * @param reader
 *            the reader
 * @param preference
 *            the preferences
 * @param beanClass
 *            the bean class
 * @param processors
 *            the cell processors
 * @param duplicateRowNumbersByHash
 *            the row numbers of duplicate records
 * @throws IOException
 */
private static void reportDuplicates(final Reader reader,
        final CsvPreference preference, final Class<?> beanClass,
        final CellProcessor[] processors,
        final Map<Integer, Set<Integer>> duplicateRowNumbersByHash)
        throws IOException {

    ICsvBeanReader beanReader = null;
    try {
        beanReader = new CsvBeanReader(reader, preference);
        final String[] header = beanReader.getHeader(true);

        Object o;
        while ((o = beanReader.read(beanClass, header, processors)) != null) {
            final Set<Integer> duplicateRowNumbers =
                duplicateRowNumbersByHash.get(o.hashCode());
            if (duplicateRowNumbers != null) {
                System.out.println(String.format(
                    "row %d is a duplicate of rows %s, line content: %s",
                    beanReader.getRowNumber(),
                    duplicateRowNumbers,
                    beanReader.getUntokenizedRow()));
            }
        }
    } finally {
        if (beanReader != null) {
            beanReader.close();
        }
    }
}
Example
Here's an example of how to use the two methods.
// rows (2,4,8) and (3,7) are duplicates
private static final String CSV = "a,b,c\n" + "1,two,01/02/2013\n"
        + "2,two,01/02/2013\n" + "1,two,01/02/2013\n"
        + "3,three,01/02/2013\n" + "4,four,01/02/2013\n"
        + "2,two,01/02/2013\n" + "1,two,01/02/2013\n";

private static final CellProcessor[] PROCESSORS = { new ParseInt(),
        new NotNull(), new ParseDate("dd/MM/yyyy") };

public static void main(String[] args) throws IOException {

    final Map<Integer, Set<Integer>> duplicateRowNumbersByHash = findDuplicates(
            new StringReader(CSV), CsvPreference.STANDARD_PREFERENCE,
            Bean.class, PROCESSORS);

    reportDuplicates(new StringReader(CSV),
            CsvPreference.STANDARD_PREFERENCE, Bean.class, PROCESSORS,
            duplicateRowNumbersByHash);
}
Output:
row 2 is a duplicate of rows [2, 4, 8], line content: 1,two,01/02/2013
row 3 is a duplicate of rows [3, 7], line content: 2,two,01/02/2013
row 4 is a duplicate of rows [2, 4, 8], line content: 1,two,01/02/2013
row 7 is a duplicate of rows [3, 7], line content: 2,two,01/02/2013
row 8 is a duplicate of rows [2, 4, 8], line content: 1,two,01/02/2013