I have a unit test that simulates reading an S3 bucket from the local file system. To do that, I use Files.walkFileTree and add only certain records to a list.
This is the folder being walked; later I extract data from the .gz files.
$ ls -l /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix/2016-01-01/ | cut -d' ' -f8-
41 Dec 19 18:38 topic-00000-000000000000.gz
144 Dec 19 18:38 topic-00000-000000000000.index.json
48 Dec 19 18:38 topic-00001-000000000000.gz
144 Dec 19 18:38 topic-00001-000000000000.index.json
Here is the mock:
final AmazonS3 client = mock(AmazonS3Client.class);
when(client.listObjects(any(ListObjectsRequest.class))).thenAnswer(new Answer<ObjectListing>() {
    private String key(File file) {
        return file.getAbsolutePath().substring(dir.toAbsolutePath().toString().length() + 1);
    }

    @Override
    public ObjectListing answer(InvocationOnMock invocationOnMock) throws Throwable {
        final ListObjectsRequest req = (ListObjectsRequest) invocationOnMock.getArguments()[0];
        final String bucket = req.getBucketName();
        final String marker = req.getMarker();
        final String prefix = req.getPrefix();
        logger.debug("prefix = {}; marker = {}", prefix, marker);

        final List<File> files = new ArrayList<>();
        Path toWalk = dir;
        if (prefix != null) {
            toWalk = Paths.get(dir.toAbsolutePath().toString(), prefix).toAbsolutePath();
        }
        logger.debug("walking\t{}", toWalk);
        Files.walkFileTree(toWalk, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path toCheck, BasicFileAttributes attrs) throws IOException {
                if (toCheck.startsWith(dir)) {
                    logger.debug("visiting\t{}", toCheck);
                    return FileVisitResult.CONTINUE;
                }
                logger.debug("skipping\t{}", toCheck);
                return FileVisitResult.SKIP_SUBTREE;
            }

            @Override
            public FileVisitResult visitFile(Path path, BasicFileAttributes attrs) throws IOException {
                File f = path.toFile();
                String key = key(f);
                if (marker == null || key.compareTo(marker) > 0) {
                    logger.debug("adding\t{}", f);
                    files.add(f);
                }
                return FileVisitResult.CONTINUE;
            }
        });

        ObjectListing listing = new ObjectListing();
        List<S3ObjectSummary> summaries = new ArrayList<>();
        Integer maxKeys = req.getMaxKeys();
        for (int i = 0; i < maxKeys && i < files.size(); i++) {
            String key = key(files.get(i));
            S3ObjectSummary summary = new S3ObjectSummary();
            summary.setKey(key);
            logger.debug("adding summary for {}", key);
            summaries.add(summary);
            listing.setNextMarker(key);
        }
        listing.setMaxKeys(maxKeys);
        listing.getObjectSummaries().addAll(summaries);
        listing.setTruncated(files.size() > maxKeys);
        return listing;
    }
});
And the log output:
2018-12-19 18:38:13.469 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - prefix = prefix; marker = prefix/2016-01-01
2018-12-19 18:38:13.470 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - walking /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix
2018-12-19 18:38:13.475 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - visiting /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix
2018-12-19 18:38:13.476 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - visiting /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix/2016-01-01
2018-12-19 18:38:13.477 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix/2016-01-01/topic-00000-000000000000.index.json
2018-12-19 18:38:13.477 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix/2016-01-01/topic-00001-000000000000.index.json
2018-12-19 18:38:13.477 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix/2016-01-01/topic-00001-000000000000.gz
2018-12-19 18:38:13.477 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding /var/folders/8g/f_n563nx5yv9mdpnznnxv8gj1xs_mm/T/s3FilesReaderTest1892987110875929052/prefix/2016-01-01/topic-00000-000000000000.gz
2018-12-19 18:38:13.479 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding summary for prefix/2016-01-01/topic-00000-000000000000.index.json
2018-12-19 18:38:13.479 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding summary for prefix/2016-01-01/topic-00001-000000000000.index.json
2018-12-19 18:38:13.479 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding summary for prefix/2016-01-01/topic-00001-000000000000.gz
2018-12-19 18:38:13.479 [main] DEBUG c.s.k.connect.s3.S3FilesReaderTest - adding summary for prefix/2016-01-01/topic-00000-000000000000.gz
2018-12-19 18:38:13.481 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - aws ls bucket/prefix after:prefix/2016-01-01 = [prefix/2016-01-01/topic-00000-000000000000.index.json, prefix/2016-01-01/topic-00001-000000000000.index.json, prefix/2016-01-01/topic-00001-000000000000.gz, prefix/2016-01-01/topic-00000-000000000000.gz]
2018-12-19 18:38:13.481 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Skipping non-data chunk prefix/2016-01-01/topic-00000-000000000000.index.json
2018-12-19 18:38:13.481 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Skipping non-data chunk prefix/2016-01-01/topic-00001-000000000000.index.json
2018-12-19 18:38:13.484 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Adding chunk-key prefix/2016-01-01/topic-00001-000000000000.gz
2018-12-19 18:38:13.484 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Adding chunk-key prefix/2016-01-01/topic-00000-000000000000.gz
2018-12-19 18:38:13.485 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Next Chunks: [prefix/2016-01-01/topic-00001-000000000000.gz, prefix/2016-01-01/topic-00000-000000000000.gz]
2018-12-19 18:38:13.485 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Now reading from prefix/2016-01-01/topic-00001-000000000000.gz
2018-12-19 18:38:13.513 [main] DEBUG c.s.k.c.s3.source.S3FilesReader - Now reading from prefix/2016-01-01/topic-00000-000000000000.gz
All files are read correctly (one value for key0, two for key1), but my unit test expects them to be read in ascending order: every file beginning with prefix/2016-01-01/topic-00000 should be read before prefix/2016-01-01/topic-00001. Note in particular the adding summary lines.
java.lang.AssertionError:
Expected :[key0-0=value0-0, key1-0=value1-0, key1-1=value1-1]
Actual :[key1-0=value1-0, key1-1=value1-1, key0-0=value0-0]
Other than inserting into a sorted collection instead of a plain list, what options are there to meet this requirement, so that files are read in the order a plain ls gives for a single folder?
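For context, the order the test expects is plain lexicographic String order, which is also the order S3's ListObjects API returns keys in. A minimal sketch of the expected ordering (class and method names are mine, not from the test; the two keys are taken from the failing run):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class KeyOrderDemo {
    // Sort keys the way S3 lists them: natural String (lexicographic) order.
    static List<String> s3Order(List<String> keys) {
        List<String> sorted = new ArrayList<>(keys);
        Collections.sort(sorted);
        return sorted;
    }

    public static void main(String[] args) {
        // The two keys from the failing run, in the order walkFileTree produced them.
        List<String> walked = Arrays.asList(
                "prefix/2016-01-01/topic-00001-000000000000.gz",
                "prefix/2016-01-01/topic-00000-000000000000.gz");
        System.out.println(s3Order(walked)); // topic-00000 before topic-00001
    }
}
```

Files.walkFileTree makes no ordering guarantee for entries within a directory, so the mock has to impose this order itself.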
Answer 0 (score: 0)
For now, solved this by using a TreeSet per folder, cleared before and after each folder is scanned:
Path toWalk = dir;
if (prefix != null) { // Prefix is some path after the parent dir. It's an S3 concept
    toWalk = Paths.get(dir.toAbsolutePath().toString(), prefix).toAbsolutePath();
}

// Absolute paths should be sorted lexicographically for all files
final Set<File> files = new TreeSet<>(Comparator.comparing(File::getAbsolutePath));
Files.walkFileTree(toWalk, new SimpleFileVisitor<Path>() {
    // Absolute paths should be sorted lexicographically for files in folders
    private Set<File> accumulator = new TreeSet<>(Comparator.comparing(File::getAbsolutePath));

    @Override
    public FileVisitResult preVisitDirectory(Path toCheck, BasicFileAttributes attrs) throws IOException {
        accumulator.clear(); // Start fresh
        if (toCheck.startsWith(dir)) {
            logger.debug("visiting\t{}", toCheck);
            return FileVisitResult.CONTINUE;
        }
        logger.debug("skipping\t{}", toCheck);
        return FileVisitResult.SKIP_SUBTREE;
    }

    @Override
    public FileVisitResult visitFile(Path path, BasicFileAttributes attrs) throws IOException {
        File f = path.toFile();
        String key = key(f);
        if (marker == null || key.compareTo(marker) > 0) {
            logger.debug("adding\t{}", f);
            accumulator.add(f); // accumulate
        }
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult postVisitDirectory(Path dir, IOException e) throws IOException {
        files.addAll(accumulator); // dump results (already sorted)
        accumulator.clear(); // start fresh
        return super.postVisitDirectory(dir, e);
    }
});
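A standalone sketch of the core idea above: a TreeSet keyed on the absolute path iterates in path order no matter what order files are inserted in (the class name and paths here are invented for illustration):

```java
import java.io.File;
import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;

public class TreeSetOrderDemo {
    // Return the path of the first file in absolute-path order.
    static String firstPath(String... paths) {
        Set<File> files = new TreeSet<>(Comparator.comparing(File::getAbsolutePath));
        for (String p : paths) {
            files.add(new File(p));
        }
        return files.iterator().next().getPath();
    }

    public static void main(String[] args) {
        // Inserted in descending order; iteration still yields topic-00000 first.
        System.out.println(firstPath(
                "/tmp/prefix/2016-01-01/topic-00001-000000000000.gz",
                "/tmp/prefix/2016-01-01/topic-00000-000000000000.gz"));
    }
}
```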
Answer 1 (score: 0)
One option is to use a stream:
try (Stream<Path> tree = Files.walk(toWalk)) {
    tree.filter(p -> !Files.isDirectory(p) && p.startsWith(dir))
        .sorted()
        .forEachOrdered(path -> {
            File f = path.toFile();
            String key = key(f);
            // etc.
        });
}
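A self-contained, runnable version of this stream approach, exercised against a throwaway temp directory (class, method, and file names are invented for the demo, not from the question):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WalkSortedDemo {
    // Walk root and return non-directory file names in sorted order.
    static List<String> sortedFileNames(Path root) {
        try (Stream<Path> tree = Files.walk(root)) {
            return tree.filter(p -> !Files.isDirectory(p))
                       .sorted() // Path is Comparable; order is lexicographic
                       .map(p -> p.getFileName().toString())
                       .collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Build a throwaway directory with files created in "wrong" order, then list it.
    static List<String> demo() {
        try {
            Path root = Files.createTempDirectory("walkSortedDemo");
            Files.createFile(root.resolve("topic-00001.gz"));
            Files.createFile(root.resolve("topic-00000.gz"));
            return sortedFileNames(root);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [topic-00000.gz, topic-00001.gz]
    }
}
```

Note that unlike the TreeSet approach, sorted() here orders the entire walk globally by full path, which coincides with per-folder ordering when all files share one parent.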