A parallel version of Files.walkFileTree (Java or Scala)

Asked: 2013-07-18 19:59:53

Tags: java multithreading scala io file-processing

Does anyone know of a parallel equivalent of Java's Files.walkFileTree, or anything similar? It can be a Java or Scala library.

3 Answers:

Answer 0 (score: 8)

As others have pointed out, walking a file tree is almost certainly IO-bound rather than CPU-bound, so the benefit of a multithreaded file-tree walk is questionable. But if you really want to, you can roll your own with a ForkJoinPool or similar.

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class MultiThreadedFileTreeWalk {
    private static class RecursiveWalk extends RecursiveAction {
        private static final long serialVersionUID = 6913234076030245489L;
        private final Path dir;

        public RecursiveWalk(Path dir) {
            this.dir = dir;
        }

        @Override
        protected void compute() {
            final List<RecursiveWalk> walks = new ArrayList<>();
            try {
                Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
                    @Override
                    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                        if (!dir.equals(RecursiveWalk.this.dir)) {
                            // fork a new task for each subdirectory and
                            // prune it from this thread's walk
                            RecursiveWalk w = new RecursiveWalk(dir);
                            w.fork();
                            walks.add(w);
                            return FileVisitResult.SKIP_SUBTREE;
                        } else {
                            // the directory this task started from: descend normally
                            return FileVisitResult.CONTINUE;
                        }
                    }

                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                        System.out.println(file + "\t" + Thread.currentThread());
                        return FileVisitResult.CONTINUE;
                    }
                });
            } catch (IOException e) {
                e.printStackTrace();
            }

            // wait for all forked subdirectory walks to finish
            for (RecursiveWalk w : walks) {
                w.join();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        RecursiveWalk w = new RecursiveWalk(Paths.get(".").toRealPath());
        ForkJoinPool p = new ForkJoinPool();
        p.invoke(w);
    }
}

This example walks each directory on a separate thread. Here is a tutorial on Java 7's fork/join library.
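If you want the walk to compute a result instead of printing, the same pattern works with RecursiveTask. A minimal sketch that counts the files under a directory (the class names FileCountWalk and CountTask are made up for illustration):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class FileCountWalk {
    // Counts non-directory entries under 'dir', forking one subtask per subdirectory.
    static class CountTask extends RecursiveTask<Long> {
        private final Path dir;
        CountTask(Path dir) { this.dir = dir; }

        @Override
        protected Long compute() {
            long count = 0;
            List<CountTask> subtasks = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
                for (Path p : ds) {
                    if (Files.isDirectory(p)) {
                        CountTask t = new CountTask(p);
                        t.fork();              // walk the subtree in parallel
                        subtasks.add(t);
                    } else {
                        count++;
                    }
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            for (CountTask t : subtasks) {
                count += t.join();             // gather subtree counts
            }
            return count;
        }
    }

    public static long countFiles(Path root) {
        return new ForkJoinPool().invoke(new CountTask(root));
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(countFiles(root.toRealPath()));
    }
}
```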

Answer 1 (score: 4)

This exercise is neither as short as the Scala answer nor as simple as the Java answer.

The idea here is to kick off parallel walks with one thread per device.

The walkers run on ForkJoinPool threads, so when they spawn a future for each path test, those futures are forked tasks on the pool. The directory test uses managed blocking while reading the directory, looking for a file.
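Java's counterpart to that managed blocking is ForkJoinPool.ManagedBlocker: the pool can compensate with an extra worker while a task is stuck in I/O. A minimal sketch (the ManagedListing and DirectoryLister names are invented here) wrapping a blocking directory read:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class ManagedListing {
    // Wraps a blocking directory read so a ForkJoinPool can compensate
    // with an extra worker while this thread is stuck in I/O.
    static class DirectoryLister implements ForkJoinPool.ManagedBlocker {
        private final Path dir;
        private List<Path> entries;   // result, filled in by block()

        DirectoryLister(Path dir) { this.dir = dir; }

        @Override
        public boolean block() {
            List<Path> result = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
                for (Path p : ds) result.add(p);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            entries = result;
            return true;              // done blocking
        }

        @Override
        public boolean isReleasable() {
            return entries != null;   // already have the listing
        }
    }

    public static List<Path> list(Path dir) throws InterruptedException {
        DirectoryLister lister = new DirectoryLister(dir);
        ForkJoinPool.managedBlock(lister);   // safe to call inside pool tasks
        return lister.entries;
    }
}
```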

A promise is completed with the result as soon as a path-test future succeeds. (There is no mechanism here for detecting that all walks finished empty-handed.)
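The promise's first-writer-wins behavior can be mimicked in Java with a shared CompletableFuture, where losing tasks' complete calls become no-ops. A rough sketch (FirstMatch is a made-up name); like the answer here, it has no mechanism for noticing that every task finished without a match:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Predicate;

public class FirstMatch {
    // Races a predicate over many inputs; the first input that matches
    // completes the shared future, and any later completions are ignored.
    public static <T> CompletableFuture<T> firstMatch(List<T> inputs,
                                                      Predicate<T> test) {
        CompletableFuture<T> result = new CompletableFuture<>();
        for (T in : inputs) {
            CompletableFuture.runAsync(() -> {
                try {
                    if (test.test(in)) {
                        result.complete(in);          // first writer wins
                    }
                } catch (RuntimeException e) {
                    result.completeExceptionally(e);  // mirrors tryFailure
                }
            });
        }
        return result;    // may never complete if nothing matches
    }
}
```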

A more interesting test would involve reading zip files, since the decompression would burn some CPU.

I wonder if paulp will do something clever with deep listing.

import util._
import collection.JavaConverters._
import concurrent.{ TimeoutException => Timeout, _ }
import concurrent.duration._
import ExecutionContext.Implicits._
import java.io.IOException
import java.nio.file.{ FileVisitResult => Result, _ }
import Result.{ CONTINUE => Go, SKIP_SUBTREE => Prune, TERMINATE => Stop }
import java.nio.file.attribute.{ BasicFileAttributes => BFA }

object Test extends App {
  val fileSystem = FileSystems.getDefault
  val starts = (if (args.nonEmpty) args.toList else mounts) map (s => (fileSystem getPath s))
  val p = Promise[(Path, BFA)]

  def pathTest(path: Path, attrs: BFA) =
    if (attrs.isDirectory) {
      val entries = blocking {
        val res = Files newDirectoryStream path
        try res.asScala.toList finally res.close()
      }
      List("hello","world") forall (n => entries exists (_.getFileName.toString == n))
    } else {
      path.getFileName.toString == "enough"
    }

  def visitor(root: Path) = new SimpleFileVisitor[Path] {
    def stopOrGo = if (p.isCompleted) Stop else Go
    def visiting(path: Path, attrs: BFA) = {
      future { pathTest(path, attrs) } onComplete {
        case Success(true) => p trySuccess (path, attrs)
        case Failure(e)    => p tryFailure e
        case _             =>
      }
      stopOrGo
    }
    override def preVisitDirectory(dir: Path, attrs: BFA) = (
      if ((starts contains dir) && dir != root) Prune
      else visiting(dir, attrs)
    )
    override def postVisitDirectory(dir: Path, e: IOException) = {
      if (e != null) p tryFailure e
      stopOrGo
    }
    override def visitFile(file: Path, attrs: BFA) = visiting(file, attrs)
  }
  //def walk(p: Path): Path = Files walkFileTree (p, Set().asJava, 10, visitor(p))
  def walk(p: Path): Path = Files walkFileTree (p, visitor(p))

  def show(store: FileStore) = {
    val ttl   = store.getTotalSpace / 1024
    val used  = (store.getTotalSpace - store.getUnallocatedSpace) / 1024
    val avail = store.getUsableSpace / 1024
    Console println f"$store%-40s $ttl%12d $used%12d $avail%12d"
    store
  }
  def mounts = {
    val devs = for {
      store <- fileSystem.getFileStores.asScala
      if store.name startsWith "/dev/"
      if List("ext4","fuseblk") contains store.`type`
    } yield show(store)
    val devstr = """(\S+) \((.*)\)""".r
    (devs.toList map (_.toString match {
      case devstr(name, dev) if devs.toList exists (_.name == dev) => Some(name)
      case s => Console println s"Bad dev str '$s', skipping" ; None
    })).flatten
  }

  starts foreach (f => future (walk(f)))

  Try (Await result (p.future, 20.seconds)) match {
    case Success((name, attrs)) => Console println s"Result: ${if (attrs.isDirectory) "dir" else "file"} $name"
    case Failure(e: Timeout)    => Console println s"No result: timed out."
    case Failure(t)             => Console println s"No result: $t."
  }
}

Answer 2 (score: 3)

Let's assume that executing a callback on each file is sufficient.

This code will not handle cycles in the file system; for that you would need a registry of visited paths (e.g. a java.util.concurrent.ConcurrentHashMap). Various improvements are possible, such as reporting exceptions instead of silently ignoring them.

import java.io.File
import scala.util._
// Recursively walk f, invoking callback on files that satisfy pick;
// subdirectories are processed in parallel via .par.
// Errors (e.g. unreadable directories) are silently swallowed by the Try.
def walk(f: File, callback: File => Unit, pick: File => Boolean = _ => true): Unit = {
  Try {
    val (dirs, fs) = f.listFiles.partition(_.isDirectory)
    fs.filter(pick).foreach(callback)
    dirs.par.foreach(d => walk(d, callback, pick))
  }
}
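The visited-path registry suggested above might look like this in Java: a concurrent set of real (symlink-resolved) paths, checked before descending. A sequential sketch (the CycleSafeWalk name is invented); the set is concurrent so the same guard would also work under parallel descent:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class CycleSafeWalk {
    // Visits every file under 'root', skipping any directory whose real
    // (symlink-resolved) path has already been entered, so symlink cycles
    // cannot cause infinite recursion.
    public static void walk(Path root, Consumer<Path> callback) throws IOException {
        walk(root, callback, ConcurrentHashMap.newKeySet());
    }

    private static void walk(Path dir, Consumer<Path> callback,
                             Set<Path> visited) throws IOException {
        if (!visited.add(dir.toRealPath())) {
            return;                    // already walked this directory
        }
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                if (Files.isDirectory(p)) {
                    walk(p, callback, visited);
                } else {
                    callback.accept(p);
                }
            }
        }
    }
}
```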

Collecting the files with a fold instead of foreach is not terribly hard, but I leave it as an exercise for the reader. (A ConcurrentLinkedQueue is probably fast enough to just accept them in the callback anyway, unless your threads are really slow and your file system is awesome.)
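Collecting via the callback, per the parenthetical above, could be sketched in Java like this (names invented; subdirectories are descended with parallel streams, roughly mirroring the Scala .par):

```java
import java.io.File;
import java.util.Arrays;
import java.util.concurrent.ConcurrentLinkedQueue;

public class CollectingWalk {
    // Walks 'f' recursively, descending into subdirectories via parallel
    // streams, and pushes every plain file onto a thread-safe queue.
    public static ConcurrentLinkedQueue<File> collect(File f) {
        ConcurrentLinkedQueue<File> sink = new ConcurrentLinkedQueue<>();
        walk(f, sink);
        return sink;
    }

    private static void walk(File f, ConcurrentLinkedQueue<File> sink) {
        File[] children = f.listFiles();
        if (children == null) return;   // not a directory, or unreadable
        Arrays.stream(children).parallel().forEach(c -> {
            if (c.isDirectory()) walk(c, sink);
            else sink.add(c);           // the queue is the "callback" sink
        });
    }
}
```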