If I run ls folder | head in a directory with a lot of files, it executes about 50 times faster than ls folder | tail. Does head stop ls from running to completion once it has enough (10) lines?
I couldn't find the answer to this anywhere because "pipe to head" gives me tons of unrelated results on Google or here.
If the answer is no, is there a more efficient way to list only some of the files instead of running ls completely and cutting the output with head?
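For reference, a minimal sketch of how the comparison can be reproduced (assuming a hypothetical directory named folder with many entries; actual timings depend on the filesystem and cache state):

time ls folder | head > /dev/null   # typically finishes quickly
time ls folder | tail > /dev/null   # has to wait for the full listing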
Answer 0 (score: 12)
Once head has read enough lines from its stdin (which is the stdout of ls), it exits and thereby closes its end of the pipe. When that happens, the pipe is considered broken, and the next time ls writes to it, ls receives a SIGPIPE signal. By default it terminates as well, long before producing all the output it normally would.
In the case of tail, it has to wait for ls to terminate in order to know which X lines were the last ones.
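One way to observe this (a sketch, assuming bash and its PIPESTATUS array; with only a few files ls may finish before head exits, so the effect shows up reliably only with many entries):

ls folder | head > /dev/null
echo "${PIPESTATUS[@]}"   # often prints "141 0": 141 = 128 + 13 (SIGPIPE) for ls, 0 for head
                          # prints "0 0" instead if ls happened to finish before head closed the pipe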
Answer 1 (score: 0)
This would be more efficient, as no new processes are created:
a=( folder/* )        # shell glob, expanded and sorted by the shell itself
echo "${a[@]:0:10}"   # first 10 entries, matching head's default of 10 lines
In some cases it may even be more convenient, since each array element already includes the folder/ prefix, so you don't have to prepend it yourself if you want to do something with the result.
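As a usage sketch (again assuming the hypothetical folder directory), printf keeps one name per line, which is closer to what ls folder | head prints, and the slice can be looped over directly:

a=( folder/* )                 # same glob as above
printf '%s\n' "${a[@]:0:10}"   # one entry per line, similar to ls folder | head
for f in "${a[@]:0:10}"; do    # each $f already includes the folder/ prefix
    : do something with "$f"   # placeholder for real work on the path
done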