Can I pipe a web page's source code from curl into Perl?

Asked: 2011-12-06 14:01:24

Tags: perl curl

I'm parsing the source code of many web pages (one entire, huge site with thousands of pages). Now I want to search the content in Perl and count how often a keyword occurs.

To fetch the pages I use curl and pipe the output to "grep -c", which doesn't work, so I'd like to use Perl instead. Can Perl also be used to fetch the pages entirely?

E.g.:

cat RawJSpiderOutput.txt | grep parsed | awk -F " " '{print $2}' | xargs -I replaceStr curl replaceStr?myPara=en | perl -lne '$c++while/myKeywordToSearchFor/g;END{print$c}' 

Explanation: In the text file above I have usable and unusable URLs. With "grep parsed" I fetch the usable URLs. With awk I select the second column, which contains just the usable URL. So far so good. Now to the question: with curl I fetch the source code (appending some parameters as well) and pipe the entire source of each page to Perl in order to count the occurrences of "myKeywordToSearchFor". I'd love to do all of this in Perl only, if possible.

Thanks!

2 Answers:

Answer 0 (score: 3):

Using Perl only (untested):

use strict;
use warnings;

use File::Fetch;

my $count = 0;
open my $SPIDER, '<', 'RawJSpiderOutput.txt' or die $!;
while (<$SPIDER>) {
    chomp;
    if (/parsed/) {                  # keep only the usable URLs
        my $url = (split)[1];        # second whitespace-separated column
        $url .= '?myPara=en';
        my $ff = File::Fetch->new(uri => $url);
        $ff->fetch or die $ff->error;
        my $fetched = $ff->output_file;
        open my $FETCHED, '<', $fetched or die $!;
        while (<$FETCHED>) {
            $count++ while /myKeywordToSearchFor/g;   # count every occurrence, not one per line
        }
        close $FETCHED;
        unlink $fetched;
    }
}
close $SPIDER;
print "$count\n";

Answer 1 (score: 0):

Try something more like this:

   perl -e 'while (<>) { my @words = split " "; for my $word (@words) { if ($word =~ /myKeyword/) { ++$c } } } print "$c\n"'

   while (<>)                     # as long as we're getting input (into "$_")
   { my @words = split " ";       # split $_ (implicit) on whitespace, so we examine each word
     for my $word (@words)        #  (and don't miss two keywords on one line)
     { if ($word =~ /myKeyword/)  # whenever a word matches,
       { ++$c } } }               # increment the counter (auto-vivified)
   print "$c\n"                   # and after end of input is reached, print the counter

Or, spelled strict-ly, like:

   use strict;
   my $count = 0;
   while (my $line = <STDIN>)     # except that <> is actually more magical than this
   { my @words = split ' ' => $line;
     for my $word (@words)
     { ++$count if $word =~ /myKeyword/; } }   # count only the words that match
   print "$count\n";