How to iterate over 300 pages with a Perl::Mechanize parser?

Asked: 2018-02-16 09:13:55

Tags: perl parsing mechanize

I wrote a small parser that extracts data from a page.

use strict; 
use warnings FATAL => qw#all#; 
use LWP::UserAgent; 
use HTML::TreeBuilder::XPath; 
use Data::Dumper; 

my $handler_relurl      = sub { q#https://europa.eu# . $_[0] }; 
my $handler_trim        = sub { $_[0] =~ s#^\s*(.+?)\s*$#$1#r }; 
my $handler_val         = sub { $_[0] =~ s#^[^:]+:\s*##r }; 
my $handler_split       = sub { [ split $_[0], $_[1] ] }; 
my $handler_split_colon = sub { $handler_split->( qr#; #, $_[0] ) }; 
my $handler_split_comma = sub { $handler_split->( qr#, #, $_[0] ) }; 

my $conf = 
{ 
    url      => q#https://europa.eu/youth/volunteering/evs-organisation_en#, 
    parent   => q#//div[@class="vp ey_block block-is-flex"]#, 
    children => 
    { 
        internal_url => [ q#//a/@href#, [ $handler_relurl ] ], 
        external_url => [ q#//i[@class="fa fa-external-link fa-lg"]/parent::p//a/@href#, [ $handler_trim ] ], 
        title        => [ q#//h4# ], 
        topics       => [ q#//div[@class="org_cord"]#, [ $handler_val, $handler_split_colon ] ], 
        location     => [ q#//i[@class="fa fa-location-arrow fa-lg"]/parent::p#, [ $handler_trim ] ], 
        hand         => [ q#//i[@class="fa fa-hand-o-right fa-lg"]/parent::p#, [ $handler_trim, $handler_split_comma ] ], 
        pic_number   => [ q#//p[contains(.,'PIC no')]#, [ $handler_val ] ], 
    } 
}; 

print Dumper browse( $conf ); 

sub browse 
{ 
    my $conf = shift; 

    my $ref = [ ]; 

    my $lwp_useragent = LWP::UserAgent->new( agent => q#IE 6#, timeout => 10 ); 
    my $response = $lwp_useragent->get( $conf->{url} ); 
    die $response->status_line unless $response->is_success; 
    my $content = $response->decoded_content; 

    my $html_treebuilder_xpath = HTML::TreeBuilder::XPath->new_from_content( $content ); 
    my @nodes = $html_treebuilder_xpath->findnodes( $conf->{parent} ); 
    for my $node ( @nodes ) 
    { 
        push @$ref, { };  

        while ( my ( $key, $val ) = each %{$conf->{children}} ) 
        { 
            my $xpath    = $val->[0]; 
            my $handlers = $val->[1] // [ ]; 

            $val = ($node->findvalues( qq#.$xpath# ))[0] // next; 
            $val = $_->( $val ) for @$handlers; 
            $ref->[-1]->{$key} = $val; 
        } 
    } 

    return $ref; 
}
At first glance, the problem of scraping page after page can be tackled in several ways:

There is pagination at the bottom of each page, for example:

http://europa.eu/youth/volunteering/evs-organisation_en?country=&topic=&field_eyp_vp_accreditation_type=All&town=&name=&pic=&eiref=&inclusion_topic=&field_eyp_vp_feweropp_additional_mentoring_1=&field_eyp_vp_feweropp_additional_physical_environment_1=&field_eyp_vp_feweropp_additional_other_support_1=&field_eyp_vp_feweropp_other_support_text=&&page=5

http://europa.eu/youth/volunteering/evs-organisation_en?country=&topic=&field_eyp_vp_accreditation_type=All&town=&name=&pic=&eiref=&inclusion_topic=&field_eyp_vp_feweropp_additional_mentoring_1=&field_eyp_vp_feweropp_additional_physical_environment_1=&field_eyp_vp_feweropp_additional_other_support_1=&field_eyp_vp_feweropp_other_support_text=&&page=6

http://europa.eu/youth/volunteering/evs-organisation_en?country=&topic=&field_eyp_vp_accreditation_type=All&town=&name=&pic=&eiref=&inclusion_topic=&field_eyp_vp_feweropp_additional_mentoring_1=&field_eyp_vp_feweropp_additional_physical_environment_1=&field_eyp_vp_feweropp_additional_other_support_1=&field_eyp_vp_feweropp_other_support_text=&&page=7

We could take this URL as a base, and if we had an array from which to load the URLs that need to be visited, we would reach every page...

Note: there are more than 6000 results, and each page shows 21 small entries, each representing one record, so there are roughly 305 pages to visit. We could simply increment the page parameter (as shown above) and count up to 305.
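
For illustration, such an array of page URLs could be built up front; a minimal sketch, assuming the page parameter alone is enough to address a result page (the other query parameters are left out here):

my $base = 'https://europa.eu/youth/volunteering/evs-organisation_en';
my @urls = map { "$base?page=$_" } 0 .. 304;   # one URL per result page, pages numbered from 0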

Hardcoding the total number of pages is not practical, since it can vary. We could instead:

- extract the number of results from the first page, divide it by the results per page (21) and round it down, or
- extract the URL from the "last" link at the bottom of the page, create a URI object and read the page number from its query string (a sketch of this option follows below).
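
A minimal sketch of the second option, reusing HTML::TreeBuilder::XPath from the question plus the URI and URI::QueryParam modules; the XPath for the "last" pager link is an assumption and has to be checked against the real markup:

use URI;
use URI::QueryParam;

sub last_page_number
{
    my ( $tree ) = @_;   # an HTML::TreeBuilder::XPath object built from the first page

    # the class name "pager-last" is an assumption -- verify it in the page source
    my ( $href ) = $tree->findvalues( q#//li[contains(@class,"pager-last")]/a/@href# );
    return unless defined $href;

    my $uri = URI->new_abs( $href, q#https://europa.eu# );
    return $uri->query_param( 'page' );   # e.g. 304 when pages are numbered from 0
}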

Now I think I have to loop over all the pages.

my $url_pattern = 'https://europa.eu/youth/volunteering/evs-organisation_en?page=%s'; 

for my $page ( 0 .. $last ) 
{ 
    my $url = sprintf $url_pattern, $page; 

    ... 
}
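
One possible way to fill in that loop body is to reuse browse() from above and only swap the URL in $conf per page; a sketch, assuming $last has been determined by one of the two approaches mentioned earlier:

my @all_records;
for my $page ( 0 .. $last )
{
    $conf->{url} = sprintf $url_pattern, $page;    # point the existing config at the next page
    push @all_records, @{ browse( $conf ) };       # collect the records of this page
}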

Or I could try to fold the pagination into $conf, perhaps as an iterator that fetches the next node on each call...
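
A rough sketch of that iterator idea, as a closure that fetches and parses the next page on each call (make_page_iterator is a made-up name; it again reuses browse()):

sub make_page_iterator
{
    my ( $conf, $url_pattern, $last ) = @_;
    my $page = 0;

    return sub
    {
        return if $page > $last;                         # no pages left
        $conf->{url} = sprintf $url_pattern, $page++;    # advance to the next page
        return browse( $conf );                          # arrayref with this page's records
    };
}

my $next_page = make_page_iterator( $conf, $url_pattern, $last );
while ( my $records = $next_page->() ) { ... }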

1 Answer:

Answer 0 (score: 2)

After parsing each page, check whether a next › link exists at the bottom. When you reach page 292 there is no further page, so you are done and can exit the loop, e.g. with last.
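
A minimal, self-contained sketch of that approach, using the same modules as the question; the XPath for the "next" pager link (the class name "pager-next") is an assumption and needs to be verified against the real page:

use strict;
use warnings;
use LWP::UserAgent;
use HTML::TreeBuilder::XPath;

my $url_pattern = 'https://europa.eu/youth/volunteering/evs-organisation_en?page=%s';
my $ua          = LWP::UserAgent->new( agent => q#IE 6#, timeout => 10 );

my $page = 0;
while ( 1 )
{
    my $response = $ua->get( sprintf $url_pattern, $page );
    die $response->status_line unless $response->is_success;

    my $tree = HTML::TreeBuilder::XPath->new_from_content( $response->decoded_content );

    # ... extract the 21 records of this page here, as browse() does above ...

    # stop as soon as the pager at the bottom no longer offers a "next" link
    my @next = $tree->findnodes( q#//li[contains(@class,"pager-next")]/a# );
    last unless @next;

    $page++;
}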