How do I scrape the following content with Web::Scraper?

Asked: 2015-09-16 01:00:19

Tags: html perl dom web-scraping scrape

This question differs from How to Parse this HTML with Web::Scraper?

I have to scrape a page with Web::Scraper, and the HTML can vary slightly. Sometimes it may be

<div>
  <p>
    <strong>TITLE1</strong>
    <br>
    DESCRIPTION1
  </p>
  <p>
    <strong>TITLE2</strong>
    <br>
    DESCRIPTION2
  </p>
  <p>
    <strong>TITLE3</strong>
    <br>
    DESCRIPTION3
  </p>
</div>

I use the following Web::Scraper code:

my $test = scraper {
    process 'div p', 'test[]' => scraper {
        process 'p strong', 'name' => 'TEXT';
        process '//p/text()', 'desc' => [ 'TEXT', sub { s/^\s+|\s+$//g } ];
    };
};
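
For reference, a minimal way to run the scraper above (my own illustration, not part of the original question; $html is assumed to hold the markup shown):

use Data::Dumper;

# scrape() also accepts a URI object; here a reference to the HTML string is passed
my $res = $test->scrape(\$html);
print Dumper($res->{test});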

But sometimes it contains the following HTML (note that the title/description pairs are no longer separated into individual <p> elements):

<div>
  <p>
    <strong>TITLE1</strong>
    <br>
    DESCRIPTION1
    <strong>TITLE2</strong>
    <br>
    DESCRIPTION2
    <strong>TITLE3</strong>
    <br>
    DESCRIPTION3
  </p>
</div>

How can I scrape the HTML above into

test => [
  { desc => "DESCRIPTION1 ", name => "TITLE1" },
  { desc => "DESCRIPTION2 ", name => "TITLE2" },
  { desc => "DESCRIPTION3 ", name => "TITLE3" },
]

I have tried modifying the code above, but I cannot work out which part of the HTML to use to "split" the individual title and description pairs.

2 Answers:

Answer 0 (score: 1):

I have never used Web::Scraper, but its behaviour seems to be either broken or just strange.

For both cases, the following XPath expressions should work more or less as-is (with minor adjustments):

//div//strong/text()
//div//br/following-sibling::text()

Plugging these into xmllint (libxml2):

/tmp >xmllint --html --shell a.html
/ > cat /
 -------
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<div>
  <p>
    <strong>TITLE1</strong>
    <br>
    DESCRIPTION1
  </p>
  <p>
    <strong>TITLE2</strong>
    <br>
    DESCRIPTION2
  </p>
  <p>
    <strong>TITLE3</strong>
    <br>
    DESCRIPTION3
  </p>
</div>
</body></html>

/ > xpath //div//strong/text()
Object is a Node Set :
Set contains 3 nodes:
1  TEXT
    content=TITLE1
2  TEXT
    content=TITLE2
3  TEXT
    content=TITLE3
/ > xpath //div//br/following-sibling::text()
Object is a Node Set :
Set contains 3 nodes:
1  TEXT
    content=     DESCRIPTION1
2  TEXT
    content=     DESCRIPTION2
3  TEXT
    content=     DESCRIPTION3

/ > load b.html
/ > cat /
 -------
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body><div>
    <p>
    <strong>TITLE1</strong>
    <br>
    DESCRIPTION1
    <strong>TITLE2</strong>
    <br>
    DESCRIPTION2
    <strong>TITLE3</strong>
    <br>
    DESCRIPTION3
    </p>
</div></body></html>

/ > xpath //div//strong/text()
Object is a Node Set :
Set contains 3 nodes:
1  TEXT
    content=TITLE1
2  TEXT
    content=TITLE2
3  TEXT
    content=TITLE3
/ > xpath //div//br/following-sibling::text()
Object is a Node Set :
Set contains 5 nodes:
1  TEXT
    content=  DESCRIPTION1
2  TEXT
    content=
3  TEXT
    content=  DESCRIPTION2
4  TEXT
    content=
5  TEXT
    content=  DESCRIPTION3

When you plug various versions of these into Web::Scraper, they do not work correctly:

process '//div', 'test[]' => scraper {
  process '//strong', 'name' => 'TEXT';
  process '//br/following-sibling::text()', 'desc' => 'TEXT';
};

Result:

/tmp >for f in a b; do perl bs.pl file:///tmp/$f.html; done
{ test => [{ desc => " DESCRIPTION1 ", name => "TITLE1" }] }
{ test => [{ desc => " DESCRIPTION1 ", name => "TITLE1" }] }

process '//div', 'test[]' => scraper {
  process '//div//strong', 'name' => 'TEXT';
  process '//div//br/following-sibling::text()', 'desc' => 'TEXT';
};

Result:

/tmp >for f in a b; do perl bs.pl file:///tmp/$f.html; done
{ test => [{ desc => " DESCRIPTION1 ", name => "TITLE1" }] }
{ test => [{ desc => " DESCRIPTION1 ", name => "TITLE1" }] }

Even the most basic case:

  process 'div', 'test[]' => scraper {
    process 'strong', 'name' => 'TEXT';
  };

Result:

/tmp >for f in a b; do perl bs.pl file:///tmp/$f.html; done
{ test => [{ name => "TITLE1" }] }
{ test => [{ name => "TITLE1" }] }

Even if you tell it to use libxml2 via use Web::Scraper::LibXML, nothing changes.

To make sure I was not going crazy, I tried the same thing with Ruby's Nokogiri:

 /tmp >for f in a b; do ruby -rnokogiri -rpp -e'pp Nokogiri::HTML(File.read(ARGV[0])).css("div p strong").map &:text' $f.html; done
["TITLE1", "TITLE2", "TITLE3"]
["TITLE1", "TITLE2", "TITLE3"]

No idea what's missing.

Answer 1 (score: 0):

I think I have solved it. I'm not sure it's the best way, but it seems to handle both cases.

my $test = scraper {
    process '//div', 'test' => scraper {
        process '//div//strong//text()', 'name[]' => 'TEXT';
        process '//p/text()', 'desc[]' => [ 'TEXT', sub { s/^\s+|\s+$//g } ];
    };
};



my $res = $test->scrape(\$html);

# get the names and descriptions
my @keys   = @{ $res->{test}->{name} };
my @values = @{ $res->{test}->{desc} };

# merge the two arrays into a hash
my %hash;
@hash{@keys} = @values;
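
If the goal is the array-of-hashes layout shown in the question rather than a flat hash, the two parallel arrays can also be zipped by index; a small sketch building on @keys and @values above:

# pair each name with its description by position
my @test = map { +{ name => $keys[$_], desc => $values[$_] } } 0 .. $#keys;

# e.g. $test[0]{name} is "TITLE1" and $test[0]{desc} is "DESCRIPTION1"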