Using Scrapy, how can I make my XPath more selective?

Time: 2013-06-11 02:40:35

Tags: python html xpath scrapy

I am using Scrapy to get the article:

>>> articletext = hxs.select("//span[@id='articleText']")
>>> for p in articletext.select('.//p'):
...     print p.extract()
...
<p class="byline">By Patricia Reaney</p>
<p>
        <span class="location">NEW YORK</span> |
        <span class="timestamp">Tue Apr 3, 2012 6:19am EDT</span>
</p>
<p><span class="articleLocation">NEW YORK</span> (Reuters) - Ba
track of finances, shopping and searching for jobs are the mai
et users around the globe, according to a new international 
survey.</p>
<p>Nearly 60 percent of people in 24 countries used the web to
account and other financial assets in the past 90 days, making
ar use of the Internet.</p>
<p>Shopping was not far behind at 48 percent, the Ipsos poll fo
 and 41 percent went online in search of a job.</p>
<p>"It is easy. You can do it any time of the day and most of t
on't have fees," said Keren Gottfried, research manager for Ips
Affairs, about banking online.</p>

I would like to drop the byline, timestamp, and article location and keep only the article body. Or, even better, extract just the article text. How can I do that?

2 Answers:

Answer 0 (score: 0)

You could try this:

articletext = hxs.select("//span[@id='articleText']/p[position()>2]")

which should return the following <p> tags:

<p><span class="articleLocation">NEW YORK</span> (Reuters) - Ba
track of finances, shopping and searching for jobs are the mai
et users around the globe, according to a new international 
survey.</p>

<p>Nearly 60 percent of people in 24 countries used the web to
account and other financial assets in the past 90 days, making
ar use of the Internet.</p>

<p>Shopping was not far behind at 48 percent, the Ipsos poll fo
 and 41 percent went online in search of a job.</p>
<p>"It is easy. You can do it any time of the day and most of t
on't have fees," said Keren Gottfried, research manager for Ips
Affairs, about banking online.</p>

But you will probably have to strip out the articleLocation span manually afterwards.
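
As a rough sketch of how that selector and the manual cleanup could fit into a spider's parse callback, assuming the old HtmlXPathSelector API shown in the question (the callback and variable names are only illustrative):

from scrapy.selector import HtmlXPathSelector

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    # Skip the first two <p> tags (byline and location/timestamp).
    paragraphs = hxs.select("//span[@id='articleText']/p[position()>2]")
    body = []
    for p in paragraphs:
        # Keep only text nodes that are not inside the articleLocation span,
        # which drops the leading "NEW YORK" from the first paragraph.
        text = p.select(".//text()[not(ancestor::span[@class='articleLocation'])]").extract()
        body.append("".join(text).strip())
    article = "\n\n".join(body)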

Answer 1 (score: 0)

Well, you can add predicates to skip those <p> elements. Try something like this:

//span[@id="articleText"]//p[not(@class)][not(span[@class])]

That means "all P elements that have no class attribute and no child SPAN element with a class attribute". You can filter with as many predicates as you need :)
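
A minimal sketch of using that expression from the question's hxs selector (note that the id in the page is "articleText" and XPath is case-sensitive); joining the text nodes of each matching paragraph is just one way to assemble the article:

paragraphs = hxs.select("//span[@id='articleText']//p[not(@class)][not(span[@class])]")
# Concatenate the text nodes of each matching paragraph.
texts = ["".join(p.select(".//text()").extract()).strip() for p in paragraphs]
article = "\n\n".join(texts)

Be aware that the second predicate also excludes the paragraph beginning with the articleLocation span, since that paragraph does contain a span with a class; relax that predicate if you want to keep its text.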