I am trying to scrape this website with Scrapy. The page structure looks like this:
<div class="list">
<a id="follows" name="follows"></a>
<h4 class="li_group">Follows</h4>
<div class="soda odd"><a href="...">Star Trek</a></div>
<div class="soda even"><a href="..">Star Trek: The Animated Series</a></div>
<div class="soda odd"><a href="..">Star Trek: The Motion Picture</a></div>
<div class="soda even"><a href="..">Star Trek II: The Wrath of Khan</a></div>
<div class="soda odd"><a href="..">Star Trek III: The Search for Spock</a></div>
<div class="soda even"><a href="..">Star Trek IV: The Voyage Home</a></div>
<a id="followed_by" name="followed_by"></a>
<h4 class="li_group">Followed by</h4>
<div class="soda odd"><a href="..">Star Trek V: The Final Frontier</a></div>
<div class="soda even"><a href="..">Star Trek VI: The Undiscovered Country</a></div>
<div class="soda odd"><a href="..">Star Trek: Deep Space Nine</a></div>
<div class="soda even"><a href="..">Star Trek: Generations</a></div>
<div class="soda odd"><a href="..">Star Trek: Voyager</a></div>
<div class="soda even"><a href="..">First Contact</a></div>
<a id="spin_off" name="spin_off"></a>
<h4 class="li_group">Spin-off</h4>
<div class="soda odd"><a href="..">Star Trek: The Next Generation - The Transinium Challenge</a></div>
<div class="soda even"><a href="..">A Night with Troi</a></div>
<div class="soda odd"><a href="..">Star Trek: Deep Space Nine</a></div>
</div>
I want to select and extract the text between <h4 class="li_group">Follows</h4> and <h4 class="li_group">Followed by</h4>, and then the text between <h4 class="li_group">Followed by</h4> and <h4 class="li_group">Spin-off</h4>.
I tried this code:
def parse(self, response):
    for sel in response.css("div.list"):
        item = ImdbcoItem()
        item['Follows'] = sel.css("a#follows+h4.li_group ~ div a::text").extract(),
        item['Followed_by'] = sel.css("a#vfollowed_by+h4.li_group ~ div a::text").extract(),
        item['Spin_off'] = sel.css("a#spin_off+h4.li_group ~ div a::text").extract(),
        return item
However, the first item extracts all the divs, not only the divs between <h4 class="li_group">Follows</h4> and <h4 class="li_group">Followed by</h4>.
Any help would be greatly appreciated!
Answer 0 (score: 3)
You can try the following XPath expressions to get:
all text nodes of the "Follows" block:
//div[./preceding-sibling::h4[1]="Follows"]//text()
all text nodes of the "Followed by" block:
//div[./preceding-sibling::h4[1]="Followed by"]//text()
all text nodes of the "Spin-off" block:
//div[./preceding-sibling::h4[1]="Spin-off"]//text()
Answer 1 (score: 2)
The extraction pattern I like to use for these cases is: loop over the "boundary" elements (here, the h4 elements) while enumerating them, then use the following-sibling axis, as in @Andersson's answer, to get the elements before the next boundary. This would be the loop:
$ scrapy shell 'http://www.imdb.com/title/tt0092455/trivia?tab=mc&ref_=tt_trv_cnn'
(...)
>>> for cnt, h4 in enumerate(response.css('div.list > h4.li_group'), start=1):
... print(cnt, h4.xpath('normalize-space()').get())
...
1 Follows
2 Followed by
3 Edited into
4 Spun-off from
5 Spin-off
6 Referenced in
7 Featured in
8 Spoofed in
Here is an example of using that enumeration to get the elements between two boundaries (note that this uses an XPath variable $cnt in the expression, passed as cnt=cnt to .xpath()):
>>> for cnt, h4 in enumerate(response.css('div.list > h4.li_group'), start=1):
... print(cnt, h4.xpath('normalize-space()').get())
...     print(h4.xpath('following-sibling::div[count(preceding-sibling::h4)=$cnt]',
...                    cnt=cnt).xpath('string(.//a)').getall())
...
1 Follows
['Star Trek', 'Star Trek: The Animated Series', 'Star Trek: The Motion Picture', 'Star Trek II: The Wrath of Khan', 'Star Trek III: The Search for Spock', 'Star Trek IV: The Voyage Home']
2 Followed by
['Star Trek V: The Final Frontier', 'Star Trek VI: The Undiscovered Country', 'Star Trek: Deep Space Nine', 'Star Trek: Generations', 'Star Trek: Voyager', 'First Contact', 'Star Trek: Insurrection', 'Star Trek: Enterprise', 'Star Trek: Nemesis', 'Star Trek', 'Star Trek Into Darkness', 'Star Trek Beyond', 'Star Trek: Discovery', 'Untitled Star Trek Sequel']
3 Edited into
['Reading Rainbow: The Bionic Bunny Show', 'The Unauthorized Hagiography of Vincent Price']
4 Spun-off from
['Star Trek']
5 Spin-off
['Star Trek: The Next Generation - The Transinium Challenge', 'A Night with Troi', 'Star Trek: Deep Space Nine', "Star Trek: The Next Generation - Future's Past", 'Star Trek: The Next Generation - A Final Unity', 'Star Trek: The Next Generation: Interactive VCR Board Game - A Klingon Challenge', 'Star Trek: Borg', 'Star Trek: Klingon', 'Star Trek: The Experience - The Klingon Encounter']
6 Referenced in
(...)
Here is how you can use this to populate an item (here I'm using a plain dict just for illustration):
>>> item = {}
>>> for cnt, h4 in enumerate(response.css('div.list > h4.li_group'), start=1):
... key = h4.xpath('normalize-space()').get().strip() # there are some non-breaking spaces
... if key in ['Follows', 'Followed by', 'Spin-off']:
... values = h4.xpath('following-sibling::div[count(preceding-sibling::h4)=$cnt]',
... cnt=cnt).xpath(
... 'string(.//a)').getall()
... item[key] = values
...
>>> from pprint import pprint
>>> pprint(item)
{'Followed by': ['Star Trek V: The Final Frontier',
'Star Trek VI: The Undiscovered Country',
'Star Trek: Deep Space Nine',
'Star Trek: Generations',
'Star Trek: Voyager',
'First Contact',
'Star Trek: Insurrection',
'Star Trek: Enterprise',
'Star Trek: Nemesis',
'Star Trek',
'Star Trek Into Darkness',
'Star Trek Beyond',
'Star Trek: Discovery',
'Untitled Star Trek Sequel'],
'Follows': ['Star Trek',
'Star Trek: The Animated Series',
'Star Trek: The Motion Picture',
'Star Trek II: The Wrath of Khan',
'Star Trek III: The Search for Spock',
'Star Trek IV: The Voyage Home'],
'Spin-off': ['Star Trek: The Next Generation - The Transinium Challenge',
'A Night with Troi',
'Star Trek: Deep Space Nine',
"Star Trek: The Next Generation - Future's Past",
'Star Trek: The Next Generation - A Final Unity',
'Star Trek: The Next Generation: Interactive VCR Board Game - A '
'Klingon Challenge',
'Star Trek: Borg',
'Star Trek: Klingon',
'Star Trek: The Experience - The Klingon Encounter']}
>>>