I'm using Scrapy to write a web crawler that downloads the talk-back (comment) texts from a web page.
Here is the relevant part of the page's underlying code, for one specific talk-back:
<div id="site_comment_71339" class="site_comment site_comment-even large high-rank">
<div class="talkback-topic">
<a class="show-comment" data-ajax-url="/comments/71339.js?counter=97&num=57" href="/comments/71339?counter=97&num=57">57. talk back title here </a>
</div>
<div class="talkback-message"> blah blah blah talk-back message here </div>
....etc etc etc ......
The XPath I wrote to get the message:
titles = hxs.xpath("//div[@class='site_comment site_comment-even large high-rank']")
and later:
item["title"] = titles.xpath("div[@class='talkback-message']text()").extract()
There is no error, but it doesn't work. Any ideas? I think I'm not writing the path correctly, but I can't find the mistake.
Thanks :)
The whole code:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from craigslist_sample.items import CraigslistSampleItem

class MySpider(BaseSpider):
    name = "craig"
    allowed_domains = ["tbk.co.il"]
    start_urls = ["http://www.tbk.co.il/tag/%D7%91%D7%A0%D7%99%D7%9E%D7%99%D7%9F_%D7%A0%D7%AA%D7%A0%D7%99%D7%94%D7%95/talkbacks"]

    def parse(self, response):
        hxs = Selector(response)
        titles = hxs.xpath("//div[@class='site_comment site_comment-even large high-rank']")
        items = []
        for titles in titles:
            item = CraigslistSampleItem()
            item["title"] = titles.xpath("div[@class='talkback-message']text()").extract()
            items.append(item)
        return items
Answer 0 (score: 6):
Here is #site_comment_74240:
<div class="site_comment site_comment-even small normal-rank" id="site_comment_74240">
<div class="talkback-topic">
<a href="/comments/74240?counter=1&num=144" class="show-comment" data-ajax-url="/comments/74240.js?counter=1&num=144">144. מדיניות</a>
</div>
<div class="talkback-username">
<table><tr>
<td>קייזרמן פרדי </td>
<td>(01.11.2013)</td>
</tr></table>
</div>
The "talkback-message" div is not in the HTML page when you first fetch it; it is fetched asynchronously via an AJAX query when you click on the comment title, so you have to fetch it for each comment.
The comment blocks (titles in your code snippet) can be grabbed with an XPath like this: //div[starts-with(@id, "site_comment_")], i.e. all divs whose "id" attribute starts with the string "site_comment_".
You can also use CSS selectors via Selector.css(). In your case, you can grab the comment blocks either by the "id" approach (as I did above with XPath):
titles = sel.css("div[id^=site_comment_]")
or by the "site_comment" class, since the other classes ("site_comment-even", "site_comment-odd", "small", "normal-rank", "high-rank") differ from comment to comment:
titles = sel.css("div.site_comment")
Then, for each comment, you would issue a new Request using the URL found in ./div[@class="talkback-topic"]/a[@class="show-comment"]/@data-ajax-url inside that comment div. Or, with a CSS selector: div.talkback-topic > a.show-comment::attr(data-ajax-url)
(by the way, ::attr(...) is not standard CSS, but a Scrapy extension to CSS selectors using pseudo-element functions)
What you get back from the AJAX call is some Javascript code, and you want to grab the content inside old.after(...):
var old = $("#site_comment_72765");
old.attr('id', old.attr('id') + '_small');
old.hide();
old.after("\n<div class=\"site_comment site_comment-odd large high-rank\" id=\"site_comment_72765\">\n <div class=\"talkback-topic\">\n <a href=\"/comments/72765?counter=42&num=109\" class=\"show-comment\" data-ajax-url=\"/comments/72765.js?counter=42&num=109\">109. ביבי - האדם הנכון בראש ממשלת ישראל(לת)<\/a>\n <\/div>\n \n <div class=\"talkback-message\">\n \n <\/div>\n \n <div class=\"talkback-username\">\n <table><tr>\n <td>ישראל <\/td>\n <td>(11.03.2012)<\/td>\n <\/tr><\/table>\n <\/div>\n <div class=\"rank-controllers\">\n <table><tr>\n \n <td class=\"rabk-link\"><a href=\"#\" data-thumb=\"/comments/72765/thumb?type=up\"><img alt=\"\" src=\"/images/elements/thumbU.png?1376839523\" /><\/a><\/td>\n <td> | <\/td>\n <td class=\"rabk-link\"><a href=\"#\" data-thumb=\"/comments/72765/thumb?type=down\"><img alt=\"\" src=\"/images/elements/thumbD.png?1376839523\" /><\/a><\/td>\n \n <td> | <\/td>\n <td>11<\/td>\n \n <\/tr><\/table>\n <\/div>\n \n <div class=\"talkback-links\">\n <a href=\"/comments/new?add_to_root=true&html_id=site_comment_72765&sibling_id=72765\">תגובה חדשה<\/a>\n \n <a href=\"/comments/72765/comments/new?html_id=site_comment_72765\">הגיבו לתגובה<\/a>\n \n <a href=\"/i/offensive?comment_id=72765\" data-noajax=\"true\">דיווח תוכן פוגעני<\/a>\n <\/div>\n \n<\/div>");
var new_comment = $("#site_comment_72765");
This is HTML data that you need to parse again, using Selector(text=this_ajax_html_data) and either the .//div[@class="talkback-message"]//text() XPath or the div.talkback-message ::text CSS selector.
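For instance, a minimal sketch of that second parsing step, assuming the decoded HTML fragment is in a variable named this_ajax_html_data as mentioned above:

from scrapy.selector import Selector

# build a new Selector from the HTML fragment returned by the AJAX call
decoded = Selector(text=this_ajax_html_data, type="html")
# either expression should yield the text nodes of the message
message_via_xpath = decoded.xpath('.//div[@class="talkback-message"]//text()').extract()
message_via_css = decoded.css('div.talkback-message ::text').extract()
message = u''.join(message_via_css).strip()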
Here is a skeleton spider to get you going with these ideas:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.http import Request
from craigslist_sample.items import CraigslistSampleItem
import urlparse
import re

class MySpider(BaseSpider):
    name = "craig"
    allowed_domains = ["tbk.co.il"]
    start_urls = ["http://www.tbk.co.il/tag/%D7%91%D7%A0%D7%99%D7%9E%D7%99%D7%9F_%D7%A0%D7%AA%D7%A0%D7%99%D7%94%D7%95/talkbacks"]

    def parse(self, response):
        sel = Selector(response)
        comments = sel.css("div.site_comment")
        for comment in comments:
            item = CraigslistSampleItem()
            # this probably has to be fixed
            #item["title"] = comment.xpath("div[@class='talkback-message']text()").extract()

            # issue an additional request to fetch the Javascript
            # data containing the comment text
            # and pass the incomplete item via meta dict
            for url in comment.css('div.talkback-topic > a.show-comment::attr(data-ajax-url)').extract():
                yield Request(url=urlparse.urljoin(response.url, url),
                              callback=self.parse_javascript_comment,
                              meta={"item": item})
                break

    # the line we are looking for begins with "old.after"
    # and we want everything inside the parentheses
    _re_comment_html = re.compile(r'^old\.after\((?P<html>.+)\);$')

    def parse_javascript_comment(self, response):
        item = response.meta["item"]
        # loop on Javascript content lines
        for line in response.body.split("\n"):
            matching = self._re_comment_html.search(line.strip())
            if matching:
                # what's inside the parentheses is a Javascript string
                # with escaped double-quotes
                # a simple way to decode that into a Python string
                # is to use eval()
                # then there are these "<\/tag>" we want to remove
                html = eval(matching.group("html")).replace(r"<\/", "</")
                # once we have the HTML snippet, decode it using Selector()
                decoded = Selector(text=html, type="html")
                # and save the message text in the item
                item["message"] = u''.join(decoded.css('div.talkback-message ::text').extract()).strip()
                # and return it
                return item
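A side note on the eval() step: the Javascript argument here looks like a plain double-quoted string literal using only \", \n and \/ escapes, so a less risky way to decode it is to treat it as JSON. That is only an assumption about the page's output format; if the site changes the escaping, you may need to fall back to the eval() approach above.

import json

# hedged alternative to eval(): decode the Javascript string literal as JSON.
# json.loads() already turns "<\/tag>" into "</tag>", so the extra .replace()
# from the spider above is not needed.
html = json.loads(matching.group("html"))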
You can try it out with scrapy runspider tbkspider.py.
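If you also want to dump the scraped items to a file for inspection, runspider takes the usual feed-export option (the file name is just an example; older Scrapy versions may also need -t json to pick the exporter):

scrapy runspider tbkspider.py -o comments.json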