I am trying to scrape some classified ads from http://www.head-fi.org/f/6550/headphones-for-sale-trade. I created a spider that scrapes the title, price, description, and so on. It works well, but I cannot figure out how pagination works on that particular site. I believe it is generated with JavaScript, since the URL never changes.
Here is the code I use to scrape the first page:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from headfi_headphones.items import HeadfiHeadphonesItem

class MySpider(CrawlSpider):
    name = "headfiheadphones"
    allowed_domains = ["head-fi.org"]
    start_urls = ["http://www.head-fi.org/f/6550/headphones-for-sale-trade"]

    #rules = (
    #    Rule(SgmlLinkExtractor(allow=(), restrict_xpaths=("//a[@class='tooltip']",)), callback="parse_items", follow=True),
    #)

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.xpath("//tr[@class='thread']")
        items = []
        for title in titles:
            item = HeadfiHeadphonesItem()
            item["title"] = title.select("td[@class='thread-col']/div[@class='shazam']/div[@class='thumbnail_body']/a[@class='classified-title']/text()").extract()
            item["link"] = title.select("td[@class='thread-col']/div[@class='shazam']/div[@class='thumbnail_body']/a[@class='classified-title']/@href").extract()
            item["img"] = title.select("td[@class='thread-col']/div[@class='shazam']/div[@class='thumbnail']/a[@class='thumb']/img/@src").extract()
            item["saletype"] = title.select("td/strong/text()").extract()
            item["price"] = title.select("td/div[@class='price']/span[@class='ctx-price']/text()").extract()
            item["currency"] = title.select("td/div[@class='price']/span[@class='currency']/text()").extract()
            items.append(item)
        return items
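If the pagination turns out to be ordinary server-side pages rather than JavaScript, the parse method above could be reused for every page by generating the page URLs up front. This is only a sketch: the `/p/<n>` URL scheme is a guess, not something confirmed for head-fi.org (check the browser's network tab for the real request pattern).

```python
def page_urls(base_url, num_pages):
    # Build candidate URLs for pages 1..num_pages. The ".../p/<n>"
    # suffix is a guessed scheme, not confirmed for head-fi.org --
    # verify the real pattern in your browser's developer tools.
    urls = [base_url]
    for n in range(2, num_pages + 1):
        urls.append("%s/p/%d" % (base_url.rstrip("/"), n))
    return urls

# If the scheme holds, these could be fed straight into start_urls:
#   start_urls = page_urls(
#       "http://www.head-fi.org/f/6550/headphones-for-sale-trade", 80)
```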
It returns something like this (I have included a single entry):
{"img": ["http://cdn.head-fi.org/9/92/80x80px-ZC-9228072e_image.jpeg"], "title": ["Hifiman HE1000 Mint"], "saletype": ["For Sale"], "price": ["$2,000"], "currency": ["(USD)"], "link": ["/t/819200/hifiman-he1000-mint"]},
Is there a way to crawl through each page (1 to roughly 80), which I assume are being populated into the table by JavaScript?
Answer 0 (score: 0)
To render JavaScript properly, you should consider using Selenium. The package is available here: https://pypi.python.org/pypi/selenium.
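A minimal sketch of what that could look like: drive the page with Selenium so the JavaScript pagination actually executes, grab the rendered HTML each time, then advance to the next page. The `a.next` selector and the `extract_titles()` helper are illustrative assumptions, not the site's confirmed markup; inspect the live page for the real selectors.

```python
import re

try:
    from selenium import webdriver
except ImportError:  # selenium not installed; sketch only
    webdriver = None

def extract_titles(page_source):
    # Pull classified titles out of the rendered HTML with a simple
    # regex, matching the same class="classified-title" anchors that
    # the spider's XPath targets.
    return re.findall(r'class="classified-title"[^>]*>([^<]+)</a>',
                      page_source)

def scrape_all_pages(start_url, max_pages=80):
    driver = webdriver.Firefox()  # or webdriver.Chrome()
    driver.get(start_url)
    titles = []
    for _ in range(max_pages):
        titles.extend(extract_titles(driver.page_source))
        # Advance via the "next page" control; stop when it is gone.
        # "a.next" is an assumed selector -- check the real page.
        nxt = driver.find_elements_by_css_selector("a.next")
        if not nxt:
            break
        nxt[0].click()
    driver.quit()
    return titles
```

Once the pages are rendered this way, the extracted HTML could also be handed back to the Scrapy selectors from the question instead of a regex.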