Scraping pages inside URLs with scrapy

Time: 2013-05-26 23:36:37

Tags: python web-scraping scrapy

I am trying to scrape craigslist with scrapy and have successfully fetched the URLs, but now I want to extract data from the pages behind those URLs. Here is the code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from craigslist.items import CraigslistItem

class craigslist_spider(BaseSpider):
    name = "craigslist_unique"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://sfbay.craigslist.org/search/sof?zoomToPosting=&query=&srchType=A&addFour=part-time",
        "http://newyork.craigslist.org/search/sof?zoomToPosting=&query=&srchType=A&addThree=internship",
        "http://seattle.craigslist.org/search/sof?zoomToPosting=&query=&srchType=A&addFour=part-time"
    ]


def parse(self, response):
   hxs = HtmlXPathSelector(response)
   sites = hxs.select("//span[@class='pl']")
   items = []
   for site in sites:
       item = CraigslistItem()
       item['title'] = site.select('a/text()').extract()
       item['link'] = site.select('a/@href').extract()
   #item['desc'] = site.select('text()').extract()
       items.append(item)
   hxs = HtmlXPathSelector(response)
   #print title, link        
   return items

I am new to scrapy and can't figure out how to actually follow each url (href) and fetch the data inside that url's page, and to do this for all the urls.

2 Answers:

Answer 0 (score: 1)

The responses for the start_urls are received one by one in the parse method.

If you only want to fetch information from the start_urls responses, then your code is almost fine. But your parse method should be inside your craigslist_spider class, not outside it.

def parse(self, response):
   hxs = HtmlXPathSelector(response)
   sites = hxs.select("//span[@class='pl']")
   items = []
   for site in sites:
       item = CraigslistItem()
       item['title'] = site.select('a/text()').extract()
       item['link'] = site.select('a/@href').extract()
       items.append(item)
   #print title, link
   return items

What if you want to fetch half of the information from the start_urls responses and the other half from the pages linked by the anchors found in those responses?

from urlparse import urljoin   # Python 2 stdlib; on Python 3 use urllib.parse.urljoin
from scrapy.http import Request

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    sites = hxs.select("//span[@class='pl']")
    for site in sites:
        item = CraigslistItem()
        item['title'] = site.select('a/text()').extract()
        link = site.select('a/@href').extract()
        if link:
            link = link[0]  # extract() returns a list of strings
            if 'http://' not in link:
                link = urljoin(response.url, link)
            item['link'] = link
            yield Request(item['link'],
                          meta={'item': item},
                          callback=self.anchor_page)


def anchor_page(self, response):
    hxs = HtmlXPathSelector(response)
    old_item = response.request.meta['item']  # the item built in parse(), passed along in the Request meta
    # parse some more values and place them in old_item, e.g.:
    old_item['bla_bla'] = hxs.select("bla bla").extract()
    yield old_item

You just need to yield a Request from the parse method and send the old item along in the Request's meta.

Then in anchor_page, take old_item out of the meta, add the new values to it, and simply yield it.
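The urljoin step in the snippet above can be checked on its own. This is a standalone sketch using Python 3's urllib.parse (the 2013-era answer would use Python 2's urlparse module); the example URL reuses a relative href from the output shown further down:

```python
from urllib.parse import urljoin

# Craigslist search pages return relative hrefs like '/sby/sof/3824966457.html';
# urljoin resolves them against the URL the response came from.
link = "/sby/sof/3824966457.html"
absolute = urljoin("http://sfbay.craigslist.org/search/sof", link)
print(absolute)  # http://sfbay.craigslist.org/sby/sof/3824966457.html
```

Because the href starts with a slash, urljoin keeps only the scheme and host of the base URL and replaces its path.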

Answer 1 (score: 0)

There is one problem with your xpaths - they should be relative. Here's the code:

from scrapy.item import Item, Field
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector


class CraigslistItem(Item):
    title = Field()
    link = Field()


class CraigslistSpider(BaseSpider):
    name = "craigslist_unique"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://sfbay.craigslist.org/search/sof?zoomToPosting=&query=&srchType=A&addFour=part-time",
        "http://newyork.craigslist.org/search/sof?zoomToPosting=&query=&srchType=A&addThree=internship",
        "http://seattle.craigslist.org/search/sof?zoomToPosting=&query=&srchType=A&addFour=part-time"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select("//span[@class='pl']")
        items = []
        for site in sites:
            item = CraigslistItem()
            item['title'] = site.select('.//a/text()').extract()[0]
            item['link'] = site.select('.//a/@href').extract()[0]
            items.append(item)
        return items
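The relative-XPath point can be demonstrated outside scrapy with a small lxml snippet (lxml assumed installed; the HTML fragment is made up for illustration). A path like `.//a` matches an anchor at any depth below the selected node, whereas `a` only matches direct children:

```python
from lxml import etree

# A made-up fragment where the <a> is nested one level deeper than the span.
html = etree.HTML("<span class='pl'><b><a href='/x.html'>Job</a></b></span>")
span = html.xpath("//span[@class='pl']")[0]

print(span.xpath("a/text()"))     # [] - 'a' only matches direct children
print(span.xpath(".//a/text()"))  # ['Job'] - './/' descends to any depth
```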

If you run it via:

scrapy runspider spider.py -o output.json

you will see the following in output.json:

{"link": "/sby/sof/3824966457.html", "title": "HR Admin/Tech Recruiter"}
{"link": "/eby/sof/3824932209.html", "title": "Entry Level Web Developer"}
{"link": "/sfc/sof/3824500262.html", "title": "Sr. Ruby on Rails Contractor @ Funded Startup"}
...

Hope that helps.
