Scrapy question: why can't I parse the whole page, instead of just the first record on it?

Asked: 2015-08-05 04:02:16

Tags: javascript python html scrapy

I'm new to scrapy and am trying to scrape craigslist by following this example: http://mherman.org/blog/2012/11/08/recursively-scraping-web-pages-with-scrapy/#.VcFiAjBVhBc

However, every time I run my code I only get the first record on each page. The sample output from the attached code, shown below, contains only the first record of each page:

link,title
/eby/npo/5155561393.html,Residential Administrator full time
/sfc/npo/5154403251.html,Sr. Director of Family Support Services
/eby/npo/5150280793.html,Veterans Program Internship
/eby/npo/5157174843.html,PROTECT OUR LIFE SAVING MEDICINE! $10-15/H
/eby/npo/5143949422.html,Program Supervisor - Multisystemic Therapy (MST)
/sby/npo/5145782515.html,Housing Specialist -- Santa Clara and Alameda Counties
/nby/npo/5148193893.html,Shipping Assistant for Non Profit
/sby/npo/5142160649.html,Companion for People with Developmental Disabilities
/sfc/npo/5139127862.html,Director of Vocational Services

I run the code with "scrapy crawl craig2 -o items_2.csv -t csv". Thanks in advance for your help.

The code is:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field
from scrapy.http import Request

class CraigslistSampleItem(Item):
    title = Field()
    link = Field()



class MySpider(CrawlSpider):
    name = "craig2"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = ["http://sfbay.craigslist.org/search/"]

    # rules = (Rule(SgmlLinkExtractor(allow=("index\d00\.html",), restrict_xpaths=('//p[@class="button next"]',)),
    #               callback="parse_items", follow=True),
    # )


    def start_requests(self):
        # page offsets s=000, 100, ..., 800
        for i in range(9):
            yield Request("http://sfbay.craigslist.org/search/npo?s=" + str(i) + "00", self.parse_items)


    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
        items = []
        for ii in titles:
            item = CraigslistSampleItem()
            item ["title"] = ii.select("a/text()").extract()
            item ["link"] = ii.select("a/@href").extract()
            items.append(item)
            return(items)

2 Answers:

Answer 0 (score: 3)

Try the following code:

class MySpider(CrawlSpider):
    name = "craig2"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = ["http://sfbay.craigslist.org/search/npo?s=%s" % i for i in xrange(1,9)]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
        items = []
        for ii in titles:
            item = CraigslistSampleItem()
            item ["title"] = ii.select("a/text()").extract()
            item ["link"] = ii.select("a/@href").extract()
            items.append(item)
            yield item
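
A side note: HtmlXPathSelector, hxs.select(), and the scrapy.contrib modules used above are deprecated as of Scrapy 1.0. A rough modern equivalent would use a plain scrapy.Spider with response.xpath(); this is only a sketch and has not been tested against Craigslist's current markup:

    import scrapy

    class CraigslistSampleItem(scrapy.Item):
        title = scrapy.Field()
        link = scrapy.Field()

    class MySpider(scrapy.Spider):
        name = "craig2"
        allowed_domains = ["sfbay.craigslist.org"]
        # page offsets s=000, 100, ..., 800, as in the question's start_requests
        start_urls = ["http://sfbay.craigslist.org/search/npo?s=%d00" % i
                      for i in range(9)]

        def parse(self, response):
            # response.xpath() replaces HtmlXPathSelector in Scrapy 1.0+
            for row in response.xpath('//span[@class="pl"]'):
                item = CraigslistSampleItem()
                item["title"] = row.xpath("a/text()").extract()
                item["link"] = row.xpath("a/@href").extract()
                yield item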

Answer 1 (score: 3)

The problem with your code is the return(items) inside the for loop. It means you return right after the first title, so even if there are 100 titles on the page, you only ever get the first one back. Move return(items) one indentation level to the left and you are fine:

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    titles = hxs.select('//span[@class="pl"]')
    items = []
    for ii in titles:
        item = CraigslistSampleItem()
        item ["title"] = ii.select("a/text()").extract()
        item ["link"] = ii.select("a/@href").extract()
        items.append(item)
    return(items)

Note that return(items) now sits at the same indentation level as the for loop, not inside it. On my machine this returns 900 entries in the CSV output.
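
The effect of a return inside a loop is easy to reproduce in plain Python, independent of Scrapy. A minimal sketch with a made-up titles list:

    titles = ["job A", "job B", "job C"]

    def broken():
        items = []
        for t in titles:
            items.append(t)
            return items   # returns during the first iteration

    def fixed():
        items = []
        for t in titles:
            items.append(t)
        return items       # returns only after the loop completes

    print(broken())  # ['job A']
    print(fixed())   # ['job A', 'job B', 'job C']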

Ooorza's solution is also fine, but you do not need all of it. The key part is to yield each item inside the loop. This turns parse_items into a generator function that sends each parsed item on for further processing, so there is no need to append the current item to a list. The parse_items method then looks like this:

def parse_items(self, response):
    hxs = HtmlXPathSelector(response)
    titles = hxs.select('//span[@class="pl"]')
    for ii in titles:
        item = CraigslistSampleItem()
        item ["title"] = ii.select("a/text()").extract()
        item ["link"] = ii.select("a/@href").extract()
        yield item
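
As a design note, yielding items one at a time lets Scrapy pass each item to its exporters and pipelines as soon as it is scraped, rather than holding a whole page's worth of items in a list until the method returns. Both versions produce the same CSV here; the generator version is simply the more idiomatic Scrapy style.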