Scrapy POST request for a "Load More" button

Asked: 2018-08-14 09:41:05

Tags: ajax post pagination scrapy

I am trying to scrape the product names and prices from this page.

There is a "Load More" button at the bottom of the page. Experimenting with the form data in Postman, the fields 'productBeginIndex' and 'resultsPerPage' seemed to control how many products are displayed.

However, I am not sure what is wrong with my code: no matter how I adjust the values, it still returns 24 products. I also tried FormRequest.from_response(), but it still returned only 24 products.

import scrapy


class PriceSpider(scrapy.Spider):
    name = "products"
    def parse(self, response):
        return [scrapy.FormRequest(url="https://www.fairprice.com.sg/baby-child",
                                   method='POST',
                                   formdata= {'productBeginIndex': '1', 'resultsPerPage': '1', },
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # here you would extract links to follow and return Requests for
        # each of them, with another callback
        name = response.css("img::attr(title)").extract()
        price = response.css(".pdt_C_price::text").extract()

        for item in zip(name, price):
            scraped_info = {
                "title": item[0],
                "value": item[1],
            }
            yield scraped_info

Can someone tell me what I am missing? And how can I implement a loop to extract all the items in the category?

Thank you very much!

1 Answer:

Answer 0 (score: 0)

You should POST to /ProductListingView rather than /baby-child (a GET request also works).

To scrape all the items, modify the beginIndex parameter in a loop and yield a new request for each value. (By the way, modifying productBeginIndex has no effect.)

We do not know the total number of products, so a safe approach is to crawl a fixed batch of products on each run. By modifying custom_settings you can easily control where to start and how many products to scrape.
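The page arithmetic behind that loop can be sketched on its own, without Scrapy; `begin_indexes` is a hypothetical helper name, not part of the spider below:

```python
# Sketch: how BEGIN_PAGE, END_PAGE and RESULTS_PER_PAGE map to the
# beginIndex value sent with each POST request.
def begin_indexes(begin_page, end_page, results_per_page):
    """Return the beginIndex for every page in [begin_page, end_page)."""
    return [results_per_page * page for page in range(begin_page, end_page)]

# With the settings used in the spider below (pages 0-2, 2 results per page):
print(begin_indexes(0, 2, 2))   # -> [0, 2]
```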

For how to output to a CSV file, refer to Scrapy pipeline to export csv file in the right format.

For convenience, I have added a PriceItem class below, which you can put in items.py. With the command scrapy crawl PriceSpider -t csv -o test.csv, you will get a test.csv file. Alternatively, you can try CSVItemExporter.
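If you just want to see what the exported rows look like, the same title/value shape can be sketched with the stdlib csv module alone (no Scrapy required); the sample row here is taken from the log output below:

```python
import csv
import io

# Sketch: writing scraped title/value pairs as CSV, mirroring the
# fields of the PriceItem class defined below.
rows = [
    {"title": "Friso Gold Growing Up Milk Formula - Stage 3", "value": "$79.00"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "value"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```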

# OUTPUTS
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['Nestle Nan Optipro Gro Growing Up Milk Formula -Stage 3', 'Friso Gold Growing Up Milk Formula - Stage 3']
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['\n\t\t\t\t\t$199.50\n\t\t\t\t', '\n\t\t\t\t\t$79.00\n\t\t\t\t']
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['Aptamil Gold+ Toddler Growing Up Milk Formula - Stage 3', 'Aptamil Gold+ Junior Growing Up Milk Formula - Stage 4']
# 2018-08-15 16:00:08 [PriceSpider] INFO: ['\n\t\t\t\t\t$207.00\n\t\t\t\t', '\n\t\t\t\t\t$180.00\n\t\t\t\t']
#
# \n and \t are not a big deal, just strip() them

import scrapy

class PriceItem(scrapy.Item):
  title = scrapy.Field()
  value = scrapy.Field()

class PriceSpider(scrapy.Spider):
  name = "PriceSpider"

  custom_settings = {
    "BEGIN_PAGE" : 0,
    "END_PAGE" : 2,
    "RESULTS_PER_PAGE" : 2,
  }

  def start_requests(self): 

    formdata = {
      "sType" : "SimpleSearch",
      "ddkey" : "ProductListingView_6_-2011_3074457345618269512",
      "ajaxStoreImageDir" : "%2Fwcsstore%2FFairpriceStorefrontAssetStore%2F",
      "categoryId" : "3074457345616686371",
      "emsName" : "Widget_CatalogEntryList_701_3074457345618269512",
      "beginIndex" : "0",
      "resultsPerPage" : str(self.custom_settings["RESULTS_PER_PAGE"]),
      "disableProductCompare" : "false",
      "catalogId" : "10201",
      "langId" : "-1",
      "enableSKUListView" : "false",
      "storeId" : "10151",
    }

    # loop to scrape different pages
    for i in range(self.custom_settings["BEGIN_PAGE"], self.custom_settings["END_PAGE"]):
      formdata["beginIndex"] = str(self.custom_settings["RESULTS_PER_PAGE"] * i)

      yield scrapy.FormRequest(
        url="https://www.fairprice.com.sg/ProductListingView",
        formdata = formdata,
        callback=self.logged_in
      )

  def logged_in(self, response):
      name = response.css("img::attr(title)").extract()
      price = response.css(".pdt_C_price::text").extract()

      self.logger.info(name)
      self.logger.info(price)

      # Output to CSV: refer to https://stackoverflow.com/questions/29943075/scrapy-pipeline-to-export-csv-file-in-the-right-format
      # 
      for item in zip(name, price):
        yield PriceItem(
          title = item[0].strip(),
          value = item[1].strip()
        )