How to scrape web pages using the Scrapy framework?

Time: 2017-12-18 14:42:23

Tags: python web-scraping scrapy

I am new to web scraping and have started learning the Scrapy framework.

I have gone through the basic Scrapy tutorial. Now I am trying to scrape this page.

According to this tutorial, to get the whole HTML page one should write the following code:

import scrapy


class ClothesSpider(scrapy.Spider):
    name = "clothes"

    start_urls = [
        'https://www.chumbak.com/women-apparel/GY1/c/',
    ]

    def parse(self, response):
        filename = 'clothes.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
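
For reference, a standalone spider like this can be run with Scrapy's runspider command, e.g. scrapy runspider clothes_spider.py (the filename clothes_spider.py is just an assumption for where the code above is saved).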

This code runs fine, but I am not getting the expected result.

When I open clothes.html, the HTML is different from what I see when inspecting the page in the browser; a lot of content is missing from clothes.html.

I don't understand what is going wrong here. Please help me move forward. Any help would be appreciated.

Thanks.

1 Answer:

Answer 0: (score 1)

This page uses JavaScript to put the data on the page.

Using DevTools in Chrome/Firefox (Network tab, filtered by XHR) you can see which URLs JavaScript uses to get this data from the server.

Then you can try to get that data yourself.
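
For example, a quick sanity check of one of those XHR URLs outside of Scrapy (a minimal sketch, assuming the requests library is installed and the endpoint still responds; the URL is the same API endpoint the spider below requests):

import requests  # assumption: requests is installed, used only for a quick check

# one of the JSON API urls visible in the Network/XHR tab
url = 'https://api-cdn.chumbak.com/v1/category/474/products/?count_per_page=24&page=1'

response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
data = response.json()

# look at the structure before writing the spider
print(data.keys())
print(len(data['products']))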

The code below generates the URLs for 10 pages of JSON data, downloads them and saves each page in a separate file, builds the full URLs to the images, and yields them so that Scrapy downloads the images into the subfolder full. Scrapy also saves all the yielded data about the downloaded images in output.json.

#!/usr/bin/env python3

import scrapy
#from scrapy.commands.view import open_in_browser
import json

class MySpider(scrapy.Spider):

    name = 'myspider'

    #allowed_domains = []

    #start_urls = ['https://www.chumbak.com/women-apparel/GY1/c/']

    #start_urls = [
    #    'https://api-cdn.chumbak.com/v1/category/474/products/?count_per_page=24&page=1',
    #    'https://api-cdn.chumbak.com/v1/category/474/products/?count_per_page=24&page=2',
    #    'https://api-cdn.chumbak.com/v1/category/474/products/?count_per_page=24&page=3',
    #]

    def start_requests(self):
        pages = 10
        url_template = 'https://api-cdn.chumbak.com/v1/category/474/products/?count_per_page=24&page={}'

        for page in range(1, pages+1):
            url = url_template.format(page)
            yield scrapy.Request(url)

    def parse(self, response):
        print('url:', response.url)

        #open_in_browser(response)

        # get page number from the URL's query string
        page_number = response.url.split('=')[-1]

        # save JSON in a separate file
        filename = 'page-{}.json'.format(page_number)
        with open(filename, 'wb') as f:
            f.write(response.body)

        # convert JSON into a Python dictionary
        data = json.loads(response.text)

        # get urls for images
        for product in data['products']:
            #print('title:', product['title'])
            #print('url:', product['url'])
            #print('image_url:', product['image_url'])

            # create full url to image
            image_url = 'https://media.chumbak.com/media/catalog/product/small_image/260x455' + product['image_url']
            # send it to scrapy and it will download it
            yield {'image_urls': [image_url]}


        # download files
        #for href in response.css('a::attr(href)').extract():
        #   url = response.urljoin(href)
        #   yield {'file_urls': [url]}

        # download images and convert to JPG
        #for src in response.css('img::attr(src)').extract():
        #   url = response.urljoin(src)
        #   yield {'image_urls': [url]}

# --- it runs without a project and saves results in `output.json` ---

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',

    # save in CSV or JSON
    'FEED_FORMAT': 'json',     # 'csv', 'json', 'xml'
    'FEED_URI': 'output.json', # 'output.csv', 'output.json', 'output.xml'

    # download files to `FILES_STORE/full`
    # it needs `yield {'file_urls': [url]}` in `parse()`
    #'ITEM_PIPELINES': {'scrapy.pipelines.files.FilesPipeline': 1},
    #'FILES_STORE': '/path/to/valid/dir',

    # download images and convert to JPG
    # it needs `yield {'image_urls': [url]}` in `parse()`
    #'ITEM_PIPELINES': {'scrapy.pipelines.images.ImagesPipeline': 1},
    #'IMAGES_STORE': '/path/to/valid/dir',
    'ITEM_PIPELINES': {'scrapy.pipelines.images.ImagesPipeline': 1},
    'IMAGES_STORE': '.',
})
c.crawl(MySpider)
c.start()
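
Since the script creates its own CrawlerProcess, it can be run directly with python (no Scrapy project is needed). Note that Scrapy's ImagesPipeline requires the Pillow library to be installed, otherwise the images will not be downloaded.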