How to crawl URLs with Scrapy

Asked: 2018-05-30 10:56:14

Tags: python scrapy web-crawler

I want to crawl the site https://www.aparat.com/

I can crawl it and get all the video links under the header section, like this:

import scrapy


class BlogSpider(scrapy.Spider):
    name = 'aparatspider'
    start_urls = ['https://www.aparat.com/']

    def parse(self, response):
        print('=' * 80, 'latest-trend :')
        # Select the video grid block, then each list item inside it.
        ul5 = response.css('.block-grid.xsmall-block-grid-2.small-block-grid-3.medium-block-grid-4.large-block-grid-5.is-not-center')
        ul5 = ul5.css('ul').css('li')
        latesttrend = []
        for li5 in ul5:
            # Grab the onmousedown attribute of each video's anchor.
            latesttrend.append(li5.xpath('div/div[1]/a/@onmousedown').extract_first())
            print(latesttrend)
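Note that an onmousedown attribute usually holds tracking JavaScript rather than a bare URL, so the extracted strings may need post-processing. A minimal sketch of pulling a quoted path out of such a handler string with a regex (the attribute value below is made up for illustration; selecting `@href` directly is usually simpler):

```python
import re

# Hypothetical onmousedown value; the real attribute text will differ.
onmousedown = "this.href='/v/abc123'; return trackClick(this);"

# Pull the first single-quoted path out of the handler text.
match = re.search(r"'(/v/[^']+)'", onmousedown)
if match:
    print(match.group(1))  # /v/abc123
```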

Now my question is:

How can I get all of the links from the داغ ترین ها (hottest) section, which has more than 1000 videos? At the moment I only get around 60.

1 Answer:

Answer 0 (score: 1)

I did this with the following code:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class aparat_hotnewsItem(scrapy.Item):

    videourl = scrapy.Field()


class aparat_hotnewsSpider(CrawlSpider):
    name = 'aparat_hotnews'
    allowed_domains = ['www.aparat.com']
    start_urls = ['http://www.aparat.com/']

    # XPath for selecting links to follow
    xp = 'your xpath'

    rules = (
        Rule(LinkExtractor(restrict_xpaths=xp), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = aparat_hotnewsItem()
        item['videourl'] = response.xpath('your xpath').extract()
        yield item
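A CrawlSpider can only follow links that actually appear in the pages it visits, so if each listing page renders only ~60 videos, the spider still needs to reach the paginated listing pages. A minimal sketch of generating start_urls for that, assuming a hypothetical `/page/N` URL scheme (verify the site's real pagination pattern in a browser before relying on it):

```python
# Sketch: build one start URL per listing page so the spider can reach
# more than the first ~60 videos. The "/page/{n}" suffix is an ASSUMPTION
# about the site's URL scheme, not a confirmed aparat.com endpoint.
def paginated_urls(base, pages):
    """Return 1-based paginated listing URLs."""
    return [f"{base}/page/{n}" for n in range(1, pages + 1)]

urls = paginated_urls("http://www.aparat.com/topvideos", 25)
```

The resulting list can be assigned to `start_urls`, letting the link-extraction rules above run against every listing page instead of only the first one.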