Scrapy: scraping multiple items from 2 levels

Time: 2018-09-05 15:28:07

Tags: python scrapy

I'm quite new to Scrapy, and I'm working on this as a personal exercise. What I want to do is scrape the IMDB Top Rated Movies chart to get the rank, title, year, and plot of each movie. I managed to follow the links and crawl the movie pages, but I can't find a way to get the rank of each movie.

Currently my code looks like this:

import scrapy
from tutorial.items import IMDB_dict # We need this so that Python knows about the item object

class MppaddressesSpider(scrapy.Spider):
    name = "mppaddresses" # The name of this spider

    # The allowed domain and the URLs where the spider should start crawling:
    allowed_domains = ["imdb.com"]
    start_urls = ['https://www.imdb.com/chart/top/']

    def parse(self, response):
        # The main method of the spider. It scrapes the URL(s) specified in the
        # 'start_url' argument above. The content of the scraped URL is passed on
        # as the 'response' object.
        for rank in response.xpath(".//tbody[@class='lister-list']/tr/td[@class='titleColumn']/text()").extract():
            rank = " ".join(rank.split())
            item = IMDB_dict()
            item['rank'] = rank

        for url in response.xpath(".//tbody[@class='lister-list']/tr/td[@class='titleColumn']/a/@href").extract():
            # This loops through all the URLs found inside an element of class 'mppcell'

            # Constructs an absolute URL by combining the response’s URL with a possible relative URL:
            full_url = response.urljoin(url)
            print("FOOOOOOOOOnd URL: "+full_url)

            # The following tells Scrapy to scrape the URL in the 'full_url' variable
            # and calls the 'get_details()' method below with the content of this
            # URL:
            #yield {'namyy' : response.xpath(".//tbody[@class='lister-list']/tr/td[@class='titleColumn']/text()").extract().strip("\t\r\n '\""),}
            yield scrapy.Request(full_url, callback=self.get_details)

    def get_details(self, response):
        # This method is called on by the 'parse' method above. It scrapes the URLs
        # that have been extracted in the previous step.

        #item = OntariomppsItem() # Creating a new Item object
        # Store scraped data into that item:
        item = IMDB_dict()
        item['name'] = response.xpath(".//div[@class='title_bar_wrapper']/div[@class='titleBar']/div[@class='title_wrapper']/h1/text()").extract_first().strip("\t\r\n '\"")
        item['phone'] = response.xpath(".//div[@class='titleBar']/div[@class='title_wrapper']/h1/span[@id='titleYear']/a/text()").extract_first().strip("\t\r\n '\"")
        item['email'] = response.xpath(".//div[@class='plot_summary ']/div[@class='summary_text']/text()").extract_first().strip("\t\r\n '\"")
        # Return that item to the main spider method:
        yield item

Also, my items.py has:

import scrapy

class IMDB_dict(scrapy.Item):
    # define the fields for your item here like:
    rank = scrapy.Field()
    name = scrapy.Field()
    phone = scrapy.Field()
    email = scrapy.Field() 

Main question: how do I get the rank associated with each title?

One last question (if possible): I can access URLs when they are relative (using urljoin), but I couldn't find a way to do it when they are absolute.

Thanks a lot for your help.

Best

1 answer:

Answer 0 (score: 0):

You need to send the rank to the get_details callback using meta:

def parse(self, response):

    for movie in response.xpath(".//tbody[@class='lister-list']/tr/td[@class='titleColumn']"):

        movie_rank = movie.xpath('./text()').re_first(r'(\d+)')

        movie_url = movie.xpath('./a/@href').extract_first()
        movie_full_url = response.urljoin(movie_url)
        print("FOOOOOOOOOnd URL: " + movie_url)

        yield scrapy.Request(movie_full_url, callback=self.get_details, meta={"rank": movie_rank})

def get_details(self, response):

    item = IMDB_dict()        
    item['rank'] = response.meta["rank"]
    item['name'] = response.xpath(".//div[@class='title_bar_wrapper']/div[@class='titleBar']/div[@class='title_wrapper']/h1/text()").extract_first().strip("\t\r\n '\"")
    item['phone'] = response.xpath(".//div[@class='titleBar']/div[@class='title_wrapper']/h1/span[@id='titleYear']/a/text()").extract_first().strip("\t\r\n '\"")
    item['email'] = response.xpath(".//div[@class='plot_summary ']/div[@class='summary_text']/text()").extract_first().strip("\t\r\n '\"")
    # Return that item to the main spider method:
    yield item
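Once the rank travels with the request, the spider can be run and exported as usual, e.g. (the output filename here is just an example):

scrapy crawl mppaddresses -o top_movies.json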

UPDATE: If you check the logs, you will find this error:

AttributeError: 'NoneType' object has no attribute 'strip'

Sometimes .extract_first() returns None, and you can't call strip() on None. I suggest you use Scrapy Item Loaders.
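A minimal, untested sketch of that approach, reusing IMDB_dict from the question (the MovieLoader name and the shortened XPaths are my assumptions, not the question's originals):

from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, TakeFirst

class MovieLoader(ItemLoader):
    # Every extracted string gets the same cleanup the question applied by hand.
    # If an XPath matches nothing, the field is simply left unset, so there is
    # no None value to call .strip() on.
    default_input_processor = MapCompose(lambda v: v.strip("\t\r\n '\""))
    default_output_processor = TakeFirst()

def get_details(self, response):
    loader = MovieLoader(item=IMDB_dict(), response=response)
    loader.add_value('rank', response.meta["rank"])
    loader.add_xpath('name', ".//div[@class='title_wrapper']/h1/text()")
    loader.add_xpath('phone', ".//span[@id='titleYear']/a/text()")
    loader.add_xpath('email', ".//div[@class='summary_text']/text()")
    yield loader.load_item()

The processor chain replaces every .extract_first().strip(...) call, so a movie page with a missing plot or year produces an item with that field absent instead of crashing the callback.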