Scrapy: collecting data in multiple steps

Date: 2015-10-10 23:30:13

Tags: python web-scraping scrapy scrapy-spider

I'm trying to scrape data from a football website, but I'm running into some difficulties. I have two kinds of links:

  • 1) websitefake.com/player/p1234
  • 2) websitefake.com/player/p1234/statistics

So the bot should log in and then start scraping each link. Here is my attempt:

    import re

    from scrapy import FormRequest, Request
    from scrapy.spiders import CrawlSpider

    from website.players_id import players_id  # list with all player ids, like p40239
    from website.items import Player  # assumption: the Player item lives in the project's items module


    class fanta(CrawlSpider):

        name = 'bot2'
        login_page = "https://loginpage.com/"

        #### HELPER ####
        prova = "https://websitefake.com/player/p40239"
        # This is the part that generates the 600 player profile URLs
        start_urls = [prova.replace("p40239", i) for i in players_id]

        def start_requests(self):  # LOGIN
            return [FormRequest(
                self.login_page,
                formdata={'name': 'aaa', 'pass': 'aaa'},
                callback=self.logged_in)]

        def logged_in(self, response):
            if "Attenzione" in response.body:  # login check
                self.log("Could not log in")
            else:
                self.log("Logged in")  # if logged in, start scraping
                for url in self.start_urls:
                    yield Request(url, callback=self.parse)

        # Scrape the data from the https://websitefake.com/player/p1234 page
        def parse(self, response):
            name = response.css("response name::text").extract()
            surname = response.css("response surname::text").extract()
            team_name = response.css("response team_name::text").extract()
            role = response.css("response role_name::text").extract()

            # Add /statistics after the "p1234", creating the url for parse_stats
            p = re.findall(r"p\d+", response.url)
            new_string = p[0] + "/statistics"
            url_replaced = re.sub(r"p\d+", new_string, response.url)

            # Request https://websitefake.com/player/p1234/statistics, passing
            # the scraped fields through meta so parse_stats can use them
            r = Request(url_replaced, callback=self.parse_stats, encoding="utf-8")
            r.meta['name'] = name
            r.meta['surname'] = surname
            r.meta['team_name'] = team_name
            r.meta['role'] = role
            yield r

        def parse_stats(self, response):
            player = Player()
            stats = response.xpath("/response/Stat").extract()  # number of Stat tags
            for s in range(1, len(stats) + 1):
                time = response.xpath("/response/Stat[{}]/timestamp/text()".format(s)).extract()
                player['name'] = response.meta['name']
                player['surname'] = response.meta['surname']
                player['team_name'] = response.meta['team_name']
                player['role'] = response.meta['role']
                #### DATA FROM THE STATISTICS PAGE ####
                yield player

The problem is that when I run the spider, it keeps crawling with the parse method (the player pages) and never follows the parse_stats callback, so what I get is:

  • 200 Crawled websitefake.com/player/p1234
  • 200 Crawled websitefake.com/player/p1111
  • 200 Crawled websitefake.com/player/p2222

instead of this:

  • 200 Crawled websitefake.com/player/p1234
  • 200 Crawled websitefake.com/player/p1234/statistics
  • 200 Crawled websitefake.com/player/p1111
  • 200 Crawled websitefake.com/player/p1111/statistics

I've tried everything that came to mind; maybe I'm misunderstanding how yield works, I don't know :S Thanks in advance for any answers!

1 Answer:

Answer 0 (score: 1)

You can't use CrawlSpider and a parse callback at the same time: CrawlSpider uses the parse method internally to implement its rule logic, so overriding it breaks the crawl. Since you aren't using any rules, you probably want a plain Spider instead.

See the warning in the documentation.
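
For illustration, a minimal sketch of the same spider rewritten as a plain Spider, reusing the names, URLs, and credentials from the question (everything else, including both callbacks, stays exactly as you wrote it):

    from scrapy import FormRequest, Request, Spider

    from website.players_id import players_id


    class fanta(Spider):  # plain Spider: no rules, so overriding parse is safe
        name = 'bot2'
        login_page = "https://loginpage.com/"
        prova = "https://websitefake.com/player/p40239"
        start_urls = [prova.replace("p40239", i) for i in players_id]

        def start_requests(self):  # LOGIN
            return [FormRequest(
                self.login_page,
                formdata={'name': 'aaa', 'pass': 'aaa'},
                callback=self.logged_in)]

        def logged_in(self, response):
            if "Attenzione" in response.body:  # login check
                self.log("Could not log in")
            else:
                for url in self.start_urls:
                    yield Request(url, callback=self.parse)

        # parse and parse_stats stay exactly as in the question; with a plain
        # Spider, the parse_stats callback will now be followed as expected.

Alternatively, if you want to keep CrawlSpider (say, to add rules later), another common fix is to rename the first callback (for example, to parse_player) and pass callback=self.parse_player, so it no longer clashes with the parse method that CrawlSpider defines internally.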