Problem using Scrapy to scrape an old Yahoo Group

Asked: 2011-02-04 02:13:47

Tags: python screen-scraping yahoo scrapy

I am new to web scraping and have just started experimenting with Scrapy, a scraping framework written in Python. My goal is to scrape an old Yahoo Group, because Yahoo provides no API or any other way to retrieve the message archive. The group is set up so that you must log in before you can view the archive.

The steps I think I need to accomplish are:

  1. Log in to Yahoo
  2. Visit the URL of the first message and scrape it
  3. Repeat step 2 for the next message, and so on

I have started a crude Scrapy spider to do the above; this is what I have so far. All I want to see is that the login works and that I can retrieve the first message. Once I get that working I will do the rest:

    from scrapy.spider import BaseSpider
    from scrapy.http import Request, FormRequest

    # LOGIN_URL, LOGIN, PASSWORD and MSG_URL are placeholder constants
    # defined elsewhere in the module (credentials and the message URL template).
    class Sg101Spider(BaseSpider):
        name = "sg101"
        msg_id = 1              # current message to retrieve
        max_msg_id = 21399      # last message to retrieve
    
        def start_requests(self):
            return [FormRequest(LOGIN_URL,
                formdata={'login': LOGIN, 'passwd': PASSWORD},
                callback=self.logged_in)]
    
        def logged_in(self, response):
            if response.url == 'http://my.yahoo.com':
                self.log("Successfully logged in. Now requesting 1st message.")
                return Request(MSG_URL % self.msg_id, callback=self.parse_msg,
                        errback=self.error)
            else:
                self.log("Login failed.")
    
        def parse_msg(self, response):
            self.log("Got message!")
            print response.body
    
        def error(self, failure):
            self.log("I haz an error")
    

When I run the spider, I can see it log in and issue the request for the first message. However, all I see in Scrapy's debug output is three redirects that eventually land back on the very URL I requested in the first place. Scrapy never calls my parse_msg() callback, and the crawl stops. Here is a snippet of the Scrapy output:

    2011-02-03 19:50:10-0600 [sg101] INFO: Spider opened
    2011-02-03 19:50:10-0600 [sg101] DEBUG: Redirecting (302) to <GET https://login.yahoo.com/config/verify?.done=http%3a//my.yahoo.com> from <POST https://login.yahoo.com/config/login>
    2011-02-03 19:50:10-0600 [sg101] DEBUG: Redirecting (meta refresh) to <GET http://my.yahoo.com> from <GET https://login.yahoo.com/config/verify?.done=http%3a//my.yahoo.com>
    2011-02-03 19:50:12-0600 [sg101] DEBUG: Crawled (200) <GET http://my.yahoo.com> (referer: None)
    2011-02-03 19:50:12-0600 [sg101] DEBUG: Successfully logged in. Now requesting 1st message.
    2011-02-03 19:50:12-0600 [sg101] DEBUG: Redirecting (302) to <GET http://launch.groups.yahoo.com/group/MyYahooGroup/auth?done=http%3A%2F%2Flaunch.groups.yahoo.com%2Fgroup%2FMyYahooGroup%2Fmessage%2F1> from <GET http://launch.groups.yahoo.com/group/MyYahooGroup/message/1>
    2011-02-03 19:50:12-0600 [sg101] DEBUG: Redirecting (302) to <GET http://launch.groups.yahoo.com/group/MyYahooGroup/auth?check=G&done=http%3A%2F%2Flaunch%2Egroups%2Eyahoo%2Ecom%2Fgroup%2FMyYahooGroup%2Fmessage%2F1> from <GET http://launch.groups.yahoo.com/group/MyYahooGroup/auth?done=http%3A%2F%2Flaunch.groups.yahoo.com%2Fgroup%2FMyYahooGroup%2Fmessage%2F1>
    2011-02-03 19:50:13-0600 [sg101] DEBUG: Redirecting (302) to <GET http://launch.groups.yahoo.com/group/MyYahooGroup/message/1> from <GET http://launch.groups.yahoo.com/group/MyYahooGroup/auth?check=G&done=http%3A%2F%2Flaunch%2Egroups%2Eyahoo%2Ecom%2Fgroup%2FMyYahooGroup%2Fmessage%2F1>
    2011-02-03 19:50:13-0600 [sg101] INFO: Closing spider (finished)
    2011-02-03 19:50:13-0600 [sg101] INFO: Spider closed (finished)
    

I can't figure this out. It looks like Yahoo is redirecting the spider (perhaps for an authentication check?), and it appears to end up back at the URL I wanted to visit in the first place. Yet Scrapy never calls my callback, so I never get a chance to scrape the data or continue the crawl.

Does anyone have any idea what is going on and/or how to debug this further? Thanks!

1 Answer:

Answer 0 (score: 5):

I believe Yahoo was redirecting for an authorization check and then finally redirecting me back to the page I actually wanted. However, Scrapy had already seen that request and stopped, because it did not want to get caught in a loop. The solution in my case was to add dont_filter=True to the Request constructor, which tells Scrapy not to filter out duplicate requests. That is fine here, because I know ahead of time exactly which URLs I want to crawl.

# note: log.INFO / log.CRITICAL assume 'from scrapy import log' at module level
def logged_in(self, response):
    if response.url == 'http://my.yahoo.com':
        self.log("Successfully logged in. Now requesting message page.",
                level=log.INFO)
        return Request(MSG_URL % self.msg_id, callback=self.parse_msg,
                errback=self.error, dont_filter=True)
    else:
        self.log("Login failed.", level=log.CRITICAL)