Integrating Selenium and Scrapy to click past an initial page, then save the cookies

Date: 2014-10-28 02:06:53

Tags: python selenium cookies web-scraping scrapy

I have been searching Stack Overflow for hours but still cannot find an answer that fits what I am doing. I want to use Selenium to click through the initial page, then hand the cookies over to Scrapy and scrape the database. So far I keep getting redirected back to the initial login page.

My approach of grabbing the cookies and putting them into the requests is based on this answer: scrapy authentication login with cookies (a minimal sketch of that cookie hand-off is included after my code below).

import time

import scrapy
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from selenium import webdriver


class HooversTest(scrapy.Spider):
    name = "hooversTest"
    # allowed_domains expects bare domain names, not full URLs
    allowed_domains = ["subscriber.hoovers.com"]
    login_page = "http://subscriber.hoovers.com/H/home/index.html"
    start_urls = ["http://subscriber.hoovers.com/H/company360/overview.html?companyId=99566395",
                  "http://subscriber.hoovers.com/H/company360/overview.html?companyId=10723000000000"]

    def login(self, response):
        # Re-request the login page, attaching the cookies collected by Selenium
        return Request(url=self.login_page,
                       cookies=self.get_cookies(), callback=self.after_login)

    def get_cookies(self):
        # Drive a real browser through the "Continue" click, then export its cookies
        self.driver = webdriver.Firefox()
        self.driver.get("http://www.mergentonline.com/Hoovers/continue.php?status=sucess")
        elem = self.driver.find_element_by_name("Continue")
        elem.click()
        time.sleep(15)
        cookies = self.driver.get_cookies()
        #reduce(lambda r, d: r.update(d) or r, cookies, {})
        self.driver.close()
        return cookies

    def parse(self, response):
        return Request(url="http://subscriber.hoovers.com/H/company360/overview.html?companyId=99566395",
                       cookies=self.get_cookies(), callback=self.after_login)

    def after_login(self, response):
        hxs = HtmlXPathSelector(response)
        print(hxs.select('//title').extract())
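For reference, here is a minimal sketch of the cookie hand-off described above, kept separate from the spider in question. It assumes the cookies Selenium collects are actually valid for the pages being scraped; the helper name selenium_cookies_to_dict and the spider name CookieHandoffSpider are made up for illustration. The idea is to flatten the list of dicts returned by driver.get_cookies() into the {name: value} mapping that scrapy.Request's cookies argument accepts, and to attach it to the very first requests via start_requests():

import time

import scrapy
from selenium import webdriver


def selenium_cookies_to_dict(selenium_cookies):
    # driver.get_cookies() returns a list of dicts such as
    # {'name': ..., 'value': ..., 'domain': ..., 'path': ..., ...};
    # keep only the name/value pairs that Scrapy needs
    return {c['name']: c['value'] for c in selenium_cookies}


class CookieHandoffSpider(scrapy.Spider):
    name = "cookie_handoff_sketch"
    start_urls = [
        "http://subscriber.hoovers.com/H/company360/overview.html?companyId=99566395",
    ]

    def start_requests(self):
        # Click through the interstitial page once, up front, with Selenium
        driver = webdriver.Firefox()
        driver.get("http://www.mergentonline.com/Hoovers/continue.php?status=sucess")
        driver.find_element_by_name("Continue").click()
        time.sleep(15)
        cookies = selenium_cookies_to_dict(driver.get_cookies())
        driver.quit()

        # Reuse the same cookie jar for every start URL
        for url in self.start_urls:
            yield scrapy.Request(url, cookies=cookies, callback=self.parse)

    def parse(self, response):
        self.log(response.xpath('//title').extract())

Collecting the cookies once in start_requests() also avoids launching a new Firefox window on every request, which the parse() method above would otherwise do by calling get_cookies() again.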

0 Answers:

No answers yet.