Passing a Selenium HTML string to Scrapy to add URLs to the list of URLs for Scrapy to scrape

Asked: 2014-05-13 13:09:06

Tags: python selenium web-scraping scrapy scrapy-spider

I'm very new to Python, Scrapy, and Selenium, so any help you can offer would be greatly appreciated.

I'd like to be able to take the HTML that Selenium obtains as the page source and process it into a Scrapy Response object. The main reason is to be able to add the URLs found in the Selenium webdriver's page source to the list of URLs that Scrapy will parse.
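For context, Scrapy's HtmlResponse class (in scrapy.http) can wrap raw HTML into a response object. A minimal sketch of that idea, written for this answer rather than taken from the post (the URL is just the placeholder from the question):

from scrapy.http import HtmlResponse
from scrapy.selector import Selector
from selenium import webdriver

# Render a page with Selenium, then wrap the raw HTML in an HtmlResponse
# so it can be queried like any other Scrapy response.
driver = webdriver.Firefox()
driver.get("https://www.abcdef.com/page/12345")  # placeholder URL from the question

selenium_response = HtmlResponse(
    url=driver.current_url,
    body=driver.page_source,
    encoding='utf-8',
)
driver.close()

# The wrapped response works with Selector just like a downloaded one.
links = Selector(selenium_response).xpath('//a/@href').extract()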

Again, any help would be much appreciated.

As a second question: does anyone know how to view the list of URLs that Scrapy has found and scraped?
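(On this second question, one common approach, not from the original post: Scrapy already logs every fetched URL at the DEBUG log level, and a spider can also keep its own record and dump it when the crawl finishes. A hypothetical sketch:)

class AB_Spider(CrawlSpider):
    # ... name / allowed_domains / start_urls as in the question ...

    def __init__(self, *args, **kwargs):
        super(AB_Spider, self).__init__(*args, **kwargs)
        self.seen_urls = []  # hypothetical bookkeeping attribute

    def parse_abcs(self, response):
        # record every page that actually got scraped
        self.seen_urls.append(response.url)
        # ... normal parsing ...

    def closed(self, reason):
        # Scrapy calls this once when the spider finishes
        for url in self.seen_urls:
            print(url)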

Thanks!

EDIT: Here is an example of what I'm trying to do. I can't figure out Part 5.

from scrapy.contrib.spiders import CrawlSpider  # scrapy.spiders in newer versions
from scrapy.selector import Selector
from selenium import webdriver

class AB_Spider(CrawlSpider):
    name = "ab_spider"
    allowed_domains = ["abcdef.com"]
    #start_urls = ["https://www.kickstarter.com/projects/597507018/pebble-e-paper-watch-for-iphone-and-android"
    #, "https://www.kickstarter.com/projects/801465716/03-leagues-under-the-sea-the-seaquestor-flyer-subm"]
    start_urls = ["https://www.abcdef.com/page/12345"]

    def parse_abcs(self, response):
        sel = Selector(response)
        URL = response.url

        #Part 1: check if a certain element is on the webpage
        last_chk = sel.xpath('//ul/li[@last_page="true"]')
        a_len = len(last_chk)
        hxs = sel  # fall back to the downloaded response if the element is already there

        #Part 2: if not, then get the page via the Selenium webdriver
        if a_len == 0:
            driver = webdriver.Firefox()
            driver.get(response.url)

            #Part 3: run a script to interact with the page until a certain element appears
            while a_len == 0:
                print("ELEMENT NOT FOUND, USING SELENIUM TO GET THE WHOLE PAGE")

                #scroll down one time
                driver.execute_script("window.scrollTo(0, 1000000000);")

                #get the page source and check if the last page is there
                selen_html = driver.page_source
                hxs = Selector(text=selen_html)
                last_chk = hxs.xpath('//ul/li[@last_page="true"]')
                a_len = len(last_chk)

            driver.close()

        #Part 4: extract the URLs from the Selenium page source
        #(the xpath must be absolute -- '//a/@href' -- or nothing is selected)
        all_URLS = hxs.xpath('//a/@href').extract()

        #Part 5: how do I add all_URLS to the URLs Scrapy is going to scrape?
1 Answer:

Answer (score: 1):

The method just needs to yield Request instances and supply a callback:

from scrapy.http import Request

class AB_Spider(CrawlSpider):
    ...

    def parse_abcs(self, response):
        ...

        all_URLS = hxs.xpath('//a/@href').extract()

        for url in all_URLS:
            yield Request(url, callback=self.parse_page)

    def parse_page(self, response):
        # Do the parsing here
        pass
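One caveat the answer doesn't mention: href attributes extracted this way are often relative, while Request expects absolute URLs. A sketch of the same loop with the URLs resolved first (urljoin comes from the Python 2 stdlib, matching the rest of the post):

from urlparse import urljoin  # urllib.parse.urljoin in Python 3
from scrapy.http import Request

class AB_Spider(CrawlSpider):
    ...

    def parse_abcs(self, response):
        ...

        for url in all_URLS:
            # resolve relative hrefs such as "/page/2" against the page URL
            yield Request(urljoin(response.url, url), callback=self.parse_page)

Note also that requests for URLs outside allowed_domains are filtered out by Scrapy's offsite middleware (with only a debug-level log message), which can make it look as though the yielded requests were never scheduled.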