Python 3.7 - PhantomJS - driver.get(URL) fails with "Window handle/name is invalid (closed?)"

Asked: 2019-03-28 04:19:53

Tags: web-scraping beautifulsoup phantomjs python-3.7

Scraping a website with two functions results in a driver.get error.

I have tried different variations of while and for loops to get this working, and now I get a driver.get error. The initial function works on its own, but when the two functions are run one after the other, this error appears.

import requests, sys, webbrowser, bs4, time
import urllib.request
import pandas as pd
from selenium import webdriver
driver = webdriver.PhantomJS(executable_path = 'C:\\PhantomJS\\bin\\phantomjs.exe')
jobtit = 'some+job'
location = 'some+city'
urlpag = ('https://www.indeed.com/jobs?q=' + jobtit + '&l=' + location + '%2C+CA')



def initial_scrape():
    data = []
    try:
        driver.get(urlpag)
        results = driver.find_elements_by_tag_name('h2')
        print('Finding the results for the first page of the search.')
        for result in results: # loop 2
            job_name = result.text
            link = result.find_element_by_tag_name('a')
            job_link = link.get_attribute('href')
            data.append({'Job' : job_name, 'link' : job_link})
            print('Appending the first page results to the data table.')
            if result == len(results):
                return
    except Exception:
        print('An error has occurred when trying to run this script.  Please see the attached error message and screenshot.')
        driver.save_screenshot('screenshot.png')
        driver.close()
    return data


def second_scrape():
    data = []
    try:
        #driver.get(urlpag)
        pages = driver.find_element_by_class_name('pagination')
        print('Variable nxt_pg is ' + str(nxt_pg))
        for page in pages:
            page_ = page.find_element_by_tag_name('a')
            page_link = page_.get_attribute('href')
            print('Taking a look at the different page links..')
            for page_link in range(1,pg_amount,1):
                driver.click(page_link)
                items = driver.find_elements_by_tag_name('h2')
                print('Going through each new page and getting the jobs for ya...')
                for item in items:
                    job_name = item.text
                    link = item.find_element_by_tag_name('a')
                    job_link = link.get_attribute('href')
                    data.append({'Job' : job_name, 'link' : job_link})
                    print('Appending the jobs to the data table....')
                if page_link == pg_amount:
                    print('Oh boy! pg_link == pg_amount...time to exit the loops')
                    return
    except Exception:
        print('An error has occurred when trying to run this script.  Please see the attached error message and screenshot.')
        driver.save_screenshot('screenshot.png')
        driver.close()
    return data

Expected:

Initial function

  1. Get the website from urlpag.
  2. Find the elements by tag name and loop through them, appending each one to a list.
  3. Once all elements are done, exit and return the list.

Second function

  1. While still on urlpag, find the elements by class name and get the links for the next pages to scrape.
  2. Go through each of the pages we want to scrape and append the elements to another table.
  3. Once the pg_amount limit is reached, exit and return the final list (a compact sketch of this flow follows the list).
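
A compact sketch of the flow these steps describe, assuming the driver is still open on urlpag; the function name and the pg_amount default are illustrative, and the selectors mirror the question's code:

def second_scrape_sketch(pg_amount=3):
    # Collect the pagination hrefs first, then navigate to each page.
    data = []
    pagination = driver.find_element_by_class_name('pagination')
    page_links = [a.get_attribute('href')
                  for a in pagination.find_elements_by_tag_name('a')]
    for page_link in page_links[:pg_amount]:
        driver.get(page_link)  # navigate by URL rather than clicking
        for item in driver.find_elements_by_tag_name('h2'):
            link = item.find_element_by_tag_name('a')
            data.append({'Job': item.text, 'link': link.get_attribute('href')})
    return data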

Actual:

Initial function

  1. Gets the website from urlpag.
  2. Finds the elements by tag name and loops through them, appending each one to a list.
  3. Once all elements are done, exits and returns the list.

Second function

  1. Finds the pagination class, prints the nxt_pg variable, and then throws the error below.
Traceback (most recent call last):
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Scripts\Indeedscraper\indeedscrape.py", line 23, in initial_scrape
    driver.get(urlpag)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 333, in get
    self.execute(Command.GET, {'url': url})
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchWindowException: Message: {"errorMessage":"Currently Window handle/name is invalid (closed?)"

1 Answer:

Answer 0 (score: 0):

For anyone hitting this error: I ended up switching to chromedriver and using it for my web scraping instead. It appears that using the PhantomJS driver sometimes returns this error.
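
For reference, a minimal sketch of that switch, assuming chromedriver has been downloaded to a local path (the executable path below is an example, not the asker's actual path):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # keep the browser headless, as PhantomJS was
driver = webdriver.Chrome(executable_path='C:\\chromedriver\\chromedriver.exe',
                          options=options)

driver.get(urlpag)  # the rest of the scraping code stays the same
results = driver.find_elements_by_tag_name('h2')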