Unable to get all links from multiple pages when the URL does not change

Asked: 2018-09-30 22:26:53

Tags: python selenium web-scraping beautifulsoup

I want to get all the links from 10 pages of results, but I am unable to click the second-page link. The URL is https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import bs4

from selenium import webdriver
import time

url = "https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All"
driver = webdriver.Chrome(r"C:\Users\Ritesh\PycharmProjects\BS\drivers\chromedriver.exe")
driver.get(url)

def getnames(driver):
    soup = bs4.BeautifulSoup(driver.page_source, 'lxml')
    sink = soup.find("div", {"class": "gsc-results gsc-webResult"})
    links = sink.find_all('a')
    for link in links:
        try:
            print(link['href'])
        except:
            print("")

while True:
    getnames(driver)
    time.sleep(5)
    nextpage = driver.find_element_by_link_text("2")
    nextpage.click()
    time.sleep(2)

Please help me solve this problem.

1 Answer:

Answer 0 (score: 0)

Since the page contains dynamic elements, you will need to use Selenium. The code below gets all the links from each page:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait 
from selenium.webdriver.support import expected_conditions as EC
import time

url = "https://10times.com/search?cx=partner-pub-8525015516580200%3Avtujn0s4zis&cof=FORid%3A10&ie=ISO-8859-1&q=%22Private+Equity%22&searchtype=All"
driver = webdriver.Chrome(r"C:\Users\Ritesh\PycharmProjects\BS\drivers\chromedriver.exe")
driver.get(url)

WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div""")))


pages_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div""")

all_urls = []

for page_index in range(len(pages_links)):

    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div""")))

    pages_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div[11]/div/div""")

    page_link = pages_links[page_index]
    print("getting links for page:", page_link.text)

    page_link.click()

    time.sleep(1)


    # wait until all result links are loaded
    WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, """//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]""")))

    first_link = driver.find_element_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[1]/div[1]/div[1]/div/a""")

    results_links = driver.find_elements_by_xpath("""//*[@id="___gcse_0"]/div/div/div/div[5]/div[2]/div/div/div[2]/div/div[1]/div[1]/div/a""")

    urls = [first_link.get_attribute("data-cturl")] + [l.get_attribute("data-cturl") for l in results_links]

    all_urls = all_urls + urls


driver.quit()

You can use this code as-is, or try combining it with the code you already have.
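If you prefer to keep the BeautifulSoup parsing from your original `getnames`, the per-page extraction can be tested on a static HTML snapshot before wiring it to `driver.page_source`. A minimal sketch (the `sample` markup below is illustrative, not the real page structure):

```python
import bs4

def extract_links(html):
    """Collect href values from anchors inside the results container."""
    soup = bs4.BeautifulSoup(html, "html.parser")
    # BeautifulSoup matches a single class even when the element has several
    sink = soup.find("div", {"class": "gsc-webResult"})
    if sink is None:
        return []
    # skip anchors without an href instead of catching KeyError
    return [a["href"] for a in sink.find_all("a") if a.has_attr("href")]

sample = """
<div class="gsc-results gsc-webResult">
  <a href="https://example.com/event1">Event 1</a>
  <a href="https://example.com/event2">Event 2</a>
  <a>no href</a>
</div>
"""
print(extract_links(sample))  # → ['https://example.com/event1', 'https://example.com/event2']
```

In the live script you would call `extract_links(driver.page_source)` after each page click instead of passing a sample string.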

Note that it does not pick up the ad links, since I assume you don't need those, right?
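Also, `get_attribute("data-cturl")` can return `None`, and the same result may appear on more than one page, so it may be worth cleaning `all_urls` before using it. A small helper (hypothetical name, not part of the answer's code):

```python
def dedupe_keep_order(urls):
    """Drop falsy entries (e.g. None) and duplicates, preserving first-seen order."""
    seen = set()
    out = []
    for u in urls:
        if u and u not in seen:
            seen.add(u)
            out.append(u)
    return out

print(dedupe_keep_order(["a", "b", "a", None, "c", "b"]))  # → ['a', 'b', 'c']
```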

Let me know if this helps.