I have a list of URLs that I need to scrape data from. The website refuses the connection when each URL is opened in a fresh driver instance, so I decided to open each URL in a new tab instead (the site allows that). This is the code I am using:
from selenium import webdriver
import time
from lxml import html
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
file = open('f:\\listofurls.txt', 'r')
for aa in file:
    aa = aa.strip()
    driver.execute_script("window.open('{}');".format(aa))
    soup = html.fromstring(driver.page_source)
    name = soup.xpath('//div[@class="name"]//text()')
    title = soup.xpath('//div[@class="title"]//text()')
    print(name, title)
    time.sleep(3)
The problem is that all the URLs are opened at once instead of one at a time.
Answer 0 (score: 1)
You can try the following code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from lxml import html
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
file = open('f:\\listofurls.txt', 'r')
for aa in file:
    aa = aa.strip()
    # Open a new tab
    driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't')
    # You can use (Keys.CONTROL + 't') on other OSs
    # Load a page
    driver.get(aa)
    # Make the tests...
    soup = html.fromstring(driver.page_source)
    name = soup.xpath('//div[@class="name"]//text()')
    title = soup.xpath('//div[@class="title"]//text()')
    print(name, title)
    time.sleep(3)
# Close the browser when done
driver.close()
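Note that sending COMMAND/CONTROL + 't' to the page body does not always open a new tab when Chrome is driven by chromedriver. A minimal alternative sketch, assuming the same f:\listofurls.txt file and the name/title XPaths from the question, keeps the asker's window.open call but explicitly switches the driver to the new tab's window handle before reading page_source, then closes the tab and switches back:
from selenium import webdriver
import time
from lxml import html

driver = webdriver.Chrome()
driver.get('https://www.google.com/')
main_handle = driver.current_window_handle  # remember the original tab

with open('f:\\listofurls.txt', 'r') as file:
    for aa in file:
        aa = aa.strip()
        # Open the URL in a new tab
        driver.execute_script("window.open('{}');".format(aa))
        # Focus the driver on the newly opened tab (last handle in the list)
        driver.switch_to.window(driver.window_handles[-1])
        time.sleep(3)  # crude wait for the page to load
        soup = html.fromstring(driver.page_source)
        name = soup.xpath('//div[@class="name"]//text()')
        title = soup.xpath('//div[@class="title"]//text()')
        print(name, title)
        # Close the tab and return to the original one
        driver.close()
        driver.switch_to.window(main_handle)

driver.quit()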
Answer 1 (score: 0)
I think you have to do the stripping before the loop, like this:
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
file = open('f:\\listofurls.txt', 'r')
# Strip the URLs before the loop
aa = [line.strip() for line in file]
for i in aa:
    driver.execute_script("window.open('{}');".format(i))
    soup = html.fromstring(driver.page_source)
    name = soup.xpath('//div[@class="name"]//text()')
    title = soup.xpath('//div[@class="title"]//text()')
    print(name, title)
    time.sleep(3)
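In both answers, keep in mind that xpath('...//text()') returns a list of text nodes rather than a single string. Here is a small self-contained sketch (the HTML snippet is made up for illustration; in the real script the markup comes from driver.page_source) showing how to join those nodes into readable strings:
from lxml import html

# Hypothetical markup standing in for driver.page_source
page_source = '<html><body><div class="name">Jane <b>Doe</b></div><div class="title">Engineer</div></body></html>'
soup = html.fromstring(page_source)
# Join the text nodes and drop whitespace-only pieces
name = ' '.join(t.strip() for t in soup.xpath('//div[@class="name"]//text()') if t.strip())
title = ' '.join(t.strip() for t in soup.xpath('//div[@class="title"]//text()') if t.strip())
print(name, title)  # Jane Doe Engineer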