I am using the following code to get all the <script>...</script>
content from a web page (see the URL in the code):
import urllib2
from bs4 import BeautifulSoup

url = "http://racing4everyone.eu/2015/10/25/formula-e-201516formula-e-201516-round01-china-race/"
page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read(), "html.parser")
script = soup.find_all("script")
print script  # just to check the output of script
However, BeautifulSoup searches the page's source code (Ctrl+U in Chrome), whereas I want it to search the page's element code (Ctrl+Shift+I in Chrome), because the code I am actually interested in is in the element code, not in the source code.
Answer 0 (score: 5)
The first thing to understand is that neither BeautifulSoup nor urllib2 is a browser. urllib2 only fetches/downloads the initial "static" page - it cannot execute JavaScript the way a real browser does. That is why you will always get the "View Page Source" content.

To solve your problem, fire up a real browser through selenium, wait for the page to load, grab the .page_source and pass it to BeautifulSoup for parsing:
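The difference can be illustrated with a minimal sketch: parse a hypothetical static HTML snippet (the kind of markup urllib2 would download, with a made-up example.com URL) and note that the iframe tag itself is present, but none of the scripts the embedded player would inject at runtime are:

```python
from bs4 import BeautifulSoup

# Hypothetical "static" HTML, as urllib2 would see it: the <iframe>
# tag exists, but the scripts injected by JavaScript at runtime do not.
static_html = """
<html><body>
<iframe src="https://example.com/video-player"></iframe>
</body></html>
"""

soup = BeautifulSoup(static_html, "html.parser")
print(soup.find_all("script"))   # [] - no scripts in the static source
print(soup.find("iframe")["src"])  # the iframe URL is all you get
```

A real browser would load that iframe URL, run its JavaScript, and only then would the script elements you are after exist in the live DOM.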
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("http://racing4everyone.eu/2015/10/25/formula-e-201516formula-e-201516-round01-china-race/")
# wait for the page to load
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".fluid-width-video-wrapper")))
# get the page source
page_source = driver.page_source
driver.close()
# parse the HTML
soup = BeautifulSoup(page_source, "html.parser")
script = soup.find_all("script")
print(script)
That is the general approach, but your case is slightly different - there is an iframe element containing the video player. If you want to access the script elements inside the iframe, you need to switch to it and then get the .page_source:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("http://racing4everyone.eu/2015/10/25/formula-e-201516formula-e-201516-round01-china-race/")
# wait for the page to load, switch to iframe
wait = WebDriverWait(driver, 10)
frame = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "iframe[src*=video]")))
driver.switch_to.frame(frame)
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, ".controls")))
# get the page source
page_source = driver.page_source
driver.close()
# parse the HTML
soup = BeautifulSoup(page_source, "html.parser")
script = soup.find_all("script")
print(script)