Selenium Python - get a list of all loaded URLs (images, scripts, stylesheets, etc.)

Asked: 2018-06-04 10:58:59

Tags: python selenium selenium-webdriver selenium-chromedriver

When Google Chrome loads a web page via Selenium, it may download additional files the page needs, such as images referenced by <img src="example.com/a.png"> tags and scripts referenced by <script src="example.com/a.js"> tags, as well as CSS stylesheets.

How can I get a list of all the URLs the browser downloaded while loading the page? (Programmatically, using Selenium and chromedriver in Python.) In other words, the list of files shown in the "Network" tab of Chrome's developer tools (which displays the downloaded files).

Example code using Selenium with chromedriver:

from selenium import webdriver
options = webdriver.ChromeOptions()
options.binary_location = "/usr/bin/x-www-browser"
driver = webdriver.Chrome("./chromedriver", chrome_options=options)
# Load some page
driver.get("https://example.com")
# Now, how do I see a list of downloaded URLs that took place when loading the page above?

2 Answers:

Answer 0 (score: 1)

You may want to look at BrowserMob Proxy. It can capture performance data for web applications (in HAR format), as well as manipulate browser behavior and traffic, for example whitelisting and blacklisting content, simulating network traffic and latency, and rewriting HTTP requests and responses.

As shown in the readthedocs documentation, usage is straightforward and integrates nicely with the Selenium webdriver API. You can read more about BrowserMob Proxy here.

from browsermobproxy import Server
server = Server("path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

from selenium import webdriver
profile  = webdriver.FirefoxProfile()
profile.set_proxy(proxy.selenium_proxy())
driver = webdriver.Firefox(firefox_profile=profile)


proxy.new_har("google")
driver.get("http://www.google.co.uk")
proxy.har # returns a HAR JSON blob

server.stop()
driver.quit()
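For reference, the object returned by proxy.har above is a plain Python dict following the HAR format, with the recorded requests under log.entries. A small helper (hypothetical, not part of the answer) can pull the requested URLs out of such a dict; the sample HAR below is hand-built to illustrate the structure:

```python
def har_urls(har):
    """Return the URL of every request recorded in a HAR dict."""
    return [entry["request"]["url"]
            for entry in har.get("log", {}).get("entries", [])]

# Minimal hand-built HAR-shaped dict, mimicking what proxy.har returns:
sample_har = {
    "log": {
        "entries": [
            {"request": {"url": "https://example.com/"}},
            {"request": {"url": "https://example.com/a.png"}},
        ]
    }
}

print(har_urls(sample_har))
# → ['https://example.com/', 'https://example.com/a.png']
```

In a live session you would pass proxy.har to har_urls() after driver.get() completes.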

Answer 1 (score: 0)

Following up on @GPT14's suggestion in his answer, I wrote a small script that does exactly what I wanted and prints a list of the URLs loaded by a given page.

This uses BrowserMob Proxy. Many thanks to @GPT14 for suggesting it - it fits our purpose perfectly. I have changed the code from his answer and adapted it to the Google Chrome webdriver instead of Firefox. I have also extended the script so that it walks through the HAR JSON output and lists all requested URLs. Remember to adjust the options below to your needs.

from browsermobproxy import Server
from selenium import webdriver

# Purpose of this script: List all resources (URLs) that
# Chrome downloads when visiting some page.

### OPTIONS ###
url = "https://example.com"
chromedriver_location = "./chromedriver" # Path to the chromedriver binary
browsermobproxy_location = "/opt/browsermob-proxy-2.1.4/bin/browsermob-proxy" # location of the browsermob-proxy binary file (that starts a server)
chrome_location = "/usr/bin/x-www-browser"
###############

# Start browsermob proxy
server = Server(browsermobproxy_location)
server.start()
proxy = server.create_proxy()

# Setup Chrome webdriver - note: does not seem to work with headless On
options = webdriver.ChromeOptions()
options.binary_location = chrome_location
# Setup proxy to point to our browsermob so that it can track requests
options.add_argument('--proxy-server=%s' % proxy.proxy)
driver = webdriver.Chrome(chromedriver_location, chrome_options=options)

# Now load some page
proxy.new_har("Example")
driver.get(url)

# Print all URLs that were requested
entries = proxy.har['log']['entries']
for entry in entries:
    if 'request' in entry:
        print(entry['request']['url'])

server.stop()
driver.quit()
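Since the question specifically asks about images, scripts, and stylesheets, the HAR entries can also be grouped by resource type. HAR response entries carry a content.mimeType field; the helper below (a sketch, not from the original answer, using hand-built sample entries) groups requested URLs by that field:

```python
from collections import defaultdict

def group_by_mime(entries):
    """Group HAR entries' request URLs by the response content MIME type."""
    groups = defaultdict(list)
    for entry in entries:
        mime = entry.get("response", {}).get("content", {}).get("mimeType", "unknown")
        groups[mime].append(entry["request"]["url"])
    return dict(groups)

# Hand-built sample entries mimicking proxy.har['log']['entries']:
sample_entries = [
    {"request": {"url": "https://example.com/a.png"},
     "response": {"content": {"mimeType": "image/png"}}},
    {"request": {"url": "https://example.com/a.js"},
     "response": {"content": {"mimeType": "application/javascript"}}},
]

for mime, urls in group_by_mime(sample_entries).items():
    print(mime, urls)
```

In the script above you would call group_by_mime(proxy.har['log']['entries']) right before server.stop().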