Unable to find element via XPath

Asked: 2017-01-08 20:00:29

Tags: python selenium xpath web-scraping

I have the following HTML:

<a class="btn btn-small downloadlink" rel="nofollow" data-
toggle="tooltip" data-format="ico" data-icon-id="1715795" 
href="/icons/1715795/download/ico" data-original-title="Download this 
icon in ICO format for use in Windows."><i class="download-icon"></i><b>
ICO</b></a>

<a class="btn btn-small downloadlink" rel="nofollow" data-
toggle="tooltip" data-format="icns" data-icon-id="1715795" 
href="/icons/1715795/download/icns" data-original-title="Download this 
icon in ICNS format for use in Apple OS X."><i class="download-icon"></i><b>
ICNS</b></a>

(from here: https://www.iconfinder.com/icons/1715795/earth_planet_space_icon#size=128)

Using Selenium, I want to select the element that has the attribute:

data-format="icns"

I have tried something like:

driver.find_element_by_xpath('//*[@data-format="icns"]')

But it gives the following error message:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@data-format=icns]"}

Question: how can I select the second element?

I know I could copy the XPath from the browser's inspector, but that would leave me with a very fragile scraping script: even a small change in the page layout could mean my XPath expression no longer works.

Thanks in advance!

2 Answers:

Answer 0 (score: 1)

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import requests
import lxml.html

url = "https://www.iconfinder.com/icons/1715795/earth_planet_space_icon#size=128"

# if you use selenium, comment out this line
resp = requests.get(url)

#you can replace this line if you use selenium
#source_code = browser.page_source

source_code = resp.text

root = lxml.html.fromstring(source_code)

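# collect the unique icon ids from every element with data-format="icns"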
id_icon = set(root.xpath('//*[@data-format="icns"]//@data-icon-id'))

# only needed if you want to download all of the icons

for id in id_icon:
    url = 'https://www.iconfinder.com/icons/{0}/check-download/icns'.format(id)

    local_filename = '{0}.icns'.format(id)
    resp = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=1024): 
            if chunk:
                f.write(chunk)

    print "downloaded  {0}".format(local_filename)

Answer 1 (score: 0)

If you want to scrape a website and download some objects from it, I suggest using Scrapy - it is better/faster/more reliable than Selenium for this.

For example:

$ scrapy shell
2017-01-08 23:36:58 [scrapy] INFO: Scrapy 1.2.1 started (bot: scrapybot)
2017-01-08 23:36:58 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'}
.... debug info ....
2017-01-08 23:36:58 [scrapy] INFO: Enabled item pipelines:
[]
2017-01-08 23:36:58 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x10cfe7cc0>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x10cfe7eb8>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

# fetch data and prepare all helpers to work
>>> fetch('https://www.iconfinder.com/icons/1715795/earth_planet_space_icon#')

# now the object `response` contains the result of our request
>>> response
<200 https://www.iconfinder.com/icons/1715795/earth_planet_space_icon>

# let's check links:
>>> response.xpath('//*[@data-format="icns"]')
[<Selector xpath='//*[@data-format="icns"]' data='<a class="btn btn-small downloadlink" re'>, <Selector xpath='//*[@data-format="icns"]' data='<a class="btn btn-small downloadlink" re'>, <Selector xpath='//*[@data-format="icns"]' data='<a class="btn btn-small downloadlink" re'>, <Selector xpath='//*[@data-format="icns"]' data='<a class="btn btn-small downloadlink" re'>, ...... ]

# extract the first of them
>>> link = response.xpath('//*[@data-format="icns"]')[0]
>>> link.extract()
'<a class="btn btn-small downloadlink" rel="nofollow" title="Download this icon in ICNS format for use in Apple OS X." data-toggle="tooltip" data-format="icns" data-icon-id="1715795
" href="/icons/1715795/download/icns"><i class="download-icon"></i><b>\n    ICNS</b></a>'

# URL of a link
>>> link.select('@href').extract_first()
'/icons/1715795/download/icns'

# add host and other params to form a full URL
>>> response.urljoin(link.select('@href').extract_first())
'https://www.iconfinder.com/icons/1715795/download/icns'
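
From there, a possible next step inside the shell (a sketch only; the output filename is made up for the example) is to fetch that full URL and write the response body to disk:

# fetch the download URL itself; `response` now holds the downloaded file
>>> fetch(response.urljoin(link.select('@href').extract_first()))

# write the raw bytes to a local file
>>> with open('1715795.icns', 'wb') as f:
...     f.write(response.body)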

In fact, Scrapy can do much more. For example, it can recursively find and download all the links.
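
As an illustration of that, a minimal spider along these lines could collect every ICNS download link on the page and save the files. This is only a sketch: the spider name, the output filenames and the way the icon id is pulled out of the URL are all made up for the example.

import scrapy


class IconSpider(scrapy.Spider):
    name = 'icons'  # hypothetical spider name
    start_urls = ['https://www.iconfinder.com/icons/1715795/earth_planet_space_icon']

    def parse(self, response):
        # follow every ICNS download button found on the page
        for href in response.xpath('//a[@data-format="icns"]/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.save_icon)

    def save_icon(self, response):
        # derive a filename from the icon id in the URL,
        # e.g. https://www.iconfinder.com/icons/1715795/download/icns -> 1715795.icns
        icon_id = response.url.split('/icons/')[1].split('/')[0]
        with open('{0}.icns'.format(icon_id), 'wb') as f:
            f.write(response.body)
        self.logger.info('downloaded %s.icns', icon_id)

Saved to a file, a spider like this can be run directly with scrapy runspider, without setting up a full project.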