BeautifulSoup: get all img elements inside a div with a specific class

Date: 2019-03-29 08:13:31

Tags: python web-scraping beautifulsoup

I am trying to get the links from the image-file attribute (the relative links, as-is) of every img tag under the div with id previewImages; I do not need the src attribute.

Here is the sample HTML:

<div id="previewImages">
  <div class="thumb"> <a><img src="https://example.com/s/15.jpg" image-file="/image/15.jpg" /></a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/2.jpg" image-file="/image/2.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/0.jpg" image-file="/image/0.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/3.jpg" image-file="/image/3.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/4.jpg" image-file="/image/4.jpg" /> </a> </div>
</div>

I tried the following, but it only gives me the first link, not all of them:

import sys
import urllib2
from bs4 import BeautifulSoup

quote_page = sys.argv[1] # this should be the first argument on the command line
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')

images_box = soup.find('div', attrs={'id': 'previewImages'})
if images_box.find('img'):
    imagesurl = images_box.find('img').get('image-file')
print imagesurl

How can I get all the links from the image-file attribute of every img tag inside the previewImages div?

4 answers:

Answer 0 (score: 1)

Use .findAll.

Example:

from bs4 import BeautifulSoup

html = """<div id="previewImages">
  <div class="thumb"> <a><img src="https://example.com/s/15.jpg" image-file="/image/15.jpg" /></a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/2.jpg" image-file="/image/2.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/0.jpg" image-file="/image/0.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/3.jpg" image-file="/image/3.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/4.jpg" image-file="/image/4.jpg" /> </a> </div>
</div>"""

soup = BeautifulSoup(html, "html.parser")
images_box = soup.find('div', attrs={'id': 'previewImages'})
for link in images_box.findAll("img"):
    print link.get('image-file')

Output:

/image/15.jpg
/image/2.jpg
/image/0.jpg
/image/3.jpg
/image/4.jpg

Answer 1 (score: 1)

I would pass the id together with an attribute selector to select:

from bs4 import BeautifulSoup as bs
html = '''
<div id="previewImages">
  <div class="thumb"> <a><img src="https://example.com/s/15.jpg" image-file="/image/15.jpg" /></a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/2.jpg" image-file="/image/2.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/0.jpg" image-file="/image/0.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/3.jpg" image-file="/image/3.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/4.jpg" image-file="/image/4.jpg" /> </a> </div>
</div>
'''
soup = bs(html, 'lxml')
links = [item['image-file'] for item in soup.select('#previewImages [image-file]')]
print(links)

Answer 2 (score: 0)

BeautifulSoup has the .find_all() method (see the docs). Here is how you would use it in your code:

import sys
import urllib2
from bs4 import BeautifulSoup

quote_page = sys.argv[1] # this should be the first argument on the command line
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')

images_box = soup.find('div', attrs={'id': 'previewImages'})
links = [img['image-file'] for img in images_box('img')]  # calling the Tag directly is a shortcut for find_all('img')

print links   # in Python 3: print(links)
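
For reference, a Python 3 sketch of the same approach (assuming the URL is still passed as the first command-line argument, and using urllib.request in place of urllib2):

import sys
import urllib.request
from bs4 import BeautifulSoup

quote_page = sys.argv[1]  # the URL passed on the command line
page = urllib.request.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')

images_box = soup.find('div', attrs={'id': 'previewImages'})
links = [img['image-file'] for img in images_box.find_all('img')]
print(links)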

Answer 3 (score: 0)

To add to the above, in case you want to do the same thing with lxml:

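A minimal sketch of that lxml approach, assuming the sample HTML from the question is parsed with fromstring and queried with XPath:

from lxml import html

# the sample HTML from the question
source = '''
<div id="previewImages">
  <div class="thumb"> <a><img src="https://example.com/s/15.jpg" image-file="/image/15.jpg" /></a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/2.jpg" image-file="/image/2.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/0.jpg" image-file="/image/0.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/3.jpg" image-file="/image/3.jpg" /> </a> </div>
  <div class="thumb"> <a><img src="https://example.com/s/4.jpg" image-file="/image/4.jpg" /> </a> </div>
</div>
'''

tree = html.fromstring(source)
# take the image-file attribute of every img inside the previewImages div
links = tree.xpath('//div[@id="previewImages"]//img/@image-file')
print(links)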

Output:

['/image/15.jpg', '/image/2.jpg', '/image/0.jpg', '/image/3.jpg', '/image/4.jpg']