I need to 1.) output and store a list of websites using the BeautifulSoup package. My results are too long, e.g.: Official site: www.vigeland.museum.no/en/vigeland-park. 2.) How do I convert the type 'bs4.element.Tag' into a list (basically)?
Ideally, I only need 'www.vigeland.museum.no' and so on.
import requests                  # library to handle HTTP requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.planetware.com/tourist-attractions-/oslo-n-osl-oslo.htm')
soup = bs(r.content, 'lxml')
print('request successful')

web_site = soup.find_all('div', class_="web")
for web in web_site:
    print(web.text)
type(web)                        # displays the type of the last tag when run in a notebook/REPL
### My RESULT ###
Official site: www.vigeland.museum.no/en/vigeland-park
Official site: www.khm.uio.no/english/visit-us/viking-ship-museum/
Official site: www.nasjonalmuseet.no/en/
Official site: http://munchmuseet.no/en
Official site: http://www.kongehuset.no/seksjon.html?tid=28697
Official site: www.khm.uio.no/english
Official site: http://frammuseum.no
Official site: www.skiforeningen.no/en/holmenkollen
Official site: https://www.oslo.kommune.no/politikk-og-administrasjon/radhuset/visit-the-oslo-city-hall/
Official site: www.akerbrygge.no/english
Official site: www.nhm.uio.no/english/
Official site: http://operaen.no/en/
bs4.element.Tag
Answer 0 (score: 0)
split() the text value, then strip() the whitespace, keeping only the last part of the string (the URL after 'Official site:').
import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.planetware.com/tourist-attractions-/oslo-n-osl-oslo.htm')
soup = bs(r.content, 'lxml')
print('request successful')

web_site = soup.find_all('div', class_="web")
# keep only the text after the 'Official site:' label, with surrounding whitespace stripped
websiteofficial = [web.text.split('Official site:')[1].strip() for web in web_site]
print(websiteofficial)
['www.vigeland.museum.no/en/vigeland-park', 'www.khm.uio.no/english/visit-us/viking-ship-museum/', 'www.nasjonalmuseet.no/en/', 'http://munchmuseet.no/en', 'http://www.kongehuset.no/seksjon.html?tid=28697', 'www.khm.uio.no/english', 'http://frammuseum.no', 'www.skiforeningen.no/en/holmenkollen', 'https://www.oslo.kommune.no/politikk-og-administrasjon/radhuset/visit-the-oslo-city-hall/', 'www.akerbrygge.no/english', 'www.nhm.uio.no/english/', 'http://operaen.no/en/']
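The question also mentions wanting just the hostname (e.g. 'www.vigeland.museum.no'). As a minimal sketch, not part of the original answer, the cleaned strings can be reduced further with urllib.parse from the standard library, assuming a scheme is prepended to entries that lack one (the variable names here are illustrative):

from urllib.parse import urlparse

hostnames = []
for url in websiteofficial:
    # urlparse only fills netloc when the URL has a scheme, so add one if it is missing
    if not url.startswith(('http://', 'https://')):
        url = 'http://' + url
    hostnames.append(urlparse(url).netloc)
print(hostnames)  # e.g. ['www.vigeland.museum.no', ...]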
Answer 1 (score: 0)
A neater and more efficient way is to use the class with a child combinator to select the child a tags. You then get exactly the links you want, with no string tidying needed.
import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.planetware.com/tourist-attractions-/oslo-n-osl-oslo.htm')
soup = bs(r.content, 'lxml')
# '.web > a' selects <a> tags that are direct children of the class="web" divs
links = [item['href'] for item in soup.select('.web > a')]
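As a defensive variant (not part of the original answer), it can help to check the HTTP status before parsing and to use .get('href') so any anchor without an href is skipped instead of raising a KeyError. A minimal sketch building on the answer's code:

import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.planetware.com/tourist-attractions-/oslo-n-osl-oslo.htm')
r.raise_for_status()  # raise requests.HTTPError if the request failed (4xx/5xx)
soup = bs(r.content, 'lxml')
# skip anchors that have no href attribute rather than failing
links = [a.get('href') for a in soup.select('.web > a') if a.get('href')]
print(links)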