I have been trying to scrape some data from this URL. However, I am unable to scrape the "See if you can identify the pest" section. There is a class named "collapsefaq-content" which BeautifulSoup cannot find. I want to scrape all of the tag data under this class.
Here is my code:
import urllib.request
import csv
import pandas as pd
from bs4 import BeautifulSoup
import html5lib
import lxml
page_url = 'http://www.agriculture.gov.au/pests-diseases-weeds/plant#identify-pests-diseases'
page = urllib.request.urlopen(page_url)
soup = BeautifulSoup(page, 'html.parser')
file_name = "alpit.csv"
main_url = []
see_if_you_can = []
see_if_you_can.append("Identify")
legal =[]
legal.append('Legal Stuff')
specimen =[]
specimen.append("Specimen")
insect_name = []
insect_name.append("Name of insect")
disease_name = []
disease_name.append("Name")
disease_list = []
disease_list.append("URL")
origin = []
origin.append('Origin')
for insectName in soup.find_all('li', attrs={'class': 'flex-item'}):
    if str(insectName.a.attrs['href']).startswith('/'):
        # go into the link and extract data
        main_url.append('http://www.agriculture.gov.au' +
                        insectName.a.attrs['href'])
        print(insectName.text.strip())  # disease name
        for name in insectName.find_all('img'):
            print('http://www.agriculture.gov.au' +
                  name.attrs['src'])  # disease link
            disease_list.append('http://www.agriculture.gov.au' +
                                name.attrs['src'])
for disease in main_url:
    # disease = 'http://www.agriculture.gov.au'+disease
    inner_page = urllib.request.urlopen(disease)
    soup_list = BeautifulSoup(inner_page, 'lxml')
    for detail in soup_list.find_all('strong'):
        if detail.text == 'Origin: ':
            origin.append(detail.next_sibling.strip())
            print(detail.next_sibling.strip())
    for name in soup_list.find_all('div', class_='pest-header-content'):
        print(name.h2.text)
        insect_name.append(name.h2.text)
    for textin in soup_list.find_all('div', class_="collapsefaq-content"):
        print("*******")
        print(textin.text)
# print('alpit')
# print(len(disease_list))
# print(len(origin))
df = pd.DataFrame([insect_name, disease_list, origin, see_if_you_can, legal, specimen])
df = df.transpose()
df.to_csv(file_name, index=False, header=None)
# with open('alpit.csv', 'w') as myfile:
#     wr = csv.writer(myfile)
#     for val in disease_list:
#         wr.writerow([val])
#     for val in origin:
#         wr.writerow([val])
Even the "*******" is never printed. Can anyone tell me what I am doing wrong here...?
Answer 0 (score: 2)
Here is how you can get the content of the section you are after. I suppose you can sort out the rest according to your requirements.
import requests
from bs4 import BeautifulSoup
URL = 'http://www.agriculture.gov.au/pests-diseases-weeds/plant/khapra-beetle#see-if-you-can-identify-the-pest'
res = requests.get(URL)
soup = BeautifulSoup(res.text, "lxml")
container = soup.select_one("#collapsefaq h3[title='expand section']")
print(container.get_text(strip=True))
Output:
See if you can identify the pest
You can access the rest of the content with:
container = soup.select_one("#collapsefaq h3[title='expand section']").find_next_sibling()
print(container.get_text(strip=True))
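If you want the body of every collapsible section rather than just one, the same heading-plus-next-sibling pattern generalizes. The sketch below is self-contained: it runs against a made-up HTML snippet that mimics the structure the answer relies on (an `#collapsefaq` container with `h3` headings followed by `collapsefaq-content` divs), since the live page's markup may differ, and the heading/body texts here are invented for illustration.

```python
from bs4 import BeautifulSoup

# Stand-in HTML mimicking the page structure used in the answer above;
# the actual agriculture.gov.au markup may differ.
html = """
<div id="collapsefaq">
  <h3 title="expand section">See if you can identify the pest</h3>
  <div class="collapsefaq-content"><p>Khapra beetle larvae are hairy.</p></div>
  <h3 title="expand section">Secure any suspect specimens</h3>
  <div class="collapsefaq-content"><p>Place the specimen in a jar.</p></div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Map each section heading to the text of the div that follows it.
sections = {}
for heading in soup.select("#collapsefaq h3[title='expand section']"):
    body = heading.find_next_sibling("div", class_="collapsefaq-content")
    if body is not None:
        sections[heading.get_text(strip=True)] = body.get_text(strip=True)

print(sections)
```

Filtering `find_next_sibling` by tag name and class skips any whitespace text nodes between the heading and its content div, which is safer than taking the bare next sibling.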