I need the title, address, phone number, and description from this page. This is what I have so far, and now I am stuck. Please help a web-scraping newbie.
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
from bs4 import BeautifulSoup as soup
import urllib.request
import pandas as pd
with urllib.request.urlopen("http://buildingcongress.org/list/category/architects-6") as url:
    s = url.read()

page_soup = soup(s, 'html.parser')

listings = []
for rows in page_soup.find_all("div"):
    if ("mn-list-item-odd" in rows["mn-listing mn-nonsponsor mn-search-result-priority-highlight-30"]) or ("mn-list-item-even" in rows["mn-listing mn-nonsponsor mn-search-result-priority-highlight-30"]):
        name = rows.find("div", class_="mn-title").a.get_text()
I am getting an error inside the loop. I am stuck, please help.
Answer 0 (score: 1)
Search for the classes with a regular expression, then iterate over the matches.
import re
import requests
from bs4 import BeautifulSoup
url = "http://buildingcongress.org/list/category/architects-6"
res = requests.get(url)
soup = BeautifulSoup(res.text,"lxml")
for rows in soup.find_all('div',class_=re.compile('mn-list-item-odd|mn-list-item-even')):
    name = rows.find("div", class_="mn-title").find('a').text
    print(name)
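For reference, the error in the question comes from rows["mn-listing mn-nonsponsor mn-search-result-priority-highlight-30"]: indexing a Tag with a string looks up an HTML attribute of that name, and since no such attribute exists on the div, BeautifulSoup raises a KeyError. If you would rather not use a regex, here is a minimal sketch of the same loop that inspects the class list directly (class names and URL taken from the question):

import requests
from bs4 import BeautifulSoup

res = requests.get("http://buildingcongress.org/list/category/architects-6")
soup = BeautifulSoup(res.text, "html.parser")

for rows in soup.find_all("div"):
    # Tag.get("class") returns the list of CSS classes, or the default
    # when the attribute is absent, so it never raises a KeyError
    classes = rows.get("class", [])
    if "mn-list-item-odd" in classes or "mn-list-item-even" in classes:
        name = rows.find("div", class_="mn-title").a.get_text()
        print(name)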
Answer 1 (score: 1)
You can visit each listing page and pull the fields you need with something like the following:
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd
import re
results = []
with requests.Session() as s:
    r = s.get('http://buildingcongress.org/list/category/architects-6')
    soup = bs(r.content, 'lxml')
    links = [item['href'] for item in soup.select('.mn-title a')]
    for link in links:
        r = s.get(link)
        soup = bs(r.content, 'lxml')
        name = soup.select_one('[itemprop="name"]').text
        address = re.sub(r'\n|\r', ' ', ' '.join([item.text.strip() for item in soup.select('.mn-address1, .mn-citystatezip')]))
        tel = soup.select_one('.mn-member-phone1').text
        desc = re.sub(r'\n|\r', '', soup.select_one('#about .mn-section-content').text) if soup.select_one('#about .mn-section-content') else 'No desc'
        row = [name, address, tel, desc]
        results.append(row)
df = pd.DataFrame(results, columns = ['name', 'address', 'tel', 'desc'])
print(df)
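Note that this answer uses a single requests.Session so the underlying connection is reused across the per-listing requests. The results are only held in memory; if you want them on disk, the DataFrame built above can be written out with pandas. A minimal follow-up, assuming a CSV file is what you want (the filename is just an example):

# persist the scraped listings; index=False drops the row numbers
df.to_csv('architects.csv', index=False)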