I am a beginner at Python programming. I am practicing web scraping with the bs4 module in Python.
I have extracted some fields from a web page, but when I try to write them to an .xls file, the file stays empty apart from the headers. Please tell me where I am going wrong and, if possible, suggest what to do.
import requests
from bs4 import BeautifulSoup as bs
import pandas as pd

res = requests.get('https://rwbj.com.au/find-an-agent.html')
soup = bs(res.content, 'lxml')
data = soup.find_all("div", {"class": "fluidgrid-cell fluidgrid-cell-2"})

records = []
for item in data:
    name = item.find('h3', class_='heading').text.strip()
    phone = item.find('a', class_='text text-link text-small').text.strip()
    email = item.find('a', class_='text text-link text-small')['href']
    title = item.find('div', class_='text text-small').text.strip()
    location = item.find('div', class_='text text-small').text.strip()
    records.append({'Names': name, 'Title': title, 'Email': email, 'Phone': phone, 'Location': location})

df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
df = df.drop_duplicates()
df.to_excel(r'C:\Users\laptop\Desktop\R&W.xls', sheet_name='MyData2', index=False, header=True)
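A quick diagnostic (a sketch, assuming the script above has just run) makes the symptom concrete. As the answers below explain, the agent cards are rendered by JavaScript, so the raw HTML returned by requests contains none of them:

print(len(data))     # 0 - no agent cards in the raw HTML
print(len(records))  # 0 - so the DataFrame holds nothing but column headers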
Answer 0 (score: 2)
If you don't want to use Selenium, you can make the same POST request the web page makes. This gives you an XML response, which you can parse with BeautifulSoup to get the output you need.

We can use the Network tab of the browser's inspect tools to find the request being made and the form data sent with it. Then we make the same request with python-requests and parse the output.
import requests
from bs4 import BeautifulSoup
import pandas as pd

number_of_agents_required = 20  # they only have 20 on the site

# Form data copied from the Network tab of the inspect tools
payload = {
    'act': 'act_fgxml',
    '15[offset]': 0,
    '15[perpage]': number_of_agents_required,
    'require': 0,
    'fgpid': 15,
    'ajax': 1
}

records = []
r = requests.post('https://www.rwbj.com.au/find-an-agent.html', data=payload)
soup = BeautifulSoup(r.text, 'lxml')
for row in soup.find_all('row'):
    name = row.find('name').text
    title = row.position.text.replace('&amp;', '&')  # undo the double-encoded ampersand
    email = row.email.text
    phone = row.phone.text
    location = row.office.text
    records.append([name, title, phone, email, location])  # same order as the columns below

df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
df.to_excel('R&W.xls', sheet_name='MyData2', index=False, header=True)
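One practical note (my addition, not part of the original answer): pandas writes legacy .xls files through the xlwt package, so to_excel will raise an error if xlwt is not installed. Writing .xlsx instead (handled by openpyxl) sidesteps the old format:

df.to_excel('R&W.xlsx', sheet_name='MyData2', index=False)  # assumes openpyxl is installed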
Output: (screenshot of the resulting DataFrame)
Answer 1 (score: 0)
You could use a method like Selenium to let the JavaScript-rendered content load, then grab the page_source and continue with your script. I have deliberately kept your script as-is and only added new lines to wait for the content.

You could run Selenium headless, or switch to using HTMLSession instead.
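For the HTMLSession route, a minimal sketch might look like this (assumptions on my part: the requests-html package is installed and its first render() call is allowed to download its bundled Chromium; the original answer does not show this variant):

from bs4 import BeautifulSoup as bs
from requests_html import HTMLSession  # pip install requests-html

session = HTMLSession()
r = session.get('https://rwbj.com.au/find-an-agent.html')
r.html.render()  # executes the page's JavaScript in headless Chromium
soup = bs(r.html.html, 'lxml')  # parse the rendered HTML exactly as before

The Selenium version of your script: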
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

d = webdriver.Chrome()
d.get('https://rwbj.com.au/find-an-agent.html')
# Wait until the JavaScript-rendered agent headings are present
WebDriverWait(d, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))
soup = bs(d.page_source, 'lxml')
d.quit()

data = soup.find_all("div", {"class": "fluidgrid-cell fluidgrid-cell-2"})
records = []
for item in data:
    name = item.find('h3', class_='heading').text.strip()
    phone = item.find('a', class_='text text-link text-small').text.strip()
    email = item.find('a', class_='text text-link text-small')['href']
    title = item.find('div', class_='text text-small').text.strip()
    location = item.find('div', class_='text text-small').text.strip()
    records.append({'Names': name, 'Title': title, 'Email': email, 'Phone': phone, 'Location': location})

df = pd.DataFrame(records, columns=['Names', 'Title', 'Phone', 'Email', 'Location'])
print(df)
Depending on whether every agent has all the items, I might consider something like the following instead:
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
import pandas as pd
options = Options()
options.headless = True
d = webdriver.Chrome(options = options)
d.get('https://rwbj.com.au/find-an-agent.html')
WebDriverWait(d,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))
soup = bs(d.page_source, 'lxml')
d.quit()
names = [item.text for item in soup.select('h3')]
titles = [item.text for item in soup.select('h3 ~ div:nth-of-type(1)')]
tels = [item.text for item in soup.select('h3 + a')]
emails = [item['href'] for item in soup.select('h3 ~ a:nth-of-type(2)')]
locations = [item.text for item in soup.select('h3 ~ div:nth-of-type(2)')]
records = list(zip(names, titles, tels, emails, locations))
df = pd.DataFrame(records,columns=['Names','Title','Phone','Email','Location'])
print(df)
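A caveat on the zip step above (my note, not from the original answer): zip truncates to the shortest input list, so if some agents are missing a phone or email, rows silently pair up wrongly. itertools.zip_longest at least preserves the row count and makes the gaps visible:

from itertools import zip_longest
# fillvalue marks missing fields instead of silently dropping rows
records = list(zip_longest(names, titles, tels, emails, locations, fillvalue=''))

Note that zip_longest only pads the shorter lists; if the counts differ, the per-field CSS selectors themselves need revisiting.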