I'm really new to Python and web scraping, and I want to scrape the following page: http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php
I want to loop through each exhibitor link and pull the contact details. Then I need to do this across all 77 pages.
I can extract the information I need from a page, but I keep making mistakes when it comes to building functions and loops, and I can't find a simple structure for looping through the multiple pages.
This is what I have so far in my Jupyter notebook:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time
import pandas as pd
import requests
from bs4 import BeautifulSoup
url = 'http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php'
text = requests.get(url).text
page1 = BeautifulSoup(text, "html.parser")
def get_data(url):
    text = requests.get(url).text
    page2 = BeautifulSoup(text, "html.parser")
    title = page2.find('h1', attrs={'class':'hl_2'}).getText()
    content = page2.find('div', attrs={'class':'content'}).getText()
    phone = page2.find('div', attrs={'class':'sico ico_phone'}).getText()
    email = page2.find('a', attrs={'class':'sico ico_email'}).getText
    webpage = page2.find('a', attrs={'class':'sico ico_link'}).getText
    data = {'Name': [title],
            'Address': [content],
            'Phone number': [phone],
            'Email': [email],
            'Web': [web]
            }

df = pd.DataFrame()
for a in page1.findAll('a', attrs={'class':'initial_noline'}):
    df2 = get_data(a['href'])
    df = pd.concat([df, df2])
AttributeError: 'NoneType' object has no attribute 'getText'
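For reference, this particular error means one of the find() calls matched nothing and returned None, so there was no element to call .getText() on. A minimal guard, sketched with a made-up selector rather than the real page structure:

element = page2.find('div', attrs={'class': 'some_class'})  # hypothetical selector
if element is not None:
    value = element.getText()
else:
    value = ''  # element missing on this page, fall back to an empty string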
I know I keep running into these errors because I'm new to this and getting confused by function and loop syntax.
Any guidance on a suggested structure would be greatly appreciated.
Answer 0 (score: 0)
Here is a debugged version.
import pandas as pd
import requests
from bs4 import BeautifulSoup
url = 'http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php'
text = requests.get(url).text
page1 = BeautifulSoup(text, "html.parser")
def get_data(url):
    text = requests.get(url).text
    page2 = BeautifulSoup(text, "html.parser")
    title = page2.find('h1', attrs={'class':'hl_2'}).getText()
    content = page2.find('div', attrs={'class':'content'}).getText()
    phone = page2.find('div', attrs={'class':'sico ico_phone'}).getText()
    email = page2.find('div', attrs={'class':'sico ico_email'}).getText()
    webpage = page2.find('div', attrs={'class':'sico ico_link'}).getText()
    data = [[title, content, phone, email, webpage]]
    return data

df = pd.DataFrame()
for a in page1.findAll('a', attrs={'class':'initial_noline'}):
    # Only exhibitor detail links carry a 'kid=' parameter; skip everything else
    if 'kid=' not in a['href']:
        continue
    print('http://www.interzum.com' + a['href'])
    data = get_data('http://www.interzum.com' + a['href'])
    # concat returns a new frame, so the result has to be assigned back
    df = pd.concat([df, pd.DataFrame(data)])
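A short usage note on the loop above: since get_data returns a plain list of rows, another option is to collect the rows in a list and build the DataFrame once at the end, then write it out. The column names and file name here are only placeholders:

rows = []
for a in page1.findAll('a', attrs={'class': 'initial_noline'}):
    if 'kid=' not in a['href']:
        continue
    rows += get_data('http://www.interzum.com' + a['href'])

# Build the frame in one go and save it
df = pd.DataFrame(rows, columns=['Name', 'Address', 'Phone number', 'Email', 'Web'])
df.to_csv('exhibitors_page1.csv', index=False)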
Answer 1 (score: 0)
Thank you all so much for all the help. I kept plugging away at it and managed to get everything I needed. My code is as follows:
import pandas as pd
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time
binary = FirefoxBinary('geckodriver.exe')
driver = webdriver.Firefox()
driver.get('http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php')

url = 'http://www.interzum.com/exhibitors-and-products/exhibitor-index/exhibitor-index-15.php'
text = requests.get(url).text
page1 = BeautifulSoup(text, "html.parser")

def get_data(url, tries=0, max_tries=3):
    text_test2 = requests.get(url).text
    page2 = BeautifulSoup(text_test2, "html.parser")
    try:
        title = page2.find('h1', attrs={'class':'hl_2'}).text
        content = page2.find('div', attrs={'class':'cont'}).text
        phone = page2.find('div', attrs={'class':'sico ico_phone'}).text
        email_div = page2.find('div', attrs={'class':'sico ico_email'})
        email = email_div.find('a', attrs={'class': 'xsecondarylink'})['href']
        if page2.find_all("div", {"class": "sico ico_link"}):
            web_div = page2.find('div', attrs={'class':'sico ico_link'})
            web = web_div.find('a', attrs={'class':'xsecondarylink'})['href']
    except:
        # Retry the request a few times before giving up
        if tries < max_tries:
            tries += 1
            print("try {}".format(tries))
            return get_data(url, tries)
    data = {'Name': [title],
            'Street address': [content],
            'Phone number': [phone],
            'Email': [email],
            'Web': [web]
            }
    return pd.DataFrame(data=data)

df = pd.DataFrame()
for i in range(0, 80):
    print(i)
    page1 = BeautifulSoup(driver.page_source, 'html.parser')
    for div in page1.findAll('div', attrs={'class':'item'}):
        for a in div.findAll('a', attrs={'class':'initial_noline'}):
            if 'kid=' not in a['href']: continue
            print('http://www.interzum.com' + a['href'])
            data = get_data('http://www.interzum.com' + a['href'])
            df = pd.concat([df, data])
    # Move the exhibitor index to the next page and give it time to load
    next_button = driver.find_element_by_class_name('slick-next')
    next_button.click()
    time.sleep(20)

df.to_csv('result.csv')
This code works until it reaches the second link on the second page. That link has no website, and I'm trying to put together something along the lines of: if an href with this class exists, pull the website; if it doesn't, move on to the next field.
However, I get the following error: UnboundLocalError: local variable 'web' referenced before assignment
So my code clearly isn't doing that!
Any guidance on how to fix this would be much appreciated!
Thanks again everyone for your help :)
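A minimal sketch of that fallback, assuming the same class names as in the code above: give web a default value before the conditional lookup so it always exists when the data dict is built.

web = ''  # default when the exhibitor page has no website link
web_div = page2.find('div', attrs={'class': 'sico ico_link'})
if web_div is not None:
    link = web_div.find('a', attrs={'class': 'xsecondarylink'})
    if link is not None and link.has_attr('href'):
        web = link['href']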