Getting links and the information inside them with BeautifulSoup

Date: 2018-06-25 07:11:04

Tags: python dataframe web-scraping beautifulsoup

I want to scrape a website. Each page of the site shows 10 complaint previews. I wrote this script to get the links to the 10 complaints and some information from inside each one. When I run the script, I get the error message "RecursionError: maximum recursion depth exceeded". Can someone tell me what the problem is? Thanks in advance!!

from requests import get
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

# Create list objects for each information section
C_date = []
C_title = []
C_text = []
U_name = []
U_id = []
C_count = []
R_name = []
R_date = []
R_text = []

# Get 10 links for preview of complaints
def getLinks(url):
    response = get(url)
    html_soup = BeautifulSoup(response.text, 'html.parser')
    c_containers = html_soup.find_all('div', class_='media')
    # Store wanted links in a list
    allLinks = []

    for link in c_containers:
        find_tag = link.find('a')
        find_links = find_tag.get('href')
        full_link = urljoin(url, find_links)  # urljoin resolves relative hrefs against the base URL
        allLinks.append(full_link)
    # Get total number of links
    print(len(allLinks))
    return allLinks

def GetData(Each_Link):
    each_complaint_page = get(Each_Link)
    html_soup = BeautifulSoup(each_complaint_page.text, 'html.parser')
    # Get date of complaint
    dt = html_soup.main.find('span')
    date = dt['title']
    C_date.append(date)
    # Get Title of complaint
    TL = html_soup.main.find('h1', {'class': 'title'})
    Title = TL.text
    C_title.append(Title)
    # Get main text of complaint
    Tx = html_soup.main.find('div', {'class': 'description'})
    Text = Tx.text
    C_text.append(Text)
    # Get user name and id
    Uname = html_soup.main.find('span', {'class': 'user'})
    User_name = Uname.span.text
    User_id = Uname.attrs['data-memberid']
    U_name.append(User_name)
    U_id.append(User_id)
    # Get view count of complaint
    Vcount = html_soup.main.find('span', {'class': 'view-count-detail'})
    View_count = Vcount.text
    C_count.append(View_count)
    # Get reply for complaint
    Rpnm = html_soup.main.find('h4', {'class': 'name'})
    Reply_name = Rpnm.next
    R_name.append(Reply_name)
    # Get reply date
    Rpdt = html_soup.main.find('span', {'class': 'date-tips'})
    Reply_date = Rpdt.attrs['title']
    R_date.append(Reply_date)
    # Get reply text
    Rptx = html_soup.main.find('p', {'class': 'comment-content-msg company-comment-msg'})
    Reply_text = Rptx.text
    R_text.append(Reply_text)


link_list = getLinks('https://www.sikayetvar.com/arcelik')

for i in link_list:
    z = GetData(i)
    print(z)

PS: My next step is to put all of the information into a dataframe.
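For reference, a minimal sketch of that step, assuming every list ends up the same length (the column names below are my own, not taken from the site):

df = pd.DataFrame({
    'complaint_date': C_date,
    'complaint_title': C_title,
    'complaint_text': C_text,
    'user_name': U_name,
    'user_id': U_id,
    'view_count': C_count,
    'reply_name': R_name,
    'reply_date': R_date,
    'reply_text': R_text,
})
print(df.head())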

1 answer:

Answer 0 (score: 1)

Your GetData() method calls itself, with no base case: this leads to infinite recursion:

def GetData(data):
    for i in GetData(data):
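
A recursive function needs a base case that stops the chain of calls; a generic illustration (not part of the scraping code):

def countdown(n):
    if n <= 0:          # base case: the recursion stops here
        return
    print(n)
    countdown(n - 1)    # each call moves closer to the base case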

You are also calling response = get(i), but then ignoring the result... perhaps you meant:

def GetData(link):
    i = get(link)
    ...
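
Since each complaint page only needs to be fetched and parsed once, GetData() has no reason to call itself at all; a minimal non-recursive sketch, reusing the names from the question:

def GetData(link):
    response = get(link)  # fetch the page exactly once
    html_soup = BeautifulSoup(response.text, 'html.parser')
    # ... extract the fields from html_soup and append them to the lists ...

for link in getLinks('https://www.sikayetvar.com/arcelik'):
    GetData(link)  # iterate over the links here instead of recursing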