Looping over pages to scrape with BeautifulSoup

Date: 2020-07-25 14:19:10

Tags: python pandas web-scraping beautifulsoup data-cleaning

My single-page scraper:

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://www.cvbankas.lt/?padalinys%5B0%5D=76&page=1'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

all_data = []
for h3 in soup.select('h3.list_h3'):
    job_title = h3.get_text(strip=True)
    company = h3.find_next(class_="heading_secondary").get_text(strip=True)
    salary = h3.find_next(class_="salary_amount").get_text(strip=True)
    location = h3.find_next(class_="list_city").get_text(strip=True)
    print('{:<50} {:<15} {:<15} {}'.format(company, salary, location, job_title))

    all_data.append({
        'Job Title': job_title,
        'Company': company,
        'Salary': salary,
        'Location': location
    })

df = pd.DataFrame(all_data)
df.to_csv('data.csv')

#tips = sns.load_dataset('data.csv')
#print(tips)

This gives me a csv file, but with only 50 rows. I want to scrape all the pages. I thought about finding the pagination in the HTML via 'class': 'prev_next', but the BACK and FORWARD links are identical apart from their href. So I decided to loop over a range and use it to change the page number:

import requests
import pandas as pd
from bs4 import BeautifulSoup

#url = 'https://www.cvbankas.lt/?padalinys%5B0%5D=76&page=1'
#soup = BeautifulSoup(requests.get(url).content, 'html.parser')
all_data = []
for i in range(1, 9):
    url = 'https://www.cvbankas.lt/?padalinys%5B0%5D=76&page='+str(i)
    print(url)
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    for h3 in soup.select('h3.list_h3'):
        try:
            job_title = h3.get_text(strip=True)
            company = h3.find_next(class_="heading_secondary").get_text(strip=True)
            salary = h3.find_next(class_="salary_amount").get_text(strip=True)
            location = h3.find_next(class_="list_city").get_text(strip=True)
            print('{:<50} {:<15} {:<15} {}'.format(company, salary, location, job_title))
        except AttributeError:
            
            all_data.append({
                    'Job Title': job_title,
                    'Company': company,
                    'Salary': salary,
                    'Location': location
                })
        
df = pd.DataFrame(all_data)
df.to_csv('data.csv')

After running this code it saves only 5 rows, ten times fewer than the code that scraped a single page.

How would you loop through the pages? The pages go from 1 to 8.

Also, how would you clean up the salary values? Each one is a string containing either a single number, like 'Nuo 2700' or 'Iki 2500', or two numbers, like '1000-3000'. I want to use the Salary column as integers so I can do some plotting with Seaborn.

1 answer:

Answer 0: (score: 0)

You have indented the all_data.append call inside the except block, so control only reaches it when an exception is raised. Moving it into the try block fixes this; running the following script produces around 365 rows in the csv file:

import requests
import pandas as pd
from bs4 import BeautifulSoup

#url = 'https://www.cvbankas.lt/?padalinys%5B0%5D=76&page=1'
#soup = BeautifulSoup(requests.get(url).content, 'html.parser')
all_data = []
for i in range(1, 9):
    url = 'https://www.cvbankas.lt/?padalinys%5B0%5D=76&page='+str(i)
    print(url)
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    for h3 in soup.select('h3.list_h3'):
        try:
            job_title = h3.get_text(strip=True)
            company = h3.find_next(class_="heading_secondary").get_text(strip=True)
            salary = h3.find_next(class_="salary_amount").get_text(strip=True)
            location = h3.find_next(class_="list_city").get_text(strip=True)
            print('{:<50} {:<15} {:<15} {}'.format(company, salary, location, job_title))
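            # append inside the try block, so every successfully parsed row is saved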
            all_data.append({
                    'Job Title': job_title,
                    'Company': company,
                    'Salary': salary,
                    'Location': location
                })
        except AttributeError:
            pass

df = pd.DataFrame(all_data)
df.to_csv('data.csv')
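
For the second part of the question, cleaning the Salary column: here is a minimal sketch (assuming the only formats are 'Nuo X', 'Iki X' and 'X-Y' as described, with no thousands separators) that pulls the digits out with a regex and takes the midpoint when a range gives two numbers:

import re
import pandas as pd

def parse_salary(s):
    # collect every run of digits: 'Nuo 2700' -> [2700], '1000-3000' -> [1000, 3000]
    numbers = [int(n) for n in re.findall(r'\d+', str(s))]
    if not numbers:
        return None
    # one figure -> that figure; a range -> the midpoint of the two figures
    return sum(numbers) // len(numbers)

df = pd.read_csv('data.csv')
df['Salary'] = df['Salary'].apply(parse_salary).astype('Int64')

Once the column is numeric you can hand it straight to Seaborn, e.g. sns.boxplot(x=df['Salary']).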
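
As an aside on the pagination: if you would rather not hard-code range(1, 9), you could follow the prev_next links the question mentions. A sketch, assuming BACK and FORWARD share that class and differ only in their href, as described; the seen set stops the loop once the last link points back to a page that was already scraped:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'https://www.cvbankas.lt/?padalinys%5B0%5D=76&page=1'
seen = set()

while url and url not in seen:
    seen.add(url)
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    for h3 in soup.select('h3.list_h3'):
        ...  # same row extraction as in the script above

    # BACK and FORWARD share the prev_next class, so take the last
    # link that actually carries an href
    links = soup.select('a.prev_next[href]')
    url = urljoin(url, links[-1]['href']) if links else None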