How to fix the "cannot set a row with mismatched columns" error in pandas

Asked: 2019-04-21 22:14:17

Tags: python-3.x pandas dataframe beautifulsoup

I am building a web scraper for one of my projects. I am scraping job postings from Indeed, and I am able to get all the data I need. Now I am having trouble building a DataFrame so I can save the data to a CSV file.

I have searched for the error and tried many of the suggested solutions, but I keep getting the same error. Any advice on the code or the cause of the error is appreciated. Thanks.

ValueError: cannot set a row with mismatched columns
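
(For context: this error is raised when a row assigned through .loc contains a different number of values than the DataFrame has columns. A minimal reproduction, independent of the scraper:)

import pandas as pd

df = pd.DataFrame(columns=["a", "b", "c"])
df.loc[0] = [1, 2]  # ValueError: cannot set a row with mismatched columns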

import requests
import bs4
from bs4 import BeautifulSoup

import pandas as pd
import time


max_results_per_city = 30

city_set = ['New+York','Chicago']
columns = ["city", "job_title", "company_name", "location", "summary"]

database = pd.DataFrame(columns = columns)

for city in city_set:
    for start in range(0, max_results_per_city, 10):
        page = requests.get('https://www.indeed.com/jobs?q=computer+science&l=' + str(city) + '&start=' + str(start))
        time.sleep(1)
        soup = BeautifulSoup(page.text, "lxml")
        for div in soup.find_all(name="div", attrs={"class":"row"}):
            num = (len(sample_df) + 1)
            job_post = []
            job_post.append(city)
            for a in div.find_all(name="a", attrs={"data-tn-element":"jobTitle"}):
                job_post.append(a["title"])
            company = div.find_all(name="span", attrs={"class":"company"})
            if len(company) > 0:
                for b in company:
                    job_post.append(b.text.strip())
            else:
                sec_try = div.find_all(name="span", attrs={"class":"result-link-source"})
                for span in sec_try:
                    job_post.append(span.text)

            c = div.findAll('div', attrs={'class': 'location'})
            for span in c:
                 job_post.append(span.text)
            d = div.findAll('div', attrs={'class': 'summary'})
            for span in d:
                job_post.append(span.text.strip())
            database.loc[num] = job_post
            database.to_csv("test.csv")


2 Answers:

Answer 0 (score: 0):

This problem is caused by the number of columns not matching the amount of data (in at least one row).

I see a few issues: where is "sample_df" initialized? And the place where data is added to "database" is the big problem that jumps out.

I would restructure your code: job_post looks like your row-level list, so I would append it to a table-level list instead, calling table.append(job_post) at the end of each loop rather than sample_df.loc[num] = job_post.

Then, after the loop, you can call DataFrame(table, columns=columns).

Note: make sure to append None, "Empty", or "" whenever the scraper finds no data; otherwise the row length will not match the column length, and that is exactly what causes the error.
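
A minimal sketch of that restructuring, reusing soup, city, columns, and pd from the question (the placeholder handling is illustrative, not part of the original answer):

table = []  # table-level list; initialize once, before the city/page loops

for div in soup.find_all("div", attrs={"class": "row"}):
    job_post = [city]  # row-level list

    title = div.find("a", attrs={"data-tn-element": "jobTitle"})
    # Append a placeholder when nothing is found, so the row length
    # always matches the column length.
    job_post.append(title["title"] if title else None)

    company = div.find("span", attrs={"class": "company"})
    job_post.append(company.get_text(strip=True) if company else None)

    location = div.find("span", attrs={"class": "location"})
    job_post.append(location.get_text(strip=True) if location else None)

    summary = div.find("div", attrs={"class": "summary"})
    job_post.append(summary.get_text(strip=True) if summary else None)

    table.append(job_post)  # instead of sample_df.loc[num] = job_post

database = pd.DataFrame(table, columns=columns)  # build the frame once, after the loop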

Answer 1 (score: 0):

Reproducing your code, it was not extracting location, and the database line was indented at the wrong level. So fix the location lookup to c = div.findAll(name='span', attrs={'class': 'location'}). Here is a version that makes it work:

database = []

for city in city_set:
    for start in range(0, max_results_per_city, 10):
        page = requests.get('https://www.indeed.com/jobs?q=computer+science&l=' + str(city) + '&start=' + str(start))
        time.sleep(1)
        soup = BeautifulSoup(page.text, "lxml")
        for div in soup.find_all(name="div", attrs={"class":"row"}):
            #num = (len(sample_df) + 1)
            job_post = []
            job_post.append(city)
            for a in div.find_all(name="a", attrs={"data-tn-element":"jobTitle"}):
                job_post.append(a["title"])
            company = div.find_all(name="span", attrs={"class":"company"})
            if len(company) > 0:
                for b in company:
                    job_post.append(b.text.strip())
            else:
                sec_try = div.find_all(name="span", attrs={"class":"result-link-source"})
                for span in sec_try:
                    job_post.append(span.text)

            c = div.findAll(name='span', attrs={'class': 'location'})
            for span in c:
                 job_post.append(span.text)
            d = div.findAll('div', attrs={'class': 'summary'})
            for span in d:
                job_post.append(span.text.strip())
            database.append(job_post)  # inside the div loop: one row per posting

df00 = pd.DataFrame(database)
df00.shape


df00.columns = columns
df00.to_csv("test.csv", index=False)
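
One caveat: pd.DataFrame(database) widens the frame to the longest row, so df00.columns = columns raises a length-mismatch error if any posting lacked a field. A safeguard along the lines of answer 0's note (not part of this answer) is to pad each row before building the frame:

# Pad short rows with None (and trim long ones) so every row
# has exactly len(columns) values.
padded = [row[:len(columns)] + [None] * (len(columns) - len(row)) for row in database]
df00 = pd.DataFrame(padded, columns=columns)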

[Image: "Web scrap" screenshot of the resulting output]