Python 3 Pandas DataFrame KeyError problem

Date: 2018-11-30 09:31:21

Tags: python python-3.x pandas

I have a DataFrame `crawls` that looks like this: [screenshot of the crawls DataFrame]

When I run this code:

crawl_stats = (
    crawls['updated']
    .groupby(crawls.index.get_level_values('url'))
    .agg({
        'number of crawls': 'count',
        'proportion of updates': 'mean',
        'number of updates': 'sum'
    })
)

it raises this error:

KeyError                                  Traceback (most recent call last)
<ipython-input-62-180f1041465d> in <module>
      8 crawl_stats = (
      9     crawls['updated']
---> 10         .groupby(crawls.index.get_level_values('url'))
     11         # .groupby('url')
     12         .agg({

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pandas/core/indexes/base.py in _get_level_values(self, level)
   3155         """
   3156 
-> 3157         self._validate_index_level(level)
   3158         return self
   3159 

/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pandas/core/indexes/base.py in _validate_index_level(self, level)
   1942         elif level != self.name:
   1943             raise KeyError('Level %s must be same as name (%s)' %
-> 1944                            (level, self.name))
   1945 
   1946     def _get_level_number(self, level):

KeyError: 'Level url must be same as name (None)'

Then I tried this modified version:

crawl_stats = (
    crawls['updated']
    # .groupby(crawls.index.get_level_values('url'))
    .groupby('url')
    .agg({
        'number of crawls': 'count',
        'proportion of updates': 'mean',
        'number of updates': 'sum'
    })
)

It also raises an error:

KeyError                                  Traceback (most recent call last)
<ipython-input-63-8c5f0f6f7c86> in <module>
      9     crawls['updated']
     10         # .groupby(crawls.index.get_level_values('url'))
---> 11         .groupby('url')
     12         .agg({
     13             'number of crawls': 'count',
...
   3293             # Add key to exclusions

KeyError: 'url'

I have already tried the guidance from other Stack Overflow answers, but it still doesn't work. Can someone help me fix this? Thanks!

Here is the code I use to create the DataFrame `crawls`:

import numpy as np
import pandas as pd

def make_crawls_dataframe(crawl_json_records):
    """Creates a Pandas DataFrame from the given list of JSON records.

    The DataFrame corresponds to the following relation:

        crawls(primary key (url, hour), updated)

    Each hour in which a crawl happened for a page (regardless of
    whether it found a change) should be represented.  `updated` is
    a boolean value indicating whether the check for that hour found
    a change.

    The result is sorted by URL in ascending order and **further**
    sorted by hour in ascending order among the rows for each URL.

    Args:
      crawl_json_records (list): A list of JSON objects such as the
                                 crawl_json variable above.

    Returns:
      DataFrame: A table whose schema (and sort order) is described
                 above.
    """
    url = []
    hour = []
    updated = []


    # Collect the url, number of checks and positive checks
    # from each crawl record.
    for record in crawl_json_records:
        temp_url = record['url']
        temp_len = record["number of checks"]
        temp_checks = record["positive checks"]

        # One row per hourly check: repeat the url and number the hours.
        url.extend([temp_url] * temp_len)
        hour.extend(range(1, temp_len + 1))

        # Mark the hours whose check found a change.
        temp_updated = [0] * temp_len
        for check in temp_checks:
            temp_updated[check - 1] = 1
        updated.extend(temp_updated)

    # Sanity check: len(url) == len(hour) == len(updated) == 521674

    columns = ['url','hour','updated']
    data = np.array((url,hour,updated)).T
    df = pd.DataFrame(data=data, columns=columns)
    df.index += 1
    # df.index = df['url']
    return df.sort_values(by=['url','hour'], ascending=True)

crawls = make_crawls_dataframe(crawl_json)
crawls.head(50)  # crawls looks like the screenshot above

2 Answers:

Answer 0 (score: 0)

You need to replace:

.groupby(crawls.index.get_level_values('url'))

with:

.groupby('url')

because your DataFrame has no index level named 'url'; 'url' is an ordinary column there.
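Note that even with that replacement, `crawls['updated'].groupby('url')` still fails, because once the single column is selected the resulting Series no longer carries the 'url' column (this is the second KeyError above). A minimal sketch that avoids both errors, assuming `crawls` has plain columns url, hour and updated:

# Group the full DataFrame by the 'url' column first,
# then select 'updated' for aggregation.
crawl_stats = crawls.groupby('url')['updated'].agg(['count', 'mean', 'sum'])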

Answer 1 (score: 0)

There are two problems here - you need to group by the column url, and, because passing a dict of {new column name: function} to a Series aggregation is deprecated in recent pandas, you need to define a list of tuples pairing the new column names with aggregation functions:

crawls = pd.DataFrame({
    'url': ['a','a','a','a','b','b','b'],
    'updated': list(range(7))
})
print (crawls)
  url  updated
0   a        0
1   a        1
2   a        2
3   a        3
4   b        4
5   b        5
6   b        6

d = [('number of crawls', 'count'), 
     ('proportion of updates', 'mean'), 
     ('number of updates', 'sum')]
crawl_stats = crawls.groupby('url')['updated'].agg(d)
print (crawl_stats)
     number of crawls  proportion of updates  number of updates
url                                                            
a                   4                    1.5                  6
b                   3                    5.0                 15
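As a hedged alternative sketch, assuming pandas 0.25 or newer (not necessarily the asker's version): named aggregation expresses the same renaming without the tuple list. The ** unpacking is needed because the new column names contain spaces:

crawl_stats = crawls.groupby('url')['updated'].agg(**{
    'number of crawls': 'count',
    'proportion of updates': 'mean',
    'number of updates': 'sum'
})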

EDIT:

There is another problem: the numeric columns are converted to strings when everything is packed into one numpy array, so it is better to create a dictionary and pass it to the DataFrame constructor:

Change:

columns = ['url','hour','updated']
data = np.array((url,hour,updated)).T
df = pd.DataFrame(data=data, columns=columns)

to:

columns = ['url','hour','updated']
df = pd.DataFrame({'url':url, 'hour':hour,'updated':updated}, columns=columns)
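A small sketch of why this matters, using hypothetical toy lists rather than the asker's data: np.array builds one homogeneous array, so the integer columns are coerced to strings, while the dict constructor keeps each column's dtype and lets 'mean' and 'sum' aggregate numerically:

import numpy as np
import pandas as pd

url = ['a', 'a', 'b']
hour = [1, 2, 1]
updated = [0, 1, 1]

# np.array((url, hour, updated)) picks a single string dtype,
# so 'hour' and 'updated' become object columns.
df_bad = pd.DataFrame(np.array((url, hour, updated)).T,
                      columns=['url', 'hour', 'updated'])
print(df_bad.dtypes)   # url, hour and updated are all object

# The dict constructor keeps the integer dtypes.
df_good = pd.DataFrame({'url': url, 'hour': hour, 'updated': updated})
print(df_good.dtypes)  # hour and updated are int64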