Grouping data by season with Python and pandas

Asked: 2014-03-24 16:36:27

Tags: python csv pandas

I want to iterate through my .csv files using Python and pandas and group the data by season, calculating the mean for each season of the year. At the moment the script produces calendar quarters (Jan-Mar, Apr-Jun, etc.). I want the seasons to follow the months like this: 11: 'Winter', 12: 'Winter', 1: 'Winter', 2: 'Spring', 3: 'Spring', 4: 'Spring', 5: 'Summer', 6: 'Summer', 7: 'Summer', 8: 'Autumn', 9: 'Autumn', 10: 'Autumn'.

I have the following data (dates are dd/mm/yyyy, hence the dayfirst=True below):

Date,HAD
01/01/1951,1
02/01/1951,-0.13161201
03/01/1951,-0.271796132
04/01/1951,-0.258977158
05/01/1951,-0.198823057
06/01/1951,0.167794502
07/01/1951,0.046093808
08/01/1951,-0.122396694
09/01/1951,-0.121824587
10/01/1951,-0.013002463

Here is my code so far:

import glob
import pandas as pd

# Iterate through a list of files in a folder looking for .csv files
for csvfilename in glob.glob("C:/Users/n-jones/testdir/output/*.csv"):

    # Allocate a new file name for each file and create a new .csv file
    # (path_leaf() is a helper defined elsewhere that returns the file
    # name portion of a path)
    csvfilenameonly = "RBI-Seasons-Year" + path_leaf(csvfilename)
    with open("C:/Users/n-jones/testdir/season/" + csvfilenameonly, "wb") as outfile:

        # Open the input csv file and allow the script to read it
        with open(csvfilename, "rb") as infile:

            # Create a pandas dataframe to summarise the data
            df = pd.read_csv(infile, parse_dates=[0], index_col=[0], dayfirst=True)

            # This resamples by calendar quarter, not by the seasons I want
            mean = df.resample('Q-SEP', how='mean')

            # Output to new csv file
            mean.to_csv(outfile)
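For reference, 'Q-SEP' anchors the quarters so they end in September, December, March and June, which is why the means come out for Jan-Mar, Apr-Jun and so on. As a side observation (not from the original post): anchoring on October instead lines the quarters up exactly with the seasons listed above, although the groups are then labelled by quarter-end date rather than by season name. A minimal sketch, assuming a newer pandas where the deprecated how= keyword is replaced by a method call:

import pandas as pd

df = pd.read_csv(csvfilename, parse_dates=[0], index_col=[0], dayfirst=True)
# 'Q-OCT' quarters run Nov-Jan, Feb-Apr, May-Jul, Aug-Oct
seasonal = df.resample('Q-OCT').mean()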

I hope this makes sense.

Thanks in advance!

1 Answer:

Answer 0 (score: 1)

It looks like you just need a dict lookup and a groupby. The code below should work.

import pandas as pd
import os
import re

lookup = {
    11: 'Winter',
    12: 'Winter',
    1: 'Winter',
    2: 'Spring',
    3: 'Spring',
    4: 'Spring',
    5: 'Summer',
    6: 'Summer',
    7: 'Summer',
    8: 'Autumn',
    9: 'Autumn',
    10: 'Autumn'
}

os.chdir('C:/Users/n-jones/testdir/output/')

for fname in os.listdir('.'):
    if re.match(".*csv$", fname):
        data = pd.read_csv(fname, parse_dates=[0], dayfirst=True)
        # Label each row with its season via the lookup dict
        data['Season'] = data['Date'].apply(lambda x: lookup[x.month])
        data['count'] = 1
        # Sum the values and the row counts for each season
        data = data.groupby(['Season'])[['HAD', 'count']].sum()
        # Mean = summed values / number of rows in each season
        data['mean'] = data['HAD'] / data['count']
        data.to_csv('C:/Users/n-jones/testdir/season/' + fname)
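This computes the mean by hand from a sum and a row count, which also leaves the per-season counts available in the output. If only the means are needed, a shorter variant of the same idea (a sketch, not part of the original answer, assuming a pandas version with the .dt datetime accessor):

import pandas as pd

data = pd.read_csv(fname, parse_dates=[0], dayfirst=True)
# Map each date's month to its season and group on that label directly
season = data['Date'].dt.month.map(lookup).rename('Season')
seasonal_mean = data.groupby(season)['HAD'].mean()
seasonal_mean.to_csv('C:/Users/n-jones/testdir/season/' + fname)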