I have a large number of files that look like this:
05/31/2012,15:30:00.029,1306.25,1,E,0,...,1306.25
05/31/2012,15:30:00.029,1306.25,8,E,0,...,1306.25
I can read them easily with:
pd.read_csv(gzip.open("myfile.gz"), header=None,
            names=["date", "time", "price", "size", "type", "zero", "empty", "last"],
            parse_dates=[[0, 1]])
Is there a way to parse dates like this into pandas timestamps efficiently? If not, is there any guide to writing a Cython function that can be passed to date_parser=? I tried writing my own parser function, but it still takes too long for the project I am working on.
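(For context, a slow pure-Python baseline hooked in through the date_parser argument might look like the sketch below. The column layout is assumed from the sample rows above, the helper name parse_date_time is mine, and date_parser has since been deprecated in newer pandas releases.)

import gzip
from datetime import datetime

import pandas as pd

def parse_date_time(dates, times):
    # read_csv passes the raw "date" and "time" string arrays here;
    # row-by-row strptime is exactly what makes this the slow baseline
    # (milliseconds are dropped for brevity)
    return [datetime.strptime(d + " " + t[:8], "%m/%d/%Y %H:%M:%S")
            for d, t in zip(dates, times)]

df = pd.read_csv(gzip.open("myfile.gz"), header=None,
                 names=["date", "time", "price", "size", "type",
                        "zero", "empty", "last"],
                 parse_dates=[[0, 1]], date_parser=parse_date_time)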
Answer 0 (score: 7)
I got an incredible speedup (50x) with the following Cython code:
Called from Python: timestamps = convert_date_cython(df["date"].values, df["time"].values)
cimport numpy as np
import pandas as pd
import datetime
import numpy as np

def convert_date_cython(np.ndarray date_vec, np.ndarray time_vec):
    cdef int i
    cdef int N = len(date_vec)
    cdef out_ar = np.empty(N, dtype=np.object)
    date = None
    for i in range(N):
        if date is None or date_vec[i] != date_vec[i - 1]:
            dt_ar = map(int, date_vec[i].split("/"))
            date = datetime.date(dt_ar[2], dt_ar[0], dt_ar[1])
        time_ar = map(int, time_vec[i].split(".")[0].split(":"))
        time = datetime.time(time_ar[0], time_ar[1], time_ar[2])
        out_ar[i] = pd.Timestamp(datetime.datetime.combine(date, time))
    return out_ar
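One way to build and call this (a sketch on my part; the file name fast_dates.pyx and the use of pyximport are assumptions, any ordinary Cython build would work just as well):

# assumes the Cython code above is saved as fast_dates.pyx
import numpy as np
import pandas as pd
import pyximport

# compile the .pyx on import; numpy headers are needed for "cimport numpy"
pyximport.install(setup_args={"include_dirs": np.get_include()})
from fast_dates import convert_date_cython

df = pd.read_csv("myfile.gz", header=None,
                 names=["date", "time", "price", "size", "type",
                        "zero", "empty", "last"])
df.index = convert_date_cython(df["date"].values, df["time"].values)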
Answer 1 (score: 7)
Improvements over Michael WS's solution above:

- the pandas.Timestamp conversion is performed outside the Cython code
- atoi and handling native C strings is a little faster than the Python functions
- the number of datetime-lib calls is reduced from 2 to 1 (+ occasionally one more for the date)

NB! The date order in this code is day/month/year.

All in all, the code seems to be roughly 10x faster than the original convert_date_cython. However, if it is called after read_csv, then on an SSD the total time difference is only a few percent because of the reading overhead. I would guess that on a regular hard drive the difference would be even smaller.
cimport numpy as np
import datetime
import numpy as np
import pandas as pd
from libc.stdlib cimport atoi, malloc, free
from libc.string cimport strcpy

### Modified code from Michael WS:
### https://stackoverflow.com/a/15812787/2447082

def convert_date_fast(np.ndarray date_vec, np.ndarray time_vec):
    cdef int i, d_year, d_month, d_day, t_hour, t_min, t_sec, t_ms
    cdef int N = len(date_vec)
    cdef np.ndarray out_ar = np.empty(N, dtype=np.object)
    cdef bytes prev_date = <bytes> 'xx/xx/xxxx'
    cdef char *date_str = <char *> malloc(20)
    cdef char *time_str = <char *> malloc(20)

    for i in range(N):
        if date_vec[i] != prev_date:
            prev_date = date_vec[i]
            strcpy(date_str, prev_date)  ### xx/xx/xxxx
            date_str[2] = 0
            date_str[5] = 0
            d_year = atoi(date_str+6)
            d_month = atoi(date_str+3)
            d_day = atoi(date_str)

        strcpy(time_str, time_vec[i])  ### xx:xx:xx:xxxxxx
        time_str[2] = 0
        time_str[5] = 0
        time_str[8] = 0
        t_hour = atoi(time_str)
        t_min = atoi(time_str+3)
        t_sec = atoi(time_str+6)
        t_ms = atoi(time_str+9)

        out_ar[i] = datetime.datetime(d_year, d_month, d_day, t_hour, t_min, t_sec, t_ms)
    free(date_str)
    free(time_str)
    return pd.to_datetime(out_ar)
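(One caveat from me: this snippet is Python-2-era Cython and compares/strcpy's the raw strings, so with Python 3 string columns the inputs would presumably need to be encoded to bytes first, for example:)

# sketch of adapting the call for Python 3 str columns (my assumption, not from the answer)
date_b = df["date"].str.encode("ascii").values
time_b = df["time"].str.encode("ascii").values
df.index = convert_date_fast(date_b, time_b)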
Answer 2 (score: 2)
The cardinality of datetime strings is not that large. For example, there are only 24 * 60 * 60 = 86400 distinct time strings in %H-%M-%S format. If your dataset has far more rows than that, or the data contains lots of duplicate timestamps, adding a cache during parsing can speed things up dramatically.
For those who do not have Cython, here is an alternative solution in pure Python:
import numpy as np
import pandas as pd
from datetime import datetime


def parse_datetime(dt_array, cache=None):
    if cache is None:
        cache = {}
    date_time = np.empty(dt_array.shape[0], dtype=object)
    for i, (d_str, t_str) in enumerate(dt_array):
        try:
            year, month, day = cache[d_str]
        except KeyError:
            year, month, day = [int(item) for item in d_str[:10].split('-')]
            cache[d_str] = year, month, day
        try:
            hour, minute, sec = cache[t_str]
        except KeyError:
            hour, minute, sec = [int(item) for item in t_str.split(':')]
            cache[t_str] = hour, minute, sec
        date_time[i] = datetime(year, month, day, hour, minute, sec)
    return pd.to_datetime(date_time)


def read_csv(filename, cache=None):
    df = pd.read_csv(filename)
    df['date_time'] = parse_datetime(df.loc[:, ['date', 'time']].values, cache=cache)
    return df.set_index('date_time')
On the following specific dataset, the speedup is 150x+:
$ ls -lh test.csv
-rw-r--r-- 1 blurrcat blurrcat 1.2M Apr 8 12:06 test.csv
$ head -n 4 data/test.csv
user_id,provider,date,time,steps
5480312b6684e015fc2b12bc,fitbit,2014-11-02 00:00:00,17:47:00,25
5480312b6684e015fc2b12bc,fitbit,2014-11-02 00:00:00,17:09:00,4
5480312b6684e015fc2b12bc,fitbit,2014-11-02 00:00:00,19:10:00,67
In IPython:
In [1]: %timeit pd.read_csv('test.csv', parse_dates=[['date', 'time']])
1 loops, best of 3: 10.3 s per loop
In [2]: %timeit read_csv('test.csv', cache={})
1 loops, best of 3: 62.6 ms per loop
To limit memory usage, simply replace the dict cache with something LRU-like.
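A minimal sketch of such a bounded cache (my own illustration, built on collections.OrderedDict; any LRU implementation would work):

from collections import OrderedDict

class LRUCache(OrderedDict):
    """Dict-like cache that evicts the least recently used entry past maxsize."""

    def __init__(self, maxsize=100000):
        super().__init__()
        self.maxsize = maxsize

    def __getitem__(self, key):
        value = super().__getitem__(key)
        self.move_to_end(key)           # mark as most recently used
        return value

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.move_to_end(key)
        if len(self) > self.maxsize:
            self.popitem(last=False)    # evict least recently used

# usage with the reader above (file name from the example):
# df = read_csv('test.csv', cache=LRUCache(maxsize=86400))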