Problem opening parquet files with fastparquet after updating to the latest version

Date: 2019-08-04 09:01:14

Tags: dask fastparquet

I created a set of parquet files like this:

dd.to_parquet(df, 'dir/of/parquet', partition_on=['month', 'day'], engine='fastparquet')

I used to read a subset of columns like this:

raw_data_view = dd.read_parquet(
    'data/raw_data_fast_par.par',
    columns=['@timestamp', 'http_user', 'dst', 'dst_port',
             'http_req_method', 'http_req_header_host',
             'http_req_header_referer', 'http_req_header_useragent',
             'http_req_secondleveldomain'],
    engine='fastparquet',
    filters=[('@timestamp', '>=', np.datetime64(start)),
             ('@timestamp', '<', np.datetime64(end))])

This worked fine before I updated to dask 2.2.0 and the latest fastparquet. Now I get this message when executing the read command:

RuntimeWarning: Multiple sorted columns found, cannot autodetect index
RuntimeWarning,

and this when calling compute():

ValueError: The columns in the computed data do not match the columns in the provided metadata
Expected: ['@timestamp', 'http_user', 'dst', 'dst_port', 'http_req_method', 'http_req_header_host', 'http_req_header_referer', 'http_req_header_useragent', 'http_req_secondleveldomain']
Actual:   ['@timestamp', 'http_user', 'dst', 'dst_port', 'http_req_method', 'http_req_header_host', 'http_req_header_referer', 'http_req_header_useragent', 'http_req_secondleveldomain', 'month', 'day']

Did something change?

0 Answers:

No answers yet.