An efficient way to stack Dask arrays generated from Xarray

Asked: 2018-09-12 16:40:29

Tags: python dask netcdf python-xarray

So, I am trying to read a large number of relatively large netCDF files containing hydrologic data. The NetCDF files all look like this:

<xarray.Dataset>
Dimensions:         (feature_id: 2729077, reference_time: 1, time: 1)
Coordinates:
  * time            (time) datetime64[ns] 1993-01-11T21:00:00
  * reference_time  (reference_time) datetime64[ns] 1993-01-01
  * feature_id      (feature_id) int32 101 179 181 183 185 843 845 847 849 ...
Data variables:
    streamflow      (feature_id) float64 dask.array<shape=(2729077,), chunksize=(50000,)>
    q_lateral       (feature_id) float64 dask.array<shape=(2729077,), chunksize=(50000,)>
    velocity        (feature_id) float64 dask.array<shape=(2729077,), chunksize=(50000,)>
    qSfcLatRunoff   (feature_id) float64 dask.array<shape=(2729077,), chunksize=(50000,)>
    qBucket         (feature_id) float64 dask.array<shape=(2729077,), chunksize=(50000,)>
    qBtmVertRunoff  (feature_id) float64 dask.array<shape=(2729077,), chunksize=(50000,)>
Attributes:
    featureType:                timeSeries
    proj4:                      +proj=longlat +datum=NAD83 +no_defs
    model_initialization_time:  1993-01-01_00:00:00
    station_dimension:          feature_id
    model_output_valid_time:    1993-01-11_21:00:00
    stream_order_output:        1
    cdm_datatype:               Station
    esri_pe_string:             GEOGCS[GCS_North_American_1983,DATUM[D_North_...
    Conventions:                CF-1.6
    model_version:              NWM 1.2
    dev_OVRTSWCRT:              1
    dev_NOAH_TIMESTEP:          3600
    dev_channel_only:           0
    dev_channelBucket_only:     0
    dev:                        dev_ prefix indicates development/internal me...

I have 25 years of data recorded at hourly intervals, so in total there is roughly 4 TB of data.

For now, I just want to get seasonal averages (daily and monthly) of the streamflow values, so I created the following script.

import xarray as xr
import dask.array as da
from dask.distributed import Client
import os

workdir = '/path/to/directory/of/files'
files = [os.path.join(workdir, i) for i in os.listdir(workdir)]

client = Client(processes=False, threads_per_worker=4, n_workers=4, memory_limit='750MB')

big_array = []

for i, file in enumerate(files):
    # Open each file lazily, chunked along feature_id
    ds = xr.open_dataset(file, chunks={"feature_id": 50000})

    if i == 0:
        print(ds)

    print(ds.streamflow)

    big_array.append(ds.streamflow)

    ds.close()

    # Only look at the first six files while testing
    if i == 5:
        break

dask_big_array = da.stack(big_array, axis=0)

print(dask_big_array)

The ds.streamflow object looks like this when printed; as far as I understand, it is just a Dask array:

<xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000]

The weird thing is that when I stack the arrays, they seem to lose the chunking I applied to them earlier. When I print out the stacked dask_big_array object, I get this:

dask.array<stack, shape=(6, 2729077), dtype=float64, chunksize=(1, 2729077)>

The problem I run into is that when I try to run this code, I get the warning below, and then I believe memory gets overloaded, so I have to kill the process.

distributed.worker - WARNING - Memory use is high but worker has no data to store to disk...

So I guess I have a couple of questions:

  1. Why do the dask arrays lose their chunking when they are stacked?
  2. Is there a more efficient way to stack all of these arrays so that this process can be parallelized?

Per the comments, big_array looks like this:

[<xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000], <xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000], <xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000], <xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000], <xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000], <xarray.DataArray 'streamflow' (feature_id: 2729077)>
dask.array<shape=(2729077,), dtype=float64, chunksize=(50000,)>
Coordinates:
  * feature_id  (feature_id) int32 101 179 181 183 185 843 845 847 849 851 ...
Attributes:
    long_name:    River Flow
    units:        m3 s-1
    coordinates:  latitude longitude
    valid_range:  [       0 50000000]]

1 Answer:

Answer 0 (score: 4):

The problem here is that dask.array.stack() does not recognize xarray.DataArray objects as holding dask arrays, so it converts them all into NumPy arrays. That is how you end up running out of memory.
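A quick way to see what "holding dask arrays" means here (a minimal sketch; the file name is hypothetical) is to compare the DataArray with the dask array it wraps via its .data attribute:

import dask.array as da
import xarray as xr

# Hypothetical file name standing in for one of the hourly NetCDF files above
ds = xr.open_dataset('some_file.nc', chunks={"feature_id": 50000})

# The DataArray itself is not a dask array, which is why da.stack coerces it...
print(isinstance(ds.streamflow, da.Array))       # False
# ...but its .data attribute is, and it keeps the requested chunking
print(isinstance(ds.streamflow.data, da.Array))  # True
print(ds.streamflow.data.chunks)                 # 54 chunks of 50000, plus a final chunk of 29077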

You can fix this in a few different ways:

  1. Call dask.array.stack() on a list of dask arrays, e.g., switch big_array.append(ds.streamflow) to big_array.append(ds.streamflow.data).
  2. Use xarray.concat() instead of dask.array.stack(), e.g., write dask_big_array = xarray.concat(big_array, dim='time').
  3. Use xarray.open_mfdataset(), which combines opening many files and stacking them together, e.g., replace all of the logic here with xarray.open_mfdataset('/path/to/directory/of/files/*'); see the sketch after this list.
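Here is a minimal sketch of the third option, assuming the hourly files concatenate cleanly along their time coordinate; the resampling frequencies are only illustrative of the daily/monthly/seasonal means mentioned in the question:

import xarray as xr

# Open every file lazily as one dataset; each file contributes a single time step
ds = xr.open_mfdataset('/path/to/directory/of/files/*', chunks={"feature_id": 50000})

# Daily and monthly means of streamflow; these stay lazy until computed
daily = ds.streamflow.resample(time='1D').mean()
monthly = ds.streamflow.resample(time='1M').mean()

# Seasonal (DJF/MAM/JJA/SON) climatology over the whole record
seasonal = ds.streamflow.groupby('time.season').mean('time')

print(daily.compute())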