Batch training of a large dataset and a model with many inputs

Asked: 2019-02-24 11:58:49

Tags: python keras

I have a Keras model with 50 inputs (x1 through x50) and 1 output. I'm running into the common problem of training in Keras on several large files that, taken together, are too large to fit in GPU memory.

Initially, I was trying:

import numpy as np
import pandas as pd

x1 = np.load('x1_train.npy')
x2 = np.load('x2_train.npy')
x3 = np.load('x3_train.npy')
x4 = np.load('x4_train.npy')
x5 = np.load('x5_train.npy')
x6 = np.load('x6_train.npy')

y_train = pd.read_csv("train_labels.csv")

and then fitting the data with:

model.fit([x1,x2,x3,x4,x5,x6], y_train, validation_data = ([x1_val,x2_val,x3_val,x4_val,x5_val,x6_val],y_validate), epochs = 15, batch_size = 20, verbose = 2)

But the available RAM was not enough to hold the data, so it crashed.

Now I'm doing this instead:

def generate_batches(batch_size):
  while True:
    # Each pass still loads every array fully into RAM
    x1 = np.load('x1_train.npy')
    x2 = np.load('x2_train.npy')
    x3 = np.load('x3_train.npy')
    x4 = np.load('x4_train.npy')
    x5 = np.load('x5_train.npy')
    x6 = np.load('x6_train.npy')

    y_train = pd.read_csv("train_labels.csv")

    # Yield one batch of rows at a time from each input
    for cbatch in range(0, x1.shape[0], batch_size):
      i = cbatch + batch_size
      yield ([x1[cbatch:i, :, :], x2[cbatch:i, :, :], x3[cbatch:i, :, :],
              x4[cbatch:i, :, :], x5[cbatch:i, :, :], x6[cbatch:i, :, :]],
             y_train[cbatch:i])

I plan to fit the model with fit_generator, but the code above still crashes.
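Roughly, this is the call I have in mind (just a sketch; steps_per_epoch is derived from the sample count and the batch size already used above):

batch_size = 20
n_samples = 77156                              # rows in each x*_train.npy file
steps = int(np.ceil(n_samples / batch_size))   # batches per epoch

model.fit_generator(generate_batches(batch_size),
                    steps_per_epoch=steps,
                    epochs=15,
                    verbose=2)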

x1, x2, ..., x50 each have shape (77156, 30, 50, 1).
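For scale, a rough back-of-the-envelope estimate of what holding everything in memory at once would take (assuming the arrays are float32; the saved dtype may differ):

bytes_per_input = 77156 * 30 * 50 * 1 * 4   # float32 assumed: ~0.46 GB per input array
bytes_total = bytes_per_input * 50          # all 50 inputs together: ~23 GB
print(bytes_per_input / 1e9, bytes_total / 1e9)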

0 Answers:

There are no answers yet.