Keras - time per step increases with every epoch. Is there a memory-management problem in my code?

Asked: 2018-03-10 09:11:47

Tags: python tensorflow keras

[From the Anaconda prompt, progress bars removed]

Epoch 1/200000
250/250 - 15s 61ms/step - loss: 6.6400 - val_loss: 1.0008

Epoch 2/200000
250/250 - 43s 173ms/step - loss: 6.6387 - val_loss: 1.0003

Epoch 3/200000
250/250 - 70s 280ms/step - loss: 6.6201 - val_loss: 0.9997

Epoch 4/200000
250/250 - 94s 377ms/step - loss: 6.6736 - val_loss: 0.9991

Epoch 5/200000
250/250 - 123s 491ms/step - loss: 6.6491 - val_loss: 0.9999

As you can see, the time taken per step increases with every epoch. I'm not sure why this happens, since the model is fed data in the same format every time.

My CPU, RAM and GPU utilization all stay below 30%, but my GPU RAM is always maxed out (7 GB).
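For context, TensorFlow reserves nearly all available GPU memory up front by default, so a maxed-out GPU RAM reading does not by itself indicate a leak. With the TF 1.x API current at the time, on-demand allocation can be enabled like this (a config sketch, not a fix for the per-step slowdown):

```python
import tensorflow as tf
import keras.backend as K

# Ask TensorFlow to grow its GPU allocation as needed
# instead of reserving (almost) all GPU memory at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```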

Here is my Keras code.

import keras as kr
import h5py as hp
import numpy as np
import tensorflow as tf

#generator
def generator(filename, dataset, batch):
    with hp.File(filename) as h:
        d = h[dataset]
        size = d.size
        i = 0
        while i <= np.ceil(size/batch):
            t = d[i:min((i + 1) * batch, size)]
            _price = np.swapaxes(np.array([t['ask'], t['bid'], t['price']]), 0, 1)
            _time = np.swapaxes(np.array([t['year'], t['month'], t['week'], t['date'], t['day'], t['hour'], t['minute'], t['second'], t['microsecond']]), 0, 1)
            _sma = np.swapaxes(np.array([t['sma 1s'], t['sma 5s'], t['sma 13s'], t['sma 34s'], t['sma 1m'], t['sma 5m'], t['sma 15m'], t['sma 1h'], t['sma 4h'], t['sma 12h'], t['sma 1d'], t['sma 10d']]), 0, 1)
            _aa = t['alpha2']
            i += 1
            yield [_price, _time, _sma], _aa
            if i > size/batch:
                i = 0
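As a standalone sanity check (a minimal sketch using a dummy NumPy array in place of the HDF5 dataset), one can print how many rows the slice `d[i:min((i + 1) * batch, size)]` returns on each iteration:

```python
import numpy as np

size, batch = 10, 3
d = np.arange(size)  # stand-in for the HDF5 dataset

lengths = []
i = 0
while i <= np.ceil(size / batch):
    t = d[i:min((i + 1) * batch, size)]  # same indexing as in the generator above
    lengths.append(len(t))
    i += 1

print(lengths)  # → [3, 5, 7, 7, 6]: the slices grow, since the start index advances by 1, not by `batch`
```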


#inputs
price = kr.layers.Input(batch_shape=(None,3), name='price')
time = kr.layers.Input(batch_shape=(None,9), name='time')
sma = kr.layers.Input(batch_shape=(None,12), name='sma')

La = kr.layers.Dense(6, input_shape = (None, 3), activation='relu')(price)
Lb = kr.layers.Dense(18, input_shape = (None, 9), activation='relu')(time)
Lc = kr.layers.Dense(40, input_shape = (None, 12), activation='relu')(sma)
L = kr.layers.Concatenate()([La, Lb, Lc])
L = kr.layers.Dense(32, activation='relu')(L)
L = kr.layers.Dense(16, activation='relu')(L)

paa = kr.layers.Dense(1, activation='softsign')(L)

model = kr.Model(inputs=[price, time, sma], outputs=paa)
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
tensorboard = kr.callbacks.TensorBoard(log_dir='./TB', histogram_freq = 0, write_graph = True, batch_size = 10)
model.fit_generator(generator('train.h5', 'usdjpy', 512), steps_per_epoch = 250 , epochs = 200000,
        validation_data = generator('test.h5', 'usdjpy', 512), validation_steps = 10, class_weight = {0: 1.5, 1: 110, -1: 110}, callbacks = [tensorboard])

model.save('firstmodel.h5')
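To check whether the slowdown comes from data loading rather than the model itself, any generator can be wrapped so that the time spent producing each batch is printed (a plain-Python sketch; the wrapper name `timed` is illustrative):

```python
import time

def timed(gen, label='batch'):
    """Yield items from `gen`, printing how long each took to produce."""
    t0 = time.perf_counter()
    for item in gen:
        print('%s: %.1f ms' % (label, (time.perf_counter() - t0) * 1000))
        yield item
        t0 = time.perf_counter()  # restart the clock once the consumer asks for the next item
```

The training call would then read `model.fit_generator(timed(generator('train.h5', 'usdjpy', 512)), ...)`; if the printed times climb epoch over epoch, the generator, not the network, is the bottleneck.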

I suspect that some node's memory is not being released properly. Does anyone have any ideas?

Edit:

del _price
del _time
del _sma
del _aa

Just tried adding this inside the generator loop; it didn't help.

0 Answers