Why does the size of TensorFlow model files depend on the size of the dataset?

Date: 2019-03-05 12:32:33

Tags: tensorflow

After training on a dataset of 10K sentences, the .index, .meta, and .data files of my saved model were 3 KB, 58 MB, and 375 MB respectively.

Keeping the network architecture unchanged and training it on a dataset of 100K sentences, the file sizes were 3 KB, 139 MB, and 860 MB respectively.

This suggests the size depends on the dataset size. Yet according to this answer, since the architecture of the neural network is the same, the file size should be independent of the dataset size.

Why are the files so large?

I would also like to know what these files contain besides what is mentioned in the linked answer.

Do these files contain information related to the training history, such as the loss value at each step?

2 answers:

Answer 0: (score: 0)

Training summaries (such as per-step loss values) are contained in your event files, not in the checkpoint files.
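For illustration, here is a minimal TF1-style sketch of how a per-step loss ends up in an events.out.tfevents.* file via a summary writer; the loss value and the './logs' directory are placeholders, not taken from the question:

import tensorflow as tf

# Hypothetical loss value, just to have something to log.
loss = tf.constant(0.5, name='loss')
tf.summary.scalar('loss', loss)           # the per-step loss becomes a summary
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs')  # written to an events.out.tfevents.* file

with tf.Session() as sess:
    summary_value = sess.run(merged)
    writer.add_summary(summary_value, global_step=0)
writer.close()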

Answer 1: (score: 0)

import tensorflow as tf
from tensorflow.python.training import checkpoint_utils as cp

# List every variable stored in the checkpoint, together with its shape.
# (tf.train.list_variables is the public alias for the same function.)
cp.list_variables('./model.ckpt-12520')

Running the snippet above gives the following output:

[('Variable', []), ('decoder/attention_wrapper/attention_layer/kernel', [600, 300]), ('decoder/attention_wrapper/attention_layer/kernel/Adam', [600, 300]), ('decoder/attention_wrapper/attention_layer/kernel/Adam_1', [600, 300]), ('decoder/attention_wrapper/bahdanau_attention/attention_b', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_b/Adam', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_b/Adam_1', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_g', []), ('decoder/attention_wrapper/bahdanau_attention/attention_g/Adam', []), ('decoder/attention_wrapper/bahdanau_attention/attention_g/Adam_1', []), ('decoder/attention_wrapper/bahdanau_attention/attention_v', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_v/Adam', [300]), ('decoder/attention_wrapper/bahdanau_attention/attention_v/Adam_1', [300]), ('decoder/attention_wrapper/bahdanau_attention/query_layer/kernel', [300, 300]), ('decoder/attention_wrapper/bahdanau_attention/query_layer/kernel/Adam', [300, 300]), ('decoder/attention_wrapper/bahdanau_attention/query_layer/kernel/Adam_1', [300, 300]), ('decoder/attention_wrapper/basic_lstm_cell/bias', [1200]), ('decoder/attention_wrapper/basic_lstm_cell/bias/Adam', [1200]), ('decoder/attention_wrapper/basic_lstm_cell/bias/Adam_1', [1200]), ('decoder/attention_wrapper/basic_lstm_cell/kernel', [900, 1200]), ('decoder/attention_wrapper/basic_lstm_cell/kernel/Adam', [900, 1200]), ('decoder/attention_wrapper/basic_lstm_cell/kernel/Adam_1', [900, 1200]), ('decoder/dense/kernel', [300, 49018]), ('decoder/dense/kernel/Adam', [300, 49018]), ('decoder/dense/kernel/Adam_1', [300, 49018]), ('decoder/memory_layer/kernel', [300, 300]), ('decoder/memory_layer/kernel/Adam', [300, 300]), ('decoder/memory_layer/kernel/Adam_1', [300, 300]), ('embeddings', [49018, 300]), ('embeddings/Adam', [49018, 300]), ('embeddings/Adam_1', [49018, 300]), ('loss/beta1_power', []), ('loss/beta2_power', []), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam_1', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_0/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam_1', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam', [450, 600]), 
('stack_bidirectional_rnn/cell_1/bidirectional_rnn/bw/basic_lstm_cell/kernel/Adam_1', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/bias/Adam_1', [600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam', [450, 600]), ('stack_bidirectional_rnn/cell_1/bidirectional_rnn/fw/basic_lstm_cell/kernel/Adam_1', [450, 600])]

I realized that the embeddings variable is storing the word embeddings. Its shape, [49018, 300], is determined by the vocabulary size, which grows with the dataset even though the architecture is unchanged, and this explains the increase in file size. Note also that the Adam optimizer keeps two extra slot variables (Adam and Adam_1) for every trainable variable, roughly tripling the checkpoint size.

You can load the variable to confirm this:

cp.load_variable('./model.ckpt-12520', 'embeddings')  # returns a numpy array of shape (49018, 300)
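As a rough sanity check (a minimal sketch, assuming every variable is stored as 32-bit floats, which is the TF1 default), you can estimate the .data file size by summing the element counts of all listed variables:

import numpy as np
from tensorflow.python.training import checkpoint_utils as cp

# Approximate the .data file size: elements per variable x 4 bytes (float32).
total_bytes = 0
for name, shape in cp.list_variables('./model.ckpt-12520'):
    total_bytes += np.prod(shape, dtype=np.int64) * 4  # scalar shapes [] count as 1 element
print('approx. checkpoint size: %.0f MB' % (total_bytes / 1e6))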