I am getting the following warning:
94: UserWarning: Converting sparse IndexedSlices to a dense Tensor with 1200012120 elements. This may consume a large amount of memory.
from this code:
from wordbatch.extractors import WordSeq
import wordbatch
from keras.layers import Input, Embedding
...
wb = wordbatch.WordBatch(normalize_text, extractor=(WordSeq, {"seq_maxlen": MAX_NAME_SEQ}), procs=NUM_PROCESSOR)
wb.dictionary_freeze = True
full_df["ws_name"] = wb.fit_transform(full_df["name"])
...
name = Input(shape=[full_df["name"].shape[1]], name="name")
emb_name = Embedding(MAX_TEXT, 50)(name)
...
That is, I feed the WordSeq output (from WordBatch) into the Embedding layer of a GRU network. How can I modify the code so that it does not convert to a dense tensor?
Answer 0 (score: 0)
I ran into the same problem with the Embedding layer in Keras. The solution was to explicitly use a TensorFlow optimizer, like this:
import tensorflow as tf
from keras.optimizers import TFOptimizer

model.compile(loss='mse',
              optimizer=TFOptimizer(tf.train.GradientDescentOptimizer(0.1)))
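This works because native TensorFlow optimizers can apply an `IndexedSlices` gradient row by row, whereas the code path that triggers the warning materializes a gradient the size of the whole embedding table. A minimal sketch in plain Python (the vocabulary size and batch ids here are hypothetical, chosen only to show the scale difference) illustrates the cost of each approach:

```python
# Illustrative sketch: a sparse embedding gradient (IndexedSlices) only
# carries the rows actually used in the batch, while a dense conversion
# materializes a gradient for every row of the embedding table.

VOCAB_SIZE = 1_000_000   # hypothetical vocabulary size
EMBED_DIM = 50           # embedding dimension, as in the question

# A batch typically references only a handful of token ids.
batch_ids = [3, 17, 42, 17]                       # duplicates allowed
row_grads = {i: [0.1] * EMBED_DIM for i in set(batch_ids)}

# Sparse update: work proportional to the rows actually touched.
sparse_elements = len(row_grads) * EMBED_DIM

# Dense conversion: work proportional to the entire table.
dense_elements = VOCAB_SIZE * EMBED_DIM

print(sparse_elements)   # 150
print(dense_elements)    # 50000000
```

At the scale in the warning (over a billion elements), that dense gradient is what exhausts memory, so keeping the optimizer on the sparse path avoids the blow-up entirely.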