I want to train with sparse tensors in TensorFlow in Python. I have found a lot of code for doing this, but none of it works.
The sample code below shows what I mean; it raises an error:
import numpy as np
import tensorflow as tf

x_vals = tf.sparse.SparseTensor([[0, 0], [0, 1], [1, 2]], [1, 2, 1], [2, 3])
#x_vals = tf.sparse.to_dense(x_vals)  # this line decides whether the error occurs
y_vals = np.array([0, 1])

# use a function object as a cheap attribute container for the layer kwargs
layer_args = lambda: None
layer_args.input_shape = (3,)
layer_args.activation = "sigmoid"
layer_args.use_bias = False

model = tf.keras.models.Sequential(tf.keras.layers.Dense(1, **layer_args.__dict__))
model.compile(loss="mse")
model.fit(x_vals, y_vals)
The error is:
ValueError: The two structures don't have the same nested structure.
...followed by a huge stack trace.
Answer 0 (score: 0):
OK, I figured out how it works. The simplest solution is to use a generator:
from random import shuffle

def data_generator(x_vals, y_vals):
    # x_vals: scipy.sparse.csr_matrix, y_vals: numpy array
    inds = list(range(x_vals.shape[0]))
    shuffle(inds)
    for ind in inds:
        # yield one densified row and its label at a time
        yield (x_vals[ind, :].todense(), y_vals[ind])
Then pass the generator to fit() in place of the x values:
model.fit(data_generator(x_vals, y_vals))
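For concreteness, a minimal call could look like the sketch below. The csr_matrix construction is a hypothetical stand-in mirroring the question's 2x3 data (see the conversion note at the end of this answer), and the float32 dtype and steps_per_epoch value are my own choices, not part of the original answer:

import numpy as np
from scipy.sparse import csr_matrix

# hypothetical float32 CSR version of the question's 2x3 sparse data
x_vals = csr_matrix((np.array([1, 2, 1], dtype=np.float32), ([0, 0, 1], [0, 1, 2])),
                    shape=(2, 3))
y_vals = np.array([0, 1])

# the plain Python generator is consumed after one pass, so this trains a
# single epoch, one sample per step
model.fit(data_generator(x_vals, y_vals), steps_per_epoch=x_vals.shape[0])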
But it is very slow. Also, you can only train one epoch at a time, and many Keras features become unusable. Using a tensorflow.keras.utils.Sequence also works:
import math

class SparseSequence(tf.keras.utils.Sequence):
    def __init__(self, x_vals, y_vals, batch_size=32):
        # x_vals: scipy.sparse.csr_matrix, y_vals: numpy array
        self.x_vals = x_vals
        self.y_vals = y_vals
        self.inds = list(range(x_vals.shape[0]))
        shuffle(self.inds)
        self.batch_size = batch_size

    def __getitem__(self, item):
        # densify only the rows of the current batch
        from_ind = self.batch_size * item
        to_ind = self.batch_size * (item + 1)
        return (self.x_vals[self.inds[from_ind:to_ind], :].todense(),
                self.y_vals[self.inds[from_ind:to_ind]])

    def on_epoch_end(self):
        # reshuffle the sample order between epochs
        shuffle(self.inds)

    def __len__(self):
        return math.ceil(self.x_vals.shape[0] / self.batch_size)
Then use it in the fit function:
model.fit(SparseSequence(x_vals, y_vals))
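As a usage sketch (batch_size and epochs are arbitrary values of mine, and x_vals is assumed to already be a scipy CSR matrix, as the note below explains), the Sequence, unlike the raw generator, can be reused across epochs and reshuffles itself via on_epoch_end():

# x_vals: scipy.sparse.csr_matrix, y_vals: numpy array (see the note below)
model.fit(SparseSequence(x_vals, y_vals, batch_size=2), epochs=5)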
Keep in mind that you first need to convert your data to a scipy CSR sparse matrix, otherwise the code will not work. Also remember not to pass the labels via the "y" keyword of Model.fit().
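For example, the SparseTensor from the question could be converted with standard scipy calls. This is a hedged sketch, and the float32 cast is my assumption so that the Dense layer receives floating-point inputs:

import numpy as np
from scipy.sparse import coo_matrix

st = x_vals  # the tf.sparse.SparseTensor from the question
rows, cols = st.indices.numpy().T
x_csr = coo_matrix((st.values.numpy().astype(np.float32), (rows, cols)),
                   shape=tuple(st.dense_shape.numpy())).tocsr()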