I'm training an LR (logistic regression) model on a sparse dataset with TensorFlow's FtrlOptimizer.
The input is sparse and categorical: each record is one-hot encoded and stored as its non-zero feature indices. For example, the first two records are:

6 10 13
3 9 9

The model and gradient code:
feature_columns = [
    tf.feature_column.categorical_column_with_hash_bucket('query_id', 15),
    tf.feature_column.categorical_column_with_hash_bucket('ad_id', 15),
    tf.feature_column.categorical_column_with_hash_bucket('cat_id', 15),
]
label_column = tf.feature_column.numeric_column('label', dtype=tf.float32, default_value=0)
columns = feature_columns + [label_column]
cols_to_vars = {}
parsed_example = tf.parse_example(serialized_example, tf.feature_column.make_parse_example_spec(columns))
logits = tf.feature_column.linear_model(
    features=parsed_example,
    feature_columns=feature_columns,
    cols_to_vars=cols_to_vars
)
label = parsed_example['label']  # assumed: recover the label tensor used in the loss below
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=label, logits=logits))
optimizer = tf.train.FtrlOptimizer(learning_rate=0.5, learning_rate_power=-0.5,
                                   initial_accumulator_value=0.5,
                                   l1_regularization_strength=2,
                                   l2_regularization_strength=0.1)
trainables = tf.trainable_variables()
grads_and_vars = tf.gradients(loss, trainables)
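For context, a minimal sketch (my own, not from the original post) of how such per-variable gradients could be printed under TF1; serialized_example is assumed to be a placeholder and first_record a hypothetical serialized tf.Example string:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # gradients for the hashed weight variables come back as IndexedSlicesValue
    for g in sess.run(grads_and_vars, feed_dict={serialized_example: [first_record]}):
        print('current gradients is:', g)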
The result when feeding only the first record:
current gradients is: IndexedSlicesValue(values=array([[0.5]], dtype=float32), indices=array([6]), dense_shape=array([15, 1], dtype=int32))
current gradients is: IndexedSlicesValue(values=array([[0.5]], dtype=float32), indices=array([10]), dense_shape=array([15, 1], dtype=int32))
current gradients is: IndexedSlicesValue(values=array([[0.5]], dtype=float32), indices=array([13]), dense_shape=array([15, 1], dtype=int32))
current gradients is: [0.5]
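These 0.5 values check out by hand (my own sketch, assuming zero-initialized weights and a label of 0 for the first record): the logit starts at 0, sigmoid(0) = 0.5, and the gradient of sigmoid cross entropy with respect to each active weight is sigmoid(z) - y:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(15)                 # assumed: weights start at zero
active = [6, 10, 13]             # non-zero indices of the first record
z = w[active].sum()              # logit = 0.0 before any update
print(sigmoid(z) - 0.0)          # 0.5 -> matches each IndexedSlicesValue above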
Since the second record has no values at indices 6, 10, or 13, I expected the gradients at those indices not to change after the second record is processed. This seems to differ from the computation in the FTRL paper.
Can anyone point out my mistake? Thanks in advance.
Answer 0 (score: 0):
So far I've found at least one cause: the weights are updated with the average gradient over each batch, which works well for neural nets. Details here: https://stats.stackexchange.com/questions/266968/how-does-minibatch-gradient-descent-update-the-weights-for-each-example-in-a-bat
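A small illustration of that point (my own sketch, assuming batch size 2, zero-initialized weights, and labels of 0 for both records): each record contributes sigmoid(z) - y to its own active indices, and the batch gradient is that sum divided by the batch size, so the entries for 6, 10 and 13 become 0.25 rather than staying at 0.5:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

records = [[6, 10, 13], [3, 9, 9]]   # non-zero indices of the two records
labels = [0.0, 0.0]                  # assumed labels
batch_grad = np.zeros(15)
for idx, y in zip(records, labels):
    z = 0.0                          # zero weights -> logit is 0 for both records
    for i in idx:
        batch_grad[i] += sigmoid(z) - y
batch_grad /= len(records)           # mean over the batch
print(batch_grad[6], batch_grad[9])  # 0.25 and 0.5 (index 9 appears twice)

So the gradients at 6, 10 and 13 do change after the second record: not because FTRL touches those coordinates, but because the batch mean divides their contribution by the batch size.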