Training a simple linear model with sparse input in TensorFlow

Date: 2018-11-11 07:15:29

Tags: python-3.x tensorflow machine-learning

I'm trying to train a simple linear model and feed it a simulated sparse input matrix. I get no errors, but the model doesn't learn. My first debugging step was to print the logits, which shows that I'm getting the operations wrong: I end up with an output matrix instead of a vector (I think I'm computing an outer product, probably because I defined the shapes incorrectly). A small shape-check sketch follows the code below:

import tensorflow as tf
import numpy as np
import pandas as pd
from scipy.sparse import coo_matrix
from sklearn.datasets import make_blobs
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split

samples = 800
# getting datasets
X_values, y_flat = make_blobs(n_features=2, n_samples=samples, centers=3, random_state=500)
y = OneHotEncoder().fit_transform(y_flat.reshape(-1, 1)).todense()
y = np.array(y)    
X_train, X_test, y_train, y_test, y_train_flat, y_test_flat = train_test_split(X_values, y, y_flat)

X_test += np.random.randn(*X_test.shape) * 1.5

n_features = X_values.shape[1]
n_classes = len(set(y_flat))

weights_shape = (n_features, n_classes)
bias_shape = (1, n_classes)

b = tf.Variable(dtype=tf.float32, initial_value=tf.random_normal(bias_shape))
W = tf.Variable(dtype=tf.float32, initial_value=tf.random_normal(weights_shape))
x = tf.sparse.placeholder(tf.float32)
Y_true = tf.placeholder(dtype=tf.float32)

Y_pred = tf.sparse.matmul(x, W) + b

loss_function = tf.losses.softmax_cross_entropy(Y_true, Y_pred)
learner = tf.train.GradientDescentOptimizer(0.1).minimize(loss_function)

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  indices = np.vstack([coo_matrix(X_train).row, coo_matrix(X_train).col]).T
  values = coo_matrix(X_train).data
  shape = np.array(coo_matrix(X_train).shape)

  for i in range(100):
      result = sess.run([learner, Y_pred], feed_dict={
           x: tf.SparseTensorValue(indices, values, shape), Y_true: y_train})  # Will succeed.
      if i % 10 == 0:
          print(result)
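
As a sanity check on the shapes involved, here is a standalone NumPy sketch, independent of the TensorFlow session above (the sizes are illustrative only): a (samples, n_features) input multiplied by a (n_features, n_classes) weight matrix should produce (samples, n_classes) logits, i.e. one row of class scores per sample rather than a single vector.

import numpy as np

# Illustrative sizes only: 600 samples, 2 features, 3 classes.
samples, n_features, n_classes = 600, 2, 3
X_dense = np.random.randn(samples, n_features).astype(np.float32)
W_np = np.random.randn(n_features, n_classes).astype(np.float32)
b_np = np.random.randn(1, n_classes).astype(np.float32)

logits = X_dense @ W_np + b_np   # ordinary matrix product, not an outer product
print(logits.shape)              # expected: (600, 3), one row of class scores per sample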

I've been following the sparse matrix multiplication section of the TF documentation, https://www.tensorflow.org/api_docs/python/tf/sparse/matmul, but I haven't been able to solve the problem; a minimal standalone version of that sparse-times-dense pattern is sketched below for reference.
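This sketch feeds a SparseTensorValue into a sparse placeholder and multiplies it by a dense weight matrix. Assumptions: TF 1.x graph-mode API, with tf.sparse_placeholder and tf.sparse_tensor_dense_matmul used as the sparse-times-dense op (this may not be the exact op on the linked docs page); all sizes are illustrative only.

import numpy as np
import tensorflow as tf
from scipy.sparse import coo_matrix

# Illustrative data: 4 samples, 2 features.
X_np = np.random.randn(4, 2).astype(np.float32)
coo = coo_matrix(X_np)

x_sp = tf.sparse_placeholder(tf.float32)                 # sparse (samples, n_features) input
W = tf.Variable(tf.random_normal((2, 3)))                # dense (n_features, n_classes) weights
logits = tf.sparse_tensor_dense_matmul(x_sp, W)          # dense (samples, n_classes) output

feed = tf.SparseTensorValue(
    indices=np.vstack([coo.row, coo.col]).T,
    values=coo.data,
    dense_shape=coo.shape)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(logits, feed_dict={x_sp: feed}).shape)  # expected: (4, 3)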

0 Answers:

There are no answers yet.