Blas SGEMM error in TensorFlow

Date: 2017-11-13 20:24:47

Tags: tensorflow

When I try to run my model, I get:

InternalError (see above for traceback): Blas SGEMM launch failed : m=3686400, n=2, k=1

Running on Windows 10 with CUDA 8.0, on a GPU with compute capability 3.0 (GeForce 760M).

Code:

    with tf.name_scope('input'):
        x = tf.placeholder(tf.float16, shape=[None, 720, 1280, 1], name='x-input')
        y_ = tf.placeholder(tf.uint8, shape=[None, ], name='y-input')

    y_onehot = tf.one_hot(y_, depth=2)

    conv1 = tf.layers.conv2d(
        inputs=x,
        filters=1,
        kernel_size=[5, 5],
        padding="same",
        activation=tf.nn.relu)

    pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)

    conv2 = tf.layers.conv2d(
        inputs=pool1,
        filters=2,
        kernel_size=[1, 1],
        padding="same",
        activation=tf.nn.relu)

    logits = tf.reduce_mean(conv2, axis=[1, 2])

    y = tf.argmax(logits, axis=1)

    loss_op = tf.losses.softmax_cross_entropy(onehot_labels=y_onehot, logits=logits)
    train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(
        loss_op, global_step=tf.train.get_global_step())

    acc_op = tf.metrics.accuracy(labels=y_, predictions=tf.cast(y, tf.uint8))

1 Answer:

Answer 0 (score: 0)

For some reason float16 is not supported here; use float32 instead. The practical fix is to change `tf.float16` to `tf.float32` in the `x` placeholder (half-precision GEMM support in cuBLAS generally requires a newer GPU than one with compute capability 3.0).
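A minimal NumPy sketch of the dtype change the answer suggests (NumPy stands in for the TensorFlow graph here; in the question's code the fix is simply `tf.float32` in the placeholder). As an aside, the dimensions in the error message are consistent with the 1×1 convolution `conv2` lowered to a GEMM: assuming a batch of 16, the pooled 360×640×1 feature map gives m = 16·360·640 = 3686400 rows, k = 1 input channel, and n = 2 filters.

```python
import numpy as np

# Input data may arrive in half precision (as in the original placeholder)...
frames = np.random.rand(16, 360, 640, 1).astype(np.float16)

# ...but cast to float32 before the matmul-heavy ops, mirroring the
# suggested change of the placeholder dtype from tf.float16 to tf.float32.
x = frames.astype(np.float32)

# A 1x1 convolution is a plain matrix multiply over flattened pixels:
flat = x.reshape(-1, 1)                        # m = 16*360*640 = 3686400, k = 1
w = np.random.rand(1, 2).astype(np.float32)    # n = 2 output filters
logits = flat @ w                              # the GEMM shape from the error
```

Running this in float32 succeeds everywhere; the equivalent float16 GEMM is what the GPU's BLAS kernels could not launch.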