Requesting multiple values from the graph at the same time

Date: 2016-02-19 18:12:34

Tags: tensorflow

In the code below, l2 surprisingly returns the same value as l1, but since the optimizer is requested in the list before l2, I expected the loss to be the new loss after training. Can I not request multiple values from the graph in a single call and expect consistent output?

if ((isset($_POST["MM_update"])) && ($_POST["MM_update"] == "form1")) {
  $updateSQL = sprintf("UPDATE cursosxcliente SET idCliente=%s, idCurso=%s, tiempo=%s, valor_curso=%s, congelado=%s, notas_cliente=%s, Fecha_inicio=%s, Fecha_exp=%s,Fecha_inicio_congelacion=%s,Fecha_fin_congelacion=%s WHERE idCursoXCliente=%s",
                       GetSQLValueString($_POST['idCliente'], "int"),
                       GetSQLValueString($_POST['idCurso'], "text"),
                       GetSQLValueString($_POST['tiempo'], "text"),
                       GetSQLValueString($_POST['valor_curso'], "int"),
                       GetSQLValueString($_POST['congelado'], "int"),
                       GetSQLValueString($_POST['notas_cliente'], "text"),
                       GetSQLValueString($_POST['Fecha_inicio'], "text"),
                       GetSQLValueString($_POST['Fecha_exp'], "text"),
                       GetSQLValueString($_POST['Fecha_inicio_congelacion'], "text"),
                       GetSQLValueString($_POST['Fecha_fin_congelacion'], "text"),
                       GetSQLValueString($_POST['idCursoXCliente'], "int"));
$hoy = date("Y-m-d");
$decongelar = explode("-",$row_actualizarCursoCliente['Fecha_fin_congelacion']);
if ($descongelar[0] == $hoy[0] && $descongelar[1] == $hoy[1] && $descongelar[2] == $hoy[2] && $row_actualizarCursoCliente['congelado']=="1") {
    $updateSQLs = sprintf("UPDATE cursosxcliente SET congelado=0,Fecha_inicio_congelacion=default,Fecha_fin_congelacion=default WHERE idCursoXCliente=%s",
    GetSQLValueString($_POST['congelado'], "int"),
    GetSQLValueString($_POST['Fecha_inicio_congelacion'], "text"),
    GetSQLValueString($_POST['Fecha_fin_congelacion'], "text"));
}
  mysql_select_db($database_connectBD_fc, $connectBD_fc);
  $Result1 = mysql_query($updateSQL,$updateSQLs, $connectBD_fc) or die(mysql_error());
  $updateGoTo = "exito-act-curs-cliente.php";
  if (isset($_SERVER['QUERY_STRING'])) {
    $updateGoTo .= (strpos($updateGoTo, '?')) ? "&" : "?";
    $updateGoTo .= $_SERVER['QUERY_STRING'];
  }
  header(sprintf("Location: %s", $updateGoTo));
}
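A minimal sketch of the setup described above (the names loss, optimizer, l1 and l2 come from the question; the toy model, learning rate, and feed data are assumed):

    import numpy as np
    import tensorflow as tf

    # Toy model: everything here other than loss/optimizer/l1/l2 is assumed.
    x = tf.placeholder(tf.float32, shape=[None, 10])
    y = tf.placeholder(tf.float32, shape=[None, 2])
    weights = tf.Variable(tf.random_uniform([10, 2], dtype=tf.float32))

    logits = tf.matmul(x, weights)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, y))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        X = np.random.rand(4, 10)
        Y = np.eye(2)[[0, 1, 0, 1]]  # one-hot labels

        # Loss before any training step.
        l1 = sess.run(loss, feed_dict={x: X, y: Y})

        # optimizer is first in the fetch list, yet l2 equals l1:
        # the order of fetches does not impose an evaluation order.
        _, l2 = sess.run([optimizer, loss], feed_dict={x: X, y: Y})

        # A separate run after the update shows the loss did change.
        l3 = sess.run(loss, feed_dict={x: X, y: Y})
        print(l1, l2, l3)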

3 Answers:

Answer 0 (score: 10)

No: the order in which you request them in the list has no effect on the evaluation order. For operations with side effects, like the optimizer, you need to enforce a specific ordering yourself, using with_dependencies or a similar control-flow construct, if you want to guarantee it. In general, side effects aside, TensorFlow returns each result to you by grabbing the node from the graph as soon as it has been computed, and the loss is obviously computed before the optimizer runs, since the optimizer requires the loss as one of its inputs. (Remember that loss is not a variable; it is a tensor, so it is not actually affected by the optimizer step.)

sess.run([loss, optimizer], ...)

sess.run([optimizer, loss], ...)

are equivalent.
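As a concrete check, under the same assumed setup as the sketch in the question above, both orderings hand back the loss computed from the weights as they stood before that call's update:

    # Both fetch orders return the pre-update loss for their own step,
    # because loss is consumed by the optimizer's gradient computation.
    l_a, _ = sess.run([loss, optimizer], feed_dict={x: X, y: Y})
    _, l_b = sess.run([optimizer, loss], feed_dict={x: X, y: Y})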

Answer 1 (score: 4)

I tested a logistic regression implemented in TensorFlow with three ways of calling session.run:

  1. All together:

    res1, res2, res3 = sess.run([op1, op2, op3])

  2. Separately:

    res1 = sess.run(op1)
    res2 = sess.run(op2)
    res3 = sess.run(op3)

  3. With dependencies:

    with tf.control_dependencies([op1]):
        op2_after = tf.identity(op2)
        op3_after = tf.identity(op3)
    res1, res2, res3 = session.run([op1, op2_after, op3_after])

With the batch size set to 10000, the results are:

    1: 0.05+ secs < 2: 0.11+ secs < 3: 0.25+ secs

The main difference between 1 and 3 is that the values fetched in 1 lag behind by just one mini-batch, so using 3 instead of 1 is probably not worth it.

Here is the test code (it is a logistic-regression example written by someone else...).

Here is the data

    #!/usr/bin/env python2
    # -*- coding: utf-8 -*-
    """
    Created on Fri Jun  2 13:38:14 2017
    
    @author: inse7en
    """
    
    from __future__ import print_function
    import numpy as np
    import tensorflow as tf
    from six.moves import cPickle as pickle
    import time
    
    pickle_file = '/Users/inse7en/Downloads/notMNIST.pickle'
    with open(pickle_file, 'rb') as f:
      save = pickle.load(f)
      train_dataset = save['train_dataset']
      train_labels = save['train_labels']
      valid_dataset = save['valid_dataset']
      valid_labels = save['valid_labels']
      test_dataset = save['test_dataset']
      test_labels = save['test_labels']
      del save  # hint to help gc free up memory
      print('Training set', train_dataset.shape, train_labels.shape)
      print('Validation set', valid_dataset.shape, valid_labels.shape)
      print('Test set', test_dataset.shape, test_labels.shape)
    
    
    image_size = 28
    num_labels = 10
    
    def reformat(dataset, labels):
      dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
      # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
      labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
      return dataset, labels
    train_dataset, train_labels = reformat(train_dataset, train_labels)
    valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
    test_dataset, test_labels = reformat(test_dataset, test_labels)
    print('Training set', train_dataset.shape, train_labels.shape)
    print('Validation set', valid_dataset.shape, valid_labels.shape)
    print('Test set', test_dataset.shape, test_labels.shape)
    
    # This is to expedite the process
    train_subset = 10000
    # This is a good beta value to start with
    beta = 0.01
    
    graph = tf.Graph()
    with graph.as_default():
        # Input data.
        # They're all constants.
        tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
        tf_train_labels = tf.constant(train_labels[:train_subset])
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)
    
        # Variables
        # They are variables we want to update and optimize.
        weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
        biases = tf.Variable(tf.zeros([num_labels]))
    
        # Training computation.
        logits = tf.matmul(tf_train_dataset, weights) + biases
        # Original loss function
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
        # Loss function using L2 Regularization
        regularizer = tf.nn.l2_loss(weights)
        loss = tf.reduce_mean(loss + beta * regularizer)
    
        # Optimizer.
        optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    
        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits)
        valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
        test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
    
        num_steps = 50
    
    
        def accuracy(predictions, labels):
            return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
                    / predictions.shape[0])
    
    
        with tf.Session(graph=graph) as session:
            # This is a one-time operation which ensures the parameters get initialized as
            # we described in the graph: random weights for the matrix, zeros for the
            # biases.
            tf.initialize_all_variables().run()
            print('Initialized')
            # Build the dependency-wrapped ops once, before the training
            # loop, so the graph does not grow with new nodes on every step.
            with tf.control_dependencies([optimizer]):
                loss_after_optimizer = tf.identity(loss)
                predictions_after = tf.identity(train_prediction)
                regularizers_after = tf.identity(regularizer)

            for step in range(num_steps):
                # Run the computations. We tell .run() that we want to run the
                # optimizer, and get the loss value and the training
                # predictions returned as numpy arrays.
                #_, l, predictions = session.run([optimizer, loss, train_prediction])

                start_time = time.time()
                _, l, predictions, regularizers = session.run(
                    [optimizer, loss_after_optimizer, predictions_after, regularizers_after])

                print("--- with dependencies: %s seconds ---" % (time.time() - start_time))
                #start_time = time.time()
                #opt = session.run(optimizer)
                #l = session.run(loss)
                #predictions = session.run(train_prediction)
                #regularizers = session.run(regularizer)
    
                #print("--- run separately: %s seconds ---" % (time.time() - start_time))
    
                #start_time = time.time()
                #_, l, predictions,regularizers = session.run([optimizer, loss, train_prediction, regularizer])
    
                #print("--- all together: %s seconds ---" % (time.time() - start_time))
    
                #if (step % 100 == 0):
                    #print('Loss at step {}: {}'.format(step, l))
                    #print('Training accuracy: {:.1f}'.format(accuracy(predictions,
                                                                      #train_labels[:train_subset, :])))
                    # Calling .eval() on valid_prediction is basically like calling run(), but
                    # just to get that one numpy array. Note that it recomputes all its graph
                    # dependencies.
    
                    # You don't have to do .eval above because we already ran the session for the
                    # train_prediction
                    #print('Validation accuracy: {:.1f}'.format(accuracy(valid_prediction.eval(),
                                                                        #valid_labels)))
            #print('Test accuracy: {:.1f}'.format(accuracy(test_prediction.eval(), test_labels)))
            #print(regularizer)
    

Answer 2 (score: 3)

As Dave points out, the order of the arguments to Session.run() makes no difference to the evaluation order, and the loss tensor in your example does not depend on the optimizer op. To add a dependency, you can use tf.control_dependencies() to add an explicit dependency on the optimizer having run before you fetch the loss:

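A minimal sketch of that pattern, reusing the loss and optimizer names from the question (the placeholder feeds x, y and data X, Y are assumed):

    with tf.control_dependencies([optimizer]):
        # This identity op can only run after the optimizer op has run.
        loss_after_optimizer = tf.identity(loss)

    _, l2 = sess.run([optimizer, loss_after_optimizer],
                     feed_dict={x: X, y: Y})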