How can I show the name of the column where SQL found the MIN value?

Date: 2016-12-02 23:30:02

Tags: postgresql

I have a table of products and prices. It contains 10 different price columns. I can successfully find the lowest value across all 10 price columns.

So, given this data:

Prod. Name | Week1  | Week2  | Week N | Week 10
Pizza      | $1.29  | $1.29  | $1.42  | $1.01

I can display:

Prod. Name | Lowest Price
Pizza      | $1.01

But how can I add another column that shows which column the lowest value came from?

The ideal output would look like this:

Prod. Name | Lowest Price | From Week
Pizza      | $1.01        | 10

The query I use to display the output is:

SELECT ProdName, LEAST(d1, d2, d3, d4, d5, d6, d7, d8, d9, d10) FROM results;

Edit: I forgot to mention that I'm working with about 1,600 rows of data in total. That will certainly make the code more involved!

1 answer:

Answer 0: (score: 0)

You can compare not just simple values but whole records:


    -- pair each value with its column name; records compare field by
    -- field, so the first field (the value) decides which pair is least
    with t(x, y, z) as (values (1, 3, 2), (5, 2, 3))
    select *, least((x, 'x'), (y, 'y'), (z, 'z')) from t;
╔═══╤═══╤═══╤═══════╗
║ x │ y │ z │ least ║
╠═══╪═══╪═══╪═══════╣
║ 1 │ 3 │ 2 │ (1,x) ║
║ 5 │ 2 │ 3 │ (2,y) ║
╚═══╧═══╧═══╧═══════╝

If that is not enough, you can convert the record to json, for example, and retrieve the values in separate columns:

    -- turn the least record into json to get at its individual fields
    with t(x, y, z) as (values (1, 3, 2), (5, 2, 3))
    select *, j->'f1' as val, j->>'f2' as fld
    from t, to_json(least((x, 'x'), (y, 'y'), (z, 'z'))) as j;
╔═══╤═══╤═══╤══════════════════════╤═════╤═════╗
║ x │ y │ z │          j           │ val │ fld ║
╠═══╪═══╪═══╪══════════════════════╪═════╪═════╣
║ 1 │ 3 │ 2 │ {"f1": 1, "f2": "x"} │ 1   │ x   ║
║ 5 │ 2 │ 3 │ {"f1": 2, "f2": "y"} │ 2   │ y   ║
╚═══╧═══╧═══╧══════════════════════╧═════╧═════╝

For your data, it could be something like:

    -- pair each price with its week number; the smallest record is the
    -- lowest price plus the week it came from
    select ProdName,
           j->>'f1' as "Lowest Price",
           (j->>'f2')::int as "From Week"
    from results,
         to_json(least((d1, 1), (d2, 2), (d3, 3), (d4, 4), (d5, 5),
                       (d6, 6), (d7, 7), (d8, 8), (d9, 9), (d10, 10))) as j;

(Adjust the casts to match the actual types of your price columns.)
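The record-comparison trick is easy to sanity-check outside the database, since Python tuples also compare element by element. A minimal sketch (the sample data and names are hypothetical, not from the question's table):

```python
# Tuples compare element by element, like PostgreSQL composite values:
# pair each price with its 1-based week number, then take the smallest
# pair. The price decides the comparison; the week number tags along.
rows = [
    {"name": "Pizza", "weeks": [1.29, 1.29, 1.42, 1.01]},
]

for row in rows:
    price, week = min((p, i + 1) for i, p in enumerate(row["weeks"]))
    print(f"{row['name']} | ${price} | week {week}")
```

With the sample list above this prints `Pizza | $1.01 | week 4`, mirroring how `least` over records picks both the minimum and its label in one pass.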