How to define an input date in the WHERE clause (MS Access DB)

Asked: 2017-02-03 15:32:14

Tags: sql date ms-access where-clause

I have the following problem. I am currently using this SQL statement:

SELECT ID, MAX(Datum) AS DateField, Assets
FROM dbTable
WHERE Datum>=DateAdd("m",-12,Date())
GROUP BY Datum, ID, Assets
ORDER BY Datum DESC;

I need the last 12 months before a user-entered date, but I don't know how to define that input (e.g. #01.01.2017#) in the WHERE clause.

That means when the user selects 01/01/2017, the result must contain all 12 months of the preceding year.

3 Answers:

Answer 0 (score: 0)

Replace

Date()

with

Forms!MyForm!MyCombo.Value
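Applied to the original query, that replacement might look like this (a sketch; Forms!MyForm!MyCombo is the answer's placeholder for an actual form and combo-box control, and the upper bound is added on the assumption that dates after the selected one should be excluded):

```sql
SELECT ID, MAX(Datum) AS DateField, Assets
FROM dbTable
WHERE Datum >= DateAdd("m", -12, Forms!MyForm!MyCombo)
  AND Datum <= Forms!MyForm!MyCombo
GROUP BY Datum, ID, Assets
ORDER BY Datum DESC;
```

Note that this only works when the query is run while the form is open; otherwise Access prompts for the value.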

Answer 1 (score: 0)

Selecting Max(Datum) while also grouping by Datum makes no sense: each group then contains only a single Datum value, so the MAX is a no-op.

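A corrected version of the aggregate query might look like this (a sketch; it assumes the intent was the latest Datum per ID and Assets, which the answer does not state explicitly):

```sql
SELECT ID, Assets, MAX(Datum) AS DateField
FROM dbTable
WHERE Datum >= DateAdd("m", -12, Date())
GROUP BY ID, Assets
ORDER BY MAX(Datum) DESC;
```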

Answer 2 (score: 0)

OK, I solved it with the following:

SELECT ID, Datum, Assets
FROM dbTable
WHERE Datum>=DateAdd("m",-12,#01/01/2017#) and Datum<=#01/01/2017# and ID=325
GROUP BY Datum, ID, Assets
ORDER BY Datum DESC;
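The hard-coded #01/01/2017# literal can also be replaced with a declared query parameter, so Access prompts for the date at run time (a sketch; [Enter date] is a hypothetical parameter name, and PARAMETERS is standard Access SQL):

```sql
PARAMETERS [Enter date] DateTime;
SELECT ID, Datum, Assets
FROM dbTable
WHERE Datum >= DateAdd("m", -12, [Enter date])
  AND Datum <= [Enter date]
  AND ID = 325
GROUP BY Datum, ID, Assets
ORDER BY Datum DESC;
```

Since the query has no aggregate functions, the GROUP BY here merely deduplicates rows; SELECT DISTINCT would express the same intent more directly.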