Multiple TextBoxes Using the "Same Code"

Date: 2017-06-24 14:15:15

Tags: c# textbox uwp

My UWP app has a page with a 10x3 grid of TextBoxes for user input. I then perform calculations between them and display the results in three other TextBoxes.

I currently have the first row working, but all the other rows are "the same code". Is there a smarter way to do this for all the TextBoxes, instead of writing the same code again and again with different TextBox names?

Below is the code for the first row of TextBoxes.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# load the MNIST data used by mnist.train.next_batch() below
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)


def nonlin(x, deriv=False):
    if deriv:
        return tf.nn.sigmoid(x) * (1 - tf.nn.sigmoid(x))

    return tf.nn.sigmoid(x)


#We start building the computation graph by creating nodes for the input images and target output classes.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])




#weights and biases
W1 = tf.Variable(tf.random_uniform([784,400])) # 784x400 matrix (784 input features, 400 hidden units)
b1 = tf.Variable(tf.random_uniform([400]))

W2 = tf.Variable(tf.random_uniform([400,30])) # 400x30 matrix (second layer)
b2 = tf.Variable(tf.random_uniform([30]))

W3 = tf.Variable(tf.random_uniform([30,10])) # 30x10 matrix (10 output classes)
b3 = tf.Variable(tf.random_uniform([10]))


# temporary containers to avoid messing up computations
W1tmp = tf.Variable(tf.zeros([784,400])) # 784x400 matrix, same shape as W1
b1tmp = tf.Variable(tf.zeros([400]))

W2tmp = tf.Variable(tf.zeros([400,30])) # 400x30 matrix as second layer
b2tmp = tf.Variable(tf.zeros([30]))

W3tmp = tf.Variable(tf.zeros([30,10])) # 30x10 matrix (because we have 10 outputs)
b3tmp = tf.Variable(tf.zeros([10]))




#Before Variables can be used within a session, they must be initialized using that session.
sess = tf.InteractiveSession()  # InteractiveSession so accuracy.eval() works below
sess.run(tf.global_variables_initializer())


# multiplication across batch
# Note: tf.batch_matmul() was removed; tf.matmul() now performs batch matrix
# multiplication for tensors with rank > 2.

# Forward pass 

y1 = nonlin(tf.matmul(x,W1) + b1)
y2 = nonlin(tf.matmul(y1,W2) + b2)
y3 = nonlin(tf.matmul(y2,W3) + b3)


error3 = y_ - y3 # quadratic cost derivative



# Backward pass 
# error and y have same dimensions. It's only W that is unique
delta3 = tf.multiply(error3,nonlin(y3, deriv=True))  #assign delta



error2 = tf.matmul(delta3,W3, transpose_b=True)
delta2 = tf.multiply(error2,nonlin(y2, deriv=True))

error1 = tf.matmul(delta2,W2, transpose_b=True)
delta1 = tf.multiply(error1,nonlin(y1, deriv=True))



learning_rate = tf.constant(3.0)

# we first assign the deepest level to avoid extra evaluations

#with tf.control_dependencies([y1,y2,y3,delta1,delta2,delta3]): 
w1_assign = tf.assign(W1tmp, tf.add(W1, tf.multiply(learning_rate, tf.reduce_mean(tf.matmul(tf.expand_dims(x,-1), tf.expand_dims(delta1,-1), transpose_b=True), 0)) ))
b1_assign = tf.assign(b1tmp, tf.add(b1, tf.multiply(learning_rate, tf.reduce_mean(delta1, 0)) ))

w2_assign = tf.assign(W2tmp, tf.add(W2, tf.multiply(learning_rate, tf.reduce_mean(tf.matmul(tf.expand_dims(y1,-1), tf.expand_dims(delta2,-1), transpose_b=True), 0)) ))
b2_assign = tf.assign(b2tmp, tf.add(b2, tf.multiply(learning_rate, tf.reduce_mean(delta2, 0)) ))

w3_assign = tf.assign(W3tmp, tf.add(W3, tf.multiply(learning_rate, tf.reduce_mean(tf.matmul(tf.expand_dims(y2,-1), tf.expand_dims(delta3,-1), transpose_b=True), 0)) ))
b3_assign = tf.assign(b3tmp, tf.add(b3, tf.multiply(learning_rate, tf.reduce_mean(delta3, 0)) ))


w1_ok = tf.assign(W1,W1tmp)
w2_ok = tf.assign(W2,W2tmp)
w3_ok = tf.assign(W3,W3tmp)

b1_ok = tf.assign(b1,b1tmp)
b2_ok = tf.assign(b2,b2tmp)
b3_ok = tf.assign(b3,b3tmp)



#accuracy evaluation

correct_prediction = tf.equal(tf.argmax(y3,1), tf.argmax(y_,1)) #a list of booleans.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))



# we could use a single fixed batch, just to check that everything works:
#batch = mnist.train.next_batch(1000)

for epoch in range(10000):
  batch = mnist.train.next_batch(1000)
  #train_step.run(feed_dict={x: batch[0], y_: batch[1]})


  # A single sess.run([...]) executes each op those tensors depend on exactly
  # once (a tensor listed twice is still computed only once). To run the assign
  # ops again, sess.run() must be called again, as this loop does each epoch.

  # write new variable values to containers
  sess.run([w1_assign,w2_assign,w3_assign,b1_assign,b2_assign,b3_assign] , feed_dict={x: batch[0], y_: batch[1]})

  # write container contents into the variables in a separate sess.run() call
  sess.run([w1_ok,w2_ok,w3_ok,b1_ok,b2_ok,b3_ok])

  # precision computation

  print(str(accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})) + " / epoch: " + str(epoch))  # evaluate
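The W*tmp/b*tmp containers above implement a compute-then-commit update: every new value is computed from the old parameters first, and only then swapped into the real variables. A minimal plain-Python sketch of that pattern (toy values and hypothetical names, no TensorFlow):

```python
# Two-phase (compute-then-commit) parameter update, sketched in plain Python.
# The arithmetic is made up; the point is the ordering: no update ever reads
# a half-updated parameter set.

def step(params, grads, lr):
    # Phase 1: compute every new value from the OLD parameters only.
    tmp = {name: params[name] + lr * grads[name] for name in params}
    # Phase 2: commit all updates at once.
    params.update(tmp)
    return params

params = {"W1": 1.0, "W2": 2.0}
grads = {"W1": 0.5, "W2": -0.25}
step(params, grads, lr=2.0)
print(params["W1"], params["W2"])  # 2.0 1.5
```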

If any of you have a simpler way of doing what I am doing there, feel free to tell me. I'm a complete beginner..

Thanks in advance

1 answer:

Answer 0: (score: 0)

Since you are handling the same event (UIElement.LostFocus) on each TextBox with essentially the same logic, you can do it with a single method whose signature matches the delegate corresponding to that event.

<TextBox x:Name="TextBox1" LostFocus="TextBox_LostFocus" />
<TextBox x:Name="TextBox2" LostFocus="TextBox_LostFocus" />
<TextBox x:Name="TextBox3" LostFocus="TextBox_LostFocus" />

The XAML above defines three TextBoxes so that the LostFocus event they raise is handled by a single method, TextBox_LostFocus, in the code-behind:

private void TextBox_LostFocus(object sender, RoutedEventArgs e)
{
    // the same handler runs for every TextBox; sender identifies which one
    var textBoxThatLostFocus = (TextBox)sender;
}

The sender parameter is the control whose event is being handled (in this case a TextBox); casting it as shown above gives you access to its properties, so referring to the control by name is unnecessary.
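The single-handler idea is language-agnostic: the event system hands the handler a reference to whichever control raised the event, so one method serves every control. A minimal sketch in Python with hypothetical stand-in objects (UWP would pass the actual TextBox as sender):

```python
# One handler shared by many controls: the event system passes the sender,
# and the handler inspects it instead of hard-coding a control name.
# FakeTextBox is a made-up stand-in for illustration only.

class FakeTextBox:
    def __init__(self, name, text=""):
        self.name = name
        self.text = text

def on_lost_focus(sender):
    # the same logic runs for every box, identified via the sender
    return f"{sender.name} lost focus with text {sender.text!r}"

boxes = [FakeTextBox(f"TextBox{i}", str(i)) for i in range(1, 4)]
results = [on_lost_focus(b) for b in boxes]
print(results[0])  # TextBox1 lost focus with text '1'
```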

Edit: As others have pointed out, learning the MVVM design pattern will go a long way toward developing well-structured, maintainable, and testable Windows apps, not to mention that it will make your goal of "performing calculations between them and displaying the results in the other 3" easy to achieve.
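MVVM itself is a C#/XAML pattern, but the core idea can be sketched language-agnostically: a view model owns the input values and recomputes derived outputs whenever an input changes, so the view contains no calculation code. A minimal Python sketch with hypothetical names (a real UWP view model would raise INotifyPropertyChanged notifications instead of calling a private recompute method directly):

```python
# Language-agnostic sketch of the MVVM idea: inputs live in a view model,
# and a derived output is recomputed whenever an input changes.
# RowViewModel and its members are made-up names for illustration.

class RowViewModel:
    def __init__(self):
        self._inputs = [0.0, 0.0, 0.0]
        self.total = 0.0  # derived output the view would bind to

    def set_input(self, index, value):
        self._inputs[index] = value
        self._recompute()  # in C#, a property-changed event would fire here

    def _recompute(self):
        self.total = sum(self._inputs)

vm = RowViewModel()
vm.set_input(0, 2.5)
vm.set_input(1, 4.0)
print(vm.total)  # 6.5
```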