What is the scope of a variable inside an ng-repeat block in AngularJS?

Date: 2016-10-24 10:15:21

Tags: angularjs

What is the scope of the showDetails variable? Is it limited to its own li, or does it affect all of the li elements in the ul? For the full code, see http://jsfiddle.net/asmKj/

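The full code is in the fiddle, but a minimal sketch of the kind of markup being asked about might look like this (the controller name and item list are assumptions for illustration):

<div ng-controller="DemoCtrl">
  <ul>
    <!-- each repeated li -->
    <li ng-repeat="item in items">
      <a href="" ng-click="showDetails = !showDetails">{{item.name}}</a>
      <!-- does this showDetails belong to this li only, or to the whole ul? -->
      <p ng-show="showDetails">{{item.details}}</p>
    </li>
  </ul>
</div>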

3 answers:

Answer 0 (score: 1)

In the case of ng-repeat, if you create or use a variable like the showDetails field, a separate scope is created for each repeated element, so in this case each li ends up with its own $scope.showDetails.
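If you want to see this yourself, you can inspect two of the repeated elements in the browser console (assuming debug info is enabled, which is the default in Angular 1.x) and confirm that each li carries its own child scope:

// run in the browser console on the rendered page
var first  = angular.element(document.querySelectorAll('li')[0]).scope();
var second = angular.element(document.querySelectorAll('li')[1]).scope();

first.showDetails = true;           // set it on the first li's child scope only
console.log(first.showDetails);     // true
console.log(second.showDetails);    // undefined - the second li is untouched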

Now, to test this, you can create a scope variable with the same name on the controller, default it to true, and run it. You will see that all the details are visible on load, but when you click again it only affects that one item.
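A minimal sketch of that test might look like the following (the module and controller names are assumptions). Because each li's child scope inherits prototypally from the controller scope, every li initially reads showDetails as true; the first click then writes a new showDetails onto that li's own child scope, shadowing the parent value, so only that one item toggles:

angular.module('demoApp', [])
  .controller('DemoCtrl', function ($scope) {
    // parent value: every li initially inherits this through the scope chain
    $scope.showDetails = true;

    $scope.items = [
      { name: 'First',  details: 'Details of the first item'  },
      { name: 'Second', details: 'Details of the second item' }
    ];
  });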

To begin with, each li element gets its own copy of the variable, seeded with the value of the scope variable.

Check this fiddle: Fiddle

Answer 1 (score: 0)

It means the variable is scoped to the controller instance and can only be accessed within that controller. This is what gives the view a synchronized binding to the controller.

Answer 2 (score: 0)

It is limited to its own li; it does not affect the other li elements under the ul.