Pug - whitespace before or after a word/element

Date: 2017-05-18 10:47:50

Tags: pug

I have a paragraph element that contains text and two hyperlink elements.

p
    | Favicon made by
    |
    a(href="http://www.niceandserious.com/", target="_blank") www.niceandserious.com
    |
    | from
    |
    a(href="http://www.flaticon.com/", target="_blank") www.flaticon.com
    | .

This renders the following output.

<p>Favicon made by <a href="http://www.niceandserious.com/" target="_blank">www.niceandserious.com</a> from <a href="http://www.flaticon.com/" target="_blank">www.flaticon.com</a>.</p>

Is it possible to get a space before or after a word/element without using `|`?

1 Answer:

Answer 0 (score: 1)

This works for me. Just add the following wherever you need a space:

#{' '}
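Applied to the snippet from the question, the `#{' '}` interpolation replaces the empty `|` lines. A sketch of what that could look like (not tested against a specific Pug version, so the exact placement may need adjusting):

```pug
p
  | Favicon made by#{' '}
  a(href="http://www.niceandserious.com/", target="_blank") www.niceandserious.com
  | #{' '}from#{' '}
  a(href="http://www.flaticon.com/", target="_blank") www.flaticon.com
  | .
```

Here `#{…}` is Pug's string interpolation syntax, so `#{' '}` emits a literal space character into the output even where Pug would otherwise trim trailing or leading whitespace from the text line.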