How to format a text input's value to two decimal places

Time: 2017-07-20 06:09:28

Tags: php regex

<div class="form-group">
  <?php
    $data = array(
      'name' => 'Basicvalue', 'value' => set_value('Basicvalue'),
      'id' => 'Basicvalue', 'class' => 'form-control', 'readonly' => 'true'
    );
    echo form_input($data);
  ?>
</div>

This is my text box. When I type a value into it, it should display that value with two decimal places. How can I do that?

For example: if I enter 11, the field should display 11.00.

4 Answers:

Answer 0 (score: 1)

If you need to do it on the client side, use the following jQuery handler:

<div class="form-group">
  <?php
    $data = array(
      'name' => 'Basicvalue', 'value' => set_value('Basicvalue'),
      'id' => 'Basicvalue', 'class' => 'form-control changetodecimal', 'readonly' => 'true'
    );
    echo form_input($data);
  ?>
</div>

<script>
// Re-format the field to two decimal places whenever its value changes
$(document).on('change', '.changetodecimal', function () {
    var inputValue = parseFloat($(this).val());
    if (!isNaN(inputValue)) {                // ignore empty or non-numeric input
        $(this).val(inputValue.toFixed(2));  // e.g. "11" -> "11.00"
    }
});
</script>
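Note that .toFixed(2) returns a string and rounds as well as pads, so 11 becomes "11.00" and 11.567 becomes "11.57". Binding to the change event means the value is reformatted when the field loses focus after an edit, which avoids fighting the user's cursor on every keystroke.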

If you want to do it on the server side (PHP), use the following function:

echo number_format($number, 2);
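For instance, here is a minimal sketch that formats a submitted value before redisplaying it (the $_POST key 'Basicvalue' is an assumption based on the form above):

<?php
// Minimal sketch: format the posted form value to two decimal places.
// 'Basicvalue' is assumed to match the input name in the form above.
$number = isset($_POST['Basicvalue']) ? (float) $_POST['Basicvalue'] : 0.0;
echo number_format($number, 2); // e.g. 11 -> 11.00
?>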

Answer 1 (score: 1)

You can use number_format():

Example:

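A minimal sketch, with $num standing in for whatever value you need to format:

<?php
$num = 11;
echo number_format($num, 2); // outputs: 11.00
?>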

Answer 2 (score: 0)

Use number_format($yournumber, 2);

<?php
    $num = 3;
    $decimalNum = number_format($num, 2);
    echo $decimalNum;
?>

You will get the output 3.00.

Answer 3 (score: 0)

In PHP, you need to use the number_format() function:

number_format($num, 2);
echo number_format(11, 2); // 11.00

The number is formatted with decimal places. In the example above, the second parameter sets how many decimal places you want in the output.
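Note that number_format() rounds as well as pads, and by default inserts a thousands separator. A quick illustration:

echo number_format(11.567, 2);  // 11.57    (rounded)
echo number_format(11.4, 2);    // 11.40    (padded)
echo number_format(1234.5, 2);  // 1,234.50 (default thousands separator)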