How do I darken the entire background of the body?

Date: 2019-02-24 20:21:32

Tags: css3

%% Start of Backward Propagation (From Output towards Input)

% Computing Regularization Terms
% Regularization is implemented to reduce overfitting and the influence
% of large weights
% lambda is the regularization parameter
% Deltas for Weights and Biases are initialized to random values outside
% the loop at the start

% size(DeltaWeightsCell{1,1}) = 8x5

% size(DeltaWeightsCell{1,2}) = 1x8

% size(DeltaBiasCell{1,1}) = 8x1

% size(DeltaBiasCell{1,2}) = 1x1


L2RegH1 = lambda*DeltaWeightsCell{1,1};      % Regularization for Weights
                                             % of Hidden Layer

L2RegO = lambda*DeltaWeightsCell{1,2};       % Regularization for Weights
                                             % of Output Layer

CellL2RegW = {L2RegH1 L2RegO};

L2RegB1 = lambda*DeltaBiasCell{1,1};         % Regularization for Biases
                                             % of Hidden Layer

L2RegBO = lambda*DeltaBiasCell{1,2};         % Regularization for Biases
                                             % of Output Layer

CellL2RegB = {L2RegB1 L2RegBO};

% Computing Gradients for Neurons
% (The gradient carries the magnitude and direction of the error for a
% neuron)

% O = 1 (Number of Outputs)

% (YOo_p-T(i)) is the error between Network Output and Target

% ((1+YOo_p)*(1-YOo_p)) is the derivative of the activation function at
% the output ((1+y)*(1-y) = 1-y^2, the tanh derivative)

% i = 1:MaximumIterations (implemented using a for loop)

% Gradient of the Output Layer Neuron
for n = 1:O
    GradYO(n) = (YOo_p-T(i))*((1+YOo_p)*(1-YOo_p)); % Gradient to be
                                                    % backpropagated
end

% HN = 8 (Hidden Layer Neurons)

% (YH1o_p(1,q))*(1-YH1o_p(1,q)) is the derivative of the activation
% function at the hidden layer (y*(1-y), the logistic derivative)

% WeightsCell{1,2} = Vector of Weights between Output and Hidden layer
% (size = 1x8)

% Gradient of the Hidden Layer Neurons
for q = 1:HN
    GradH1(1,q) = ...
        (YH1o_p(1,q))*(1-YH1o_p(1,q))*(GradYO*WeightsCell{1,2}(1,q));
end
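
Since YH1o_p and WeightsCell{1,2} are both 1x8 row vectors, and GradYO is effectively a scalar here (O = 1), the loop above can also be collapsed into one element-wise expression. A minimal equivalent sketch:

% Vectorized form of the hidden-layer gradient (same result as the loop)
GradH1 = YH1o_p .* (1 - YH1o_p) .* (GradYO * WeightsCell{1,2});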

% Computing Deltas
% (Deltas are computed corresponding to each of the weights in the
% network)
%
% Deltas for Weights of Hidden Layer 1
% Delta[x][y] = [x]: index of input; [y]: index of Neuron
%
% The five inputs are denoted by x1, x2, x3, x4, x5.
% eta = learning rate (taken as 0.01)

% For Weights Corresponding to First Input
for v = 1:HN
    DeltaH11(v,1) = eta*GradH1(v)*x1(i);
end

% For Weights Corresponding to Second Input
for v = 1:HN
    DeltaH12(v,1) = eta*GradH1(v)*x2(i);
end

% For Weights Corresponding to Third Input
for v = 1:HN
    DeltaH13(v,1) = eta*GradH1(v)*x3(i);
end

% For Weights Corresponding to Fourth Input
for v = 1:HN
    DeltaH14(v,1) = eta*GradH1(v)*x4(i);
end

% For Weights Corresponding to Fifth Input
for v = 1:HN
    DeltaH15(v,1) = eta*GradH1(v)*x5(i);
end

% Deltas for Weights of Hidden Layer
DeltaWeightsCell{1,1} = [DeltaH11 DeltaH12 DeltaH13 DeltaH14 DeltaH15];
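
The five loops and the concatenation above build the 8x5 delta matrix column by column; equivalently, the whole matrix can be formed in one outer product. A minimal sketch, assuming GradH1 is the 1x8 gradient row computed earlier and x1..x5 hold the current sample's inputs:

% Vectorized form: outer product of the gradient column (8x1)
% and the input row (1x5), giving the same 8x5 delta matrix
DeltaWeightsCell{1,1} = eta * GradH1' * [x1(i) x2(i) x3(i) x4(i) x5(i)];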

% Deltas for Weights of Output Layer
for aa = 1:HN
    DeltaWeightsCell{1,2}(aa) = eta*GradYO*YH1o_p(aa);
end

% Computing Deltas for Biases
% (the bias input is a constant 1.0)

% Deltas for Biases of Hidden Layer
for bb = 1:HN
    DeltaBiasCell{1,1}(bb) = eta*GradH1(bb)*1.0;
end

% Delta for Bias of Output Layer
for dd = 1:O
    DeltaBiasCell{1,2}(dd) = eta*GradYO(dd)*1.0;
end

% Updating Weights and Biases

WeightsCell{1,1} = WeightsCell{1,1} + DeltaWeightsCell{1,1} + ...
                   CellL2RegW{1,1};

WeightsCell{1,2} = WeightsCell{1,2} + DeltaWeightsCell{1,2} + ...
                   CellL2RegW{1,2};

BiasCell{1,1} = BiasCell{1,1} + DeltaBiasCell{1,1} + CellL2RegB{1,1};

BiasCell{1,2} = BiasCell{1,2} + DeltaBiasCell{1,2} + CellL2RegB{1,2};
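
For orientation, the comments above refer to i = 1:MaximumIterations; a minimal sketch of how this backward pass sits inside the per-sample training loop, assuming the forward-pass outputs YH1o_p and YOo_p are computed earlier in each iteration:

for i = 1:MaximumIterations
    % Forward pass (assumed): compute the hidden outputs YH1o_p (1x8)
    % and the network output YOo_p from the inputs x1(i)..x5(i)

    % Backward pass: the regularization terms, gradients, deltas, and
    % the weight/bias updates shown above run here once per iteration
end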
* {
  margin: 0;
  padding: 0;
}

html,
body {
  box-sizing: border-box;
  height: 100%;
}

.wrapper-overlay {
  background: rgba(72, 70, 82, 0.5);
}

body {
  color: white;
  min-height: 100%;
  min-width: 100%;
  background: url("images/friends.jpg");
  background-size: cover;
  background-repeat: no-repeat;
  background-position: center center;
  position: relative;
}

I made the background 100% of its parent element, but background: rgba does not darken the bottom of the background. I tried putting the background in the html CSS, but that did not work either.

2 answers:

Answer 0 (score: 1)

I understand that you want to apply a dark background to the entire page. It looks like you should use the min-height property. Here is the basic logic to use:

<!DOCTYPE html>
<html>
<head>
    <style>
        html, body {
            height: 100%;
            min-height: 100%;
        }
        body {
            background: rgba(72, 70, 82, 0.5);
        }
    </style>
</head>
<body>
</body>
</html>
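
A related technique, offered here only as a sketch (it is not part of either answer), is to stack a translucent color layer over the image inside a single background declaration, reusing the asker's own image path and overlay color:

body {
  /* The gradient layer paints a uniform translucent color above the
     image, darkening it without any extra wrapper elements */
  background: linear-gradient(rgba(72, 70, 82, 0.5), rgba(72, 70, 82, 0.5)),
              url("images/friends.jpg") center center / cover no-repeat;
  min-height: 100%;
}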

Answer 1 (score: 1)

* {
  margin: 0;
  padding: 0;
}

html,
body {
  box-sizing: border-box;
  height: 100%;
}

.wrapper-overlay {
  background: rgba(72, 70, 82, 0.5);
}

body {
  color: white;
  min-height: 100%;
  min-width: 100%;
  background: url("images/friends.jpg");
  background-size: cover;
  background-repeat: no-repeat;
  background-position: center center;
  position: relative;
}
<!DOCTYPE html>
<html>
<body>
    <div class="wrapper-overlay">
    </div>
</body>
</html>

Your mistake is the wrapper overlay: it should be a div with that class, placed inside the body rather than wrapping it.
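
Note that as written, .wrapper-overlay has no dimensions, so the empty div will not visibly darken anything. A minimal sketch of the extra rules it would need to stretch over the whole page, assuming the markup above with the div inside the body:

.wrapper-overlay {
  /* Pin the overlay to the viewport so it covers the body's
     background image regardless of content height */
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: rgba(72, 70, 82, 0.5);
}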