Python numpy is not multiplying matrices correctly

Asked: 2016-11-02 08:19:51

Tags: python numpy matrix neural-network feature-extraction

I'm trying to extract the y values from a neural network. The problem seems to be that numpy is not multiplying the matrices the way I expected. I've included the code and output below for reference. Thanks in advance for any insight.

def columnToRow(column):
    newarray = np.array([column])
    return newarray


def calcIndividualOutput(indivInputs,weights,biases):
  # finds the resulting y values for one set of input data
  I_transposed= columnToRow(indivInputs)
  output = np.multiply(I_transposed, weights) + biases
  return output


def getOutputs(inputs,weights,biases):
  # iterates over each set of inputs to find corresponding outputs 
  # returns output matrix
  i_len = len(inputs)-1
  outputs = []
  for i in range(0,i_len):
    result = calcIndividualOutput(inputs[i],weights,biases)
    outputs.append(np.tanh(result))
    if (i==i_len):
      print("Final Input reached:", i)
  return outputs



# Test a single row of outputs
#print("Resulting Outputs0:\n\n",resultingOutputs[0,0:])

# Testing
currI = data[0]
Itrans = columnToRow(currI)

# Transpose weights and biases to match the dimensions of the input
b_trans = columnToRow(model_l1_b)
w_transposed = np.transpose(model_l1_W)

print(" THE CURRENT I0\n\n",currI,"\n\n")
print("transposed I:\n\n",Itrans,"\n\n")
print("Itrans shape:\n\n",Itrans.shape,"\n\n")

print("Current biases:\n\n",model_l1_b,"\n\n")
print("Current biases shape:\n\n",model_l1_b.shape,"\n\n")
print("B trans:",b_trans,"\n\n")
print("B trans shape:",b_trans.shape,"\n\n")

print("Current weights:\n\n",model_l1_W,"\n\n")
print("Transposed weights\n\n",w_transposed,"\n\n")
print("wtrans shape:\n\n",w_transposed.shape,"\n\n")

#Test calcIndividualOutput
testOutput = calcIndividualOutput(currI,w_transposed,b_trans)
print("Test calcIndividualOutput:\n\n",testOutput,"\n\n")
print("Test calcIndividualOutput Shape:\n\n",testOutput.shape,"\n\n")

resultingOutputs = getOutputs(data,w_transposed,b_trans)

Output:

THE CURRENT I0

 [-0.66399151 -0.59143853  0.5230611  -0.52583802 -0.31089544  0.47396523
 -0.7301591  -0.21042131  0.92044264 -0.48792791 -1.54127669] 


transposed I:

 [[-0.66399151 -0.59143853  0.5230611  -0.52583802 -0.31089544  0.47396523
  -0.7301591  -0.21042131  0.92044264 -0.48792791 -1.54127669]] 


Itrans shape:

 (1, 11) 


Current biases:

 [ 0.04497563 -0.01878226  0.03285328  0.00443657 -0.10408497  0.03982726
 -0.07724283] 


Current biases shape:

 (7,) 


B trans: [[ 0.04497563 -0.01878226  0.03285328  0.00443657 -0.10408497  0.03982726
  -0.07724283]] 


B trans shape: (1, 7) 


Current weights:

 [[ 0.02534341  0.01163373 -0.20102289  0.23845847  0.20859972 -0.09515963
   0.00744185 -0.06694793 -0.03806938  0.02241485  0.34134269]
 [ 0.0828636  -0.14711063  0.44623381  0.0095899   0.41908434 -0.25378567
   0.35789928  0.21531652 -0.05924326 -0.18556432  0.23026766]
 [-0.23547475 -0.18090464 -0.15210266  0.10483326 -0.0182989   0.52936584
   0.15671678 -0.64570689 -0.27296376  0.28720504  0.21922119]
 [-0.17561196 -0.42502806 -0.34866759 -0.07662395 -0.02361901 -0.10330012
  -0.2626377   0.19807351  0.20543958 -0.34499851  0.29347673]
 [-0.04404973 -0.31600055 -0.22984107  0.21733086 -0.15065287  0.18301299
   0.13399698  0.11884601  0.04380761 -0.03720044  0.0146924 ]
 [ 0.25086868  0.15678053  0.30350113  0.13065964 -0.30319506  0.47015968
   0.00549904  0.32486886 -0.00331726  0.22858304  0.16789439]
 [-0.10196115 -0.03687141 -0.28674102  0.01066647  0.2475083   0.15808311
  -0.1452509   0.09170815 -0.14578934 -0.07375327 -0.16524883]] 


Transposed weights

 [[ 0.02534341  0.0828636  -0.23547475 -0.17561196 -0.04404973  0.25086868
  -0.10196115]
 [ 0.01163373 -0.14711063 -0.18090464 -0.42502806 -0.31600055  0.15678053
  -0.03687141]
 [-0.20102289  0.44623381 -0.15210266 -0.34866759 -0.22984107  0.30350113
  -0.28674102]
 [ 0.23845847  0.0095899   0.10483326 -0.07662395  0.21733086  0.13065964
   0.01066647]
 [ 0.20859972  0.41908434 -0.0182989  -0.02361901 -0.15065287 -0.30319506
   0.2475083 ]
 [-0.09515963 -0.25378567  0.52936584 -0.10330012  0.18301299  0.47015968
   0.15808311]
 [ 0.00744185  0.35789928  0.15671678 -0.2626377   0.13399698  0.00549904
  -0.1452509 ]
 [-0.06694793  0.21531652 -0.64570689  0.19807351  0.11884601  0.32486886
   0.09170815]
 [-0.03806938 -0.05924326 -0.27296376  0.20543958  0.04380761 -0.00331726
  -0.14578934]
 [ 0.02241485 -0.18556432  0.28720504 -0.34499851 -0.03720044  0.22858304
  -0.07375327]
 [ 0.34134269  0.23026766  0.21922119  0.29347673  0.0146924   0.16789439
  -0.16524883]] 


wtrans shape:

 (11, 7) 


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-162-7e8be1d52690> in <module>()
     48 #Test calcIndividualOutput
     49 
---> 50 testOutput= calcIndividualOutput(currI,w_transposed,b_trans)
     51 print("Test calcIndividualOutput:\n\n",testOutput,"\n\n")
     52 print("Test calcIndividualOutput Shape:\n\n",testOutput.shape,"\n\n")

<ipython-input-162-7e8be1d52690> in calcIndividualOutput(indivInputs, weights, biases)
      7   # finds the resulting y values for one set of input data
      8   I_transposed= columnToRow(indivInputs)
----> 9   output = np.multiply(I_transposed, weights) + biases
     10   return output
     11 

ValueError: operands could not be broadcast together with shapes (1,11) (11,7) 
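
For context: np.multiply is element-wise, so the two operand shapes must be broadcast-compatible, and (1, 11) against (11, 7) is not (the trailing dimensions 11 and 7 differ). A standalone illustration with placeholder arrays, independent of the model data above:

import numpy as np

row = np.ones((1, 11))
W = np.ones((11, 7))

# Element-wise multiply tries to broadcast (1, 11) against (11, 7);
# the trailing dims 11 vs 7 don't match, so this raises the same ValueError.
np.multiply(row, W)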

3 Answers:

Answer 0 (score: 0):

np.multiply is for multiplying arrays element-wise, but judging by the dimensions of your data, I'd guess you're looking for matrix multiplication. For that, use np.dot.
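
A minimal sketch of the corrected function, using randomly generated placeholder data with the same shapes as in the question (11 inputs, 7 units) rather than the actual model weights:

import numpy as np

def calcIndividualOutput(indivInputs, weights, biases):
    # (1, 11) row vector times (11, 7) weights -> (1, 7) output row
    I_transposed = np.array([indivInputs])
    return np.dot(I_transposed, weights) + biases

x = np.random.randn(11)      # one set of inputs
W = np.random.randn(11, 7)   # transposed weights
b = np.random.randn(1, 7)    # transposed biases

print(calcIndividualOutput(x, W, b).shape)   # (1, 7)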

Answer 1 (score: 0):

The dot product maps R^n x R^n -> R, which is probably what you want. If you're coming from Matlab, this is the difference between A * B and A .* B.
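
A quick demonstration of that difference, on small arrays chosen purely for illustration:

import numpy as np

A = np.array([[1.0, 2.0, 3.0]])       # shape (1, 3)
B = np.array([[4.0], [5.0], [6.0]])   # shape (3, 1)

print(np.dot(A, B))         # matrix product, like Matlab's A * B   -> [[ 32.]]
print(np.multiply(A, B.T))  # element-wise, like Matlab's A .* B    -> [[  4.  10.  18.]]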

Answer 2 (score: 0):

I think you are looking for np.matmul(a, b). This is the actual row-by-column multiplication we perform in mathematics. So if a has dimensions AxB and b has dimensions BxC, then res = np.matmul(a, b) has shape AxC.
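
A small check of that shape rule, using placeholder arrays with the question's dimensions:

import numpy as np

a = np.ones((1, 11))   # A x B with A=1, B=11
b = np.ones((11, 7))   # B x C with B=11, C=7

res = np.matmul(a, b)
print(res.shape)       # (1, 7), i.e. A x C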