I am designing a stacked autoencoder and trying to train my neural network on movie ratings; if a user has not rated any movies, that user is not considered.
The training loop runs perfectly, but when I run the test loop it shows me this error:
RuntimeError: The shape of the mask [1682] at index 0 does not match the shape of the indexed tensor [1, 1682] at index 0
I get the error in the final test block, at the line I have commented in the code below.
Code:
# Auto Encoder
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
# Importing dataset
movies = pd.read_csv('ml-1m/movies.dat', sep='::', header=None, engine='python', encoding='latin-1')
users = pd.read_csv('ml-1m/users.dat', sep='::', header=None, engine='python', encoding='latin-1')
ratings = pd.read_csv('ml-1m/ratings.dat', sep='::', header=None, engine='python', encoding='latin-1')
# Preparing the training set and the test set
training_set = pd.read_csv('ml-100k/u1.base', delimiter='\t')
training_set = np.array(training_set, dtype='int')
test_set = pd.read_csv('ml-100k/u1.test', delimiter='\t')
test_set = np.array(test_set, dtype='int')
# Getting the number of users and movies
# we are taking the maximum no of values from training set and test set
nb_users = int(max(max(training_set[:,0]), max(test_set[:,0])))
nb_movies = int(max(max(training_set[:,1]), max(test_set[:,1])))
# Converting the data into an array with users in rows and movies in columns
def convert(data):
    new_data = []
    for id_users in range(1, nb_users + 1):
        id_movies = data[:, 1][data[:, 0] == id_users]   # movie ids rated by this user
        id_ratings = data[:, 2][data[:, 0] == id_users]  # the corresponding ratings
        ratings = np.zeros(nb_movies)
        ratings[id_movies - 1] = id_ratings  # -1 because movie ids start at 1 but array indices start at 0
        new_data.append(list(ratings))
    return new_data
training_set =convert(training_set)
test_set =convert(test_set)
# Converting the data into Torch tensor
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
# creating the architecture of the neural network
class SAE(nn.Module):
    def __init__(self, ):  # extra arguments for the parent module could go after the comma
        super(SAE, self).__init__()  # inherit from the parent nn.Module class
        self.fc1 = nn.Linear(nb_movies, 20)  # 20 nodes in the first hidden layer
        self.fc2 = nn.Linear(20, 10)
        self.fc3 = nn.Linear(10, 20)  # decoding
        self.fc4 = nn.Linear(20, nb_movies)  # decoding back to the input size
        self.activation = nn.Sigmoid()
        # self.myparameters = nn.ParameterList(self.fc1, self.fc2, self.fc3, self.fc4, self.activation)
    def forward(self, x):
        x = self.activation(self.fc1(x))  # encoding
        x = self.activation(self.fc2(x))  # encoding
        x = self.activation(self.fc3(x))  # decoding
        x = self.fc4(x)  # no activation on the output layer
        return x
sae = SAE()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(sae.parameters(), lr=0.01, weight_decay=0.5)
# Training the SAE
nb_epoch = 200
for epoch in range(1, nb_epoch + 1):
    train_loss = 0
    s = 0.
    for id_user in range(nb_users):
        input = Variable(training_set[id_user]).unsqueeze(0)
        target = input.clone()
        if torch.sum(target.data > 0) > 0:
            output = sae(input)
            target.require_grad = False
            output[target == 0] = 0
            loss = criterion(output, target)
            mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
            loss.backward()
            train_loss += np.sqrt(loss.data.item() * mean_corrector)
            s += 1.
            optimizer.step()
    print('epoch: ' + str(epoch) + ' loss: ' + str(train_loss / s))
# Testing the SAE
test_loss = 0
s = 0.
for id_user in range(nb_users):
    input = Variable(training_set[id_user]).unsqueeze(0)
    target = Variable(test_set[id_user])
    if torch.sum(target.data > 0) > 0:
        output = sae(input)
        target.require_grad = False
        output[target == 0] = 0  # I get the error at this line
        loss = criterion(output, target)
        mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
        test_loss += np.sqrt(loss.data.item() * mean_corrector)
        s += 1.
print('test loss: ' + str(test_loss / s))
Answer 0 (score: 1)
Change:
output[target == 0] = 0 # I get error at this line
to:
output[(target == 0).unsqueeze(0)] = 0
Reason: the torch.Tensor returned by target == 0 has shape [1682], while output has shape [1, 1682]. (target == 0).unsqueeze(0) converts the mask to [1, 1682], so its shape matches output.
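A minimal sketch that reproduces the mismatch with the shapes from the error message; the random tensors here are just stand-ins for output and target:
import torch
output = torch.rand(1, 1682)   # sae(input) keeps the batch dim added by .unsqueeze(0)
target = torch.rand(1682)      # test_set[id_user] has no batch dim
mask = (target == 0)           # boolean mask of shape [1682]
# output[mask] = 0             # RuntimeError: mask [1682] vs indexed tensor [1, 1682]
output[mask.unsqueeze(0)] = 0  # mask is now [1, 1682] and matches output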
Answer 1 (score: 0)
When training the SAE, the target is a clone of the input, which already has the extra dimension added by .unsqueeze(0).
In the test block, the target does not get that extra dimension, so modify the code as follows.
Change:
target = Variable(test_set[id_user])
to:
target = Variable(test_set[id_user]).unsqueeze(0)
This gives the target the extra dimension that the tensor indexing expects.
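A sketch of the test block with this change applied, assuming the variables defined in the question (sae, criterion, training_set, test_set, nb_users, nb_movies):
test_loss = 0
s = 0.
for id_user in range(nb_users):
    input = Variable(training_set[id_user]).unsqueeze(0)
    target = Variable(test_set[id_user]).unsqueeze(0)  # now [1, nb_movies], same shape as output
    if torch.sum(target.data > 0) > 0:
        output = sae(input)
        output[target == 0] = 0  # mask and output shapes match, no RuntimeError
        loss = criterion(output, target)
        mean_corrector = nb_movies / float(torch.sum(target.data > 0) + 1e-10)
        test_loss += np.sqrt(loss.data.item() * mean_corrector)
        s += 1.
print('test loss: ' + str(test_loss / s))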