I am running an LSTM in PyTorch, but as far as I can tell it only works with a sequence length of 1. When I reshape the data to a sequence length of 4 (or any other number), I get an error about the input and target lengths not matching. If I reshape both the input and the target, the model complains that it does not accept multi-target labels.
My training data has 66512 rows and 16839 columns, and the target has 3 classes. I want to use a batch size of 200 and a sequence length of 4, i.e. use 4 rows of data per sequence.
Please advise how to adjust my model and/or data so that the model can run with different sequence lengths (e.g. 4).
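For reference, this is the batch shape I am trying to get to (just a sketch with dummy tensors, not my real data):

import torch
dummy_inputs = torch.randn(200, 4, 16839)    # one batch: 200 sequences of 4 consecutive rows
dummy_targets = torch.randint(0, 3, (200,))  # one class index (0, 1 or 2) per sequence
print(dummy_inputs.shape, dummy_targets.shape)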
batch_size = 200

import numpy as np
import torch
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader

# One-hot label columns -> class indices (0, 1 or 2) for CrossEntropyLoss
train_target = torch.tensor(train_data[['Label1','Label2','Label3']].values.astype(np.float32))
train_target = torch.argmax(train_target, dim=1)
# The remaining 16839 columns are the features
train = torch.tensor(train_data.drop(['Label1','Label2','Label3'], axis=1).values.astype(np.float32))
# unsqueeze(1) adds a sequence dimension of length 1: (66512, 1, 16839)
train_tensor = TensorDataset(train.unsqueeze(1), train_target)
train_loader = DataLoader(dataset=train_tensor, batch_size=batch_size, shuffle=True)
print(train.shape)
print(train_target.shape)
torch.Size([66512, 16839])
torch.Size([66512])
import torch.nn as nn

class LSTMModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
        super(LSTMModel, self).__init__()
        # Hidden dimensions
        self.hidden_dim = hidden_dim
        # Number of hidden layers
        self.layer_dim = layer_dim
        # Building LSTM: batch_first=True expects input of shape (batch, seq_len, input_dim)
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
        # Readout layer
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # Initialize hidden state with zeros
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_().to(device)
        # Initialize cell state
        c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim).requires_grad_().to(device)
        out, (hn, cn) = self.lstm(x, (h0, c0))
        # Index hidden state of last time step
        out = self.fc(out[:, -1, :])
        return out
input_dim = 16839
hidden_dim = 100
output_dim = 3
layer_dim = 1
batch_size = batch_size
num_epochs = 1
model = LSTMModel(input_dim, hidden_dim, layer_dim, output_dim)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.1
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
print(len(list(model.parameters())))
for i in range(len(list(model.parameters()))):
    print(list(model.parameters())[i].size())
6
torch.Size([400, 16839])
torch.Size([400, 100])
torch.Size([400])
torch.Size([400])
torch.Size([3, 100])
torch.Size([3])
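If I understand the shapes correctly, the 400 above is 4 × hidden_dim (the four LSTM gates stacked), and the (400, 16839) matrix is the input-to-hidden weight, so the model itself should accept any seq_len as long as the last input dimension stays 16839. A quick check with a made-up batch (my own sketch, not part of the real pipeline) seems to confirm this:

# Dummy batch of 8 sequences of length 4, just to check the model's expected shapes
dummy_batch = torch.randn(8, 4, input_dim).to(device)
with torch.no_grad():
    print(model(dummy_batch).shape)  # torch.Size([8, 3]) - one row of logits per sequence

So the problem seems to be in how I pair the reshaped inputs with the targets, rather than in the model itself.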
for epoch in range(num_epochs):
    for i, (train, train_target) in enumerate(train_loader):
        # Load data as a torch tensor with gradient accumulation abilities
        train = train.requires_grad_().to(device)
        train_target = train_target.to(device)
        # Clear gradients w.r.t. parameters
        optimizer.zero_grad()
        # Forward pass to get output/logits
        outputs = model(train)
        # Calculate Loss: softmax --> cross entropy loss
        loss = criterion(outputs, train_target)
        # Getting gradients w.r.t. parameters
        loss.backward()
        # Updating parameters
        optimizer.step()
        # Batch accuracy: fraction of sequences whose arg-max logit matches the target
        accuracy = (outputs.argmax(dim=1) == train_target).float().mean().item()
    print('Epoch: {}. Loss: {}. Accuracy: {}'.format(epoch, np.around(loss.item(), 4), np.around(accuracy, 4)))
Answer 0 (score: 1)
This is what finally worked: reshape the input data into sequences of 4 rows, each with a single target value; following the logic of the problem, I used the last value in each group of 4 as the target. It looks easy now, but it was quite tricky at the time. The rest of the posted code stays the same.
# Keep only the label of the last row in each group of 4 (rows 3, 7, 11, ...)
train_target = torch.tensor(train_data[['Label1','Label2','Label3']].iloc[3::4].values.astype(np.float32))
train_target = torch.argmax(train_target, dim=1)
# Regroup the features into sequences of 4 rows: (66512, 16839) -> (16628, 4, 16839)
train = torch.tensor(train_data.drop(['Label1','Label2','Label3'], axis=1).values.reshape(-1, 4, 16839).astype(np.float32))
train_tensor = TensorDataset(train, train_target)
train_loader = DataLoader(dataset=train_tensor, batch_size=batch_size, shuffle=True)
print(train.shape)
print(train_target.shape)
torch.Size([16628, 4, 16839])
torch.Size([16628])
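With this layout each batch drawn from train_loader has shape (batch_size, 4, 16839) and a single class index per sequence, which is what the batch_first=True LSTM and CrossEntropyLoss expect. As a quick check (a sketch, assuming the loader, model and device defined above and batch_size = 200):

xb, yb = next(iter(train_loader))
print(xb.shape, yb.shape)                  # torch.Size([200, 4, 16839]) torch.Size([200])
with torch.no_grad():
    print(model(xb.to(device)).shape)      # torch.Size([200, 3]) - (N, C) logits against (N,) targets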
Answer 1 (score: 0)
You have set input_dim = 16839, so your model expects input of shape (batch_size, seq_len, 16839).
The train_tensor you are drawing batches from has shape (66512, 1, 16839), so each batch has shape (batch_size, 1, 16839). That works because it satisfies the requirement above.
However, if you try to reshape that same training tensor so that seq_len = 4, the input_dim dimension will no longer be 16839, so it will no longer match what the model expects, and that is why you get a size-mismatch error.
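In other words, to get seq_len = 4 you need to group whole rows into sequences rather than splitting up the 16839 features. A minimal sketch (assuming train is the original (66512, 16839) feature tensor and train_target the (66512,) class indices from your question, and that the row count is divisible by 4):

seqs = train.reshape(-1, 4, 16839)           # (16628, 4, 16839): 4 consecutive rows per sequence
labels = train_target.reshape(-1, 4)[:, -1]  # one target per sequence, e.g. the last row's label
print(seqs.shape, labels.shape)              # torch.Size([16628, 4, 16839]) torch.Size([16628])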