PyTorch vs. Keras: PyTorch model overfits heavily

Posted: 2018-04-28 18:21:28

Tags: python keras pytorch

For several days now I have been trying to replicate my Keras training results with PyTorch. Whatever I do, the PyTorch model overfits far earlier and more strongly, and ends up with a much weaker validation accuracy than the Keras model. For PyTorch I use the same Xception code from https://github.com/Cadene/pretrained-models.pytorch.

The data loading, augmentation, validation, training schedule, etc. are equivalent. Am I missing something obvious? There must be a general problem somewhere. I have tried thousands of different module constellations, but nothing seems to come even close to the Keras training. Can somebody help?

Keras model: val accuracy > 90%

from keras import applications, optimizers
from keras.layers import Dense, Dropout, GlobalMaxPooling2D
from keras.models import Model

# base model: pretrained Xception without its classification head
base_model = applications.Xception(weights='imagenet', include_top=False,
                                   input_shape=(img_width, img_height, 3))

# top model: global max pooling + 512-unit dense head with dropout
x = base_model.output
x = GlobalMaxPooling2D()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(4, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# Compile model
adam = optimizers.Adam(lr=0.0001)
model.compile(loss='categorical_crossentropy',
              optimizer=adam, metrics=['accuracy'])

# LROnPlateau etc. with equivalent settings as pytorch
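
The Keras callback itself is not shown in the post. As a minimal sketch, assuming it is meant to mirror the PyTorch ReduceLROnPlateau settings further down (factor=0.2, patience=5, cooldown=5, monitoring validation loss), it might look like this:

from keras.callbacks import ReduceLROnPlateau

# sketch: Keras callback mirroring the PyTorch scheduler settings below
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=5, cooldown=5)

# the data pipeline is not shown in the post; train_generator / val_generator
# and the epoch count here are hypothetical placeholders
# model.fit_generator(train_generator, validation_data=val_generator,
#                     epochs=200, callbacks=[reduce_lr])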

PyTorch model: val accuracy ~81%

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim import lr_scheduler

from xception import xception

# modified from https://github.com/Cadene/pretrained-models.pytorch
class XCeption(nn.Module):
    def __init__(self, num_classes):
        super(XCeption, self).__init__()

        original_model = xception(pretrained="imagenet")

        self.features=nn.Sequential(*list(original_model.children())[:-1])
        self.last_linear = nn.Sequential(
             nn.Linear(original_model.last_linear.in_features, 512),
             nn.ReLU(),
             nn.Dropout(p=0.5),
             nn.Linear(512, num_classes)
        )

    def logits(self, features):
        x = F.relu(features)
        x = F.adaptive_max_pool2d(x, (1, 1))
        x = x.view(x.size(0), -1)
        x = self.last_linear(x)
        return x

    def forward(self, input):
        x = self.features(input)
        x = self.logits(x)
        return x 

device = torch.device("cuda")
model=XCeption(len(class_names))
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

criterion = nn.CrossEntropyLoss(size_average=False)
optimizer = optim.Adam(model.parameters(), lr=0.0001)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)
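
Side note, not raised in the original post: size_average=False makes CrossEntropyLoss return the sum of the per-sample losses instead of their mean, so with the same lr=0.0001 the gradient scale differs from Keras, whose categorical_crossentropy is averaged over the batch. A minimal illustration of the difference:

import torch
import torch.nn as nn

# Illustration only: summed vs. averaged cross-entropy on the same batch.
# size_average=False corresponds to reduction='sum' in newer PyTorch versions.
logits = torch.randn(8, 4)             # batch of 8 samples, 4 classes
targets = torch.randint(0, 4, (8,))

loss_sum = nn.CrossEntropyLoss(reduction='sum')(logits, targets)
loss_mean = nn.CrossEntropyLoss()(logits, targets)     # default: mean
print(loss_sum.item(), loss_mean.item() * 8)           # equal up to float error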

Thank you very much!

Update: Settings:

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.2, patience=5, cooldown=5)

model = train_model(model, train_loader, val_loader, 
                        criterion, optimizer, scheduler, 
                        batch_size, trainmult=8, valmult=10, 
                        num_epochs=200, epochs_top=0)

Cleaned-up training function:

def train_model(model, train_loader, val_loader, criterion, optimizer, scheduler, batch_size, trainmult=1, valmult=1, num_epochs=None, epochs_top=0):
    for epoch in range(num_epochs):
        for phase in ['train', 'val']:
            running_loss = 0.0
            running_acc = 0
            total = 0
            # Iterate over data.
            if phase == "train":
                model.train(True)  # Set model to training mode
                for i in range(trainmult):
                    for data in train_loader:
                        # get the inputs
                        inputs, labels = data
                        inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
                        # zero the parameter gradients
                        optimizer.zero_grad()
                        # forward
                        outputs = model(inputs)  # not inception
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)
                        # backward + optimize only in the training phase
                        loss.backward()
                        optimizer.step()
                        # statistics
                        total += labels.size(0)
                        running_loss += loss.item() * labels.size(0)
                        running_acc += torch.sum(preds == labels)
                        train_loss = running_loss / total
                        train_acc = running_acc.double() / total
            else:
                model.train(False)  # Set model to evaluate mode
                with torch.no_grad():
                    for i in range(valmult):
                        for data in val_loader:
                            # get the inputs
                            inputs, labels = data
                            inputs, labels = inputs.to(torch.device("cuda")), labels.to(torch.device("cuda"))
                            # forward (no gradients needed for validation)
                            outputs = model(inputs)
                            _, preds = torch.max(outputs, 1)
                            loss = criterion(outputs, labels)
                            # statistics
                            total += labels.size(0)
                            running_loss += loss.item() * labels.size(0)
                            running_acc += torch.sum(preds == labels)
                            val_loss = running_loss / total
                            val_acc = running_acc.double() / total
                scheduler.step(val_loss)
    return model

2 Answers:

Answer 0 (score: 2):

This could be because of the type of weight initialization you are using; otherwise this should not happen. Try using the same initializer in both models.
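
For context (this detail is not in the answer itself): Keras Dense layers default to glorot_uniform kernel initialization with zero biases, while PyTorch's nn.Linear uses a Kaiming-uniform-style default. A minimal sketch of re-initializing only the new head to match the Keras defaults, run before wrapping the model in DataParallel:

import torch.nn as nn

# Sketch: apply Keras' glorot_uniform / zeros defaults to the added head only;
# the pretrained backbone weights are left untouched.
for m in model.last_linear.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)   # Keras kernel_initializer='glorot_uniform'
        nn.init.zeros_(m.bias)              # Keras bias_initializer='zeros'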

Answer 1 (score: 0):

self.features=nn.Sequential(*list(original_model.children())[:-1])

Are you sure this line re-instantiates your model in exactly the same way? You are using an nn.Sequential instead of the original Xception model's forward function. If that forward function does anything that is not exactly reproduced by wrapping the children in nn.Sequential, you will not reproduce the same performance.
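
One way to check this is to compare the nn.Sequential wrapper against the original model's own feature path on the same input. A sketch, assuming the Cadene model exposes the features()/logits() split that the code above is modelled on:

import torch
import torch.nn as nn
from xception import xception

# Sanity check: do the wrapped children reproduce the original feature path?
original_model = xception(pretrained="imagenet").eval()
wrapped = nn.Sequential(*list(original_model.children())[:-1]).eval()

x = torch.randn(1, 3, 299, 299)             # Xception's standard input size
with torch.no_grad():
    a = original_model.features(x)           # the model's own feature path
    b = wrapped(x)
print(a.shape, b.shape)                      # shapes should match first
print(torch.allclose(a, b, atol=1e-5))       # False would confirm a mismatch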

Instead of wrapping it in a Sequential, you could change it like this:

my_model = Xception()
# load weights before you change the architecture
my_model = load_weights(path_to_weights)
# overwrite the original's last_linear with your own
my_model.last_linear = nn.Sequential(
    nn.Linear(my_model.last_linear.in_features, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, num_classes)
)
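
Overwriting last_linear in place like this leaves the rest of the pretrained model, including its original forward() path, untouched; only the classifier head is replaced, which avoids the re-wrapping issue described above.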