Loss is zero after the first iteration in a PyTorch implementation

Time: 2019-04-20 18:25:02

Tags: deep-learning pytorch loss-function

This code takes image vectors and word embeddings as input. When I try to train it, the loss is zero after the first iteration.


import numpy as np
import torch
from torch.autograd import Variable

for epi in range(res_epi, res_epi + max_epoch + 1, 1):
    h = 0
    for i in range(max_batch):
        t = h + batch_size

        # ----------------- load image ROI features and the matching (positive) caption embeddings
        image_path = img_list[h]
        img_s = img_roi.item().get(image_path)

        cur_cap = a.get(image_path)
        pos_feat = np.array(cur_cap)

        # repeat the image features once per word of the caption
        maxlen = len(pos_feat)
        img_f = np.repeat(img_s[np.newaxis, :, :], maxlen, axis=0)

        # the caption of a different image serves as the negative sample
        image_pathh = img_list[h + 100]
        neg_feat = np.array(a.get(image_pathh))

        # ----------------- to torch tensors on the GPU
        img_f = Variable(torch.from_numpy(img_f)).cuda().float()
        sent_f_p = Variable(torch.from_numpy(pos_feat)).cuda().float()
        sent_f_n = Variable(torch.from_numpy(neg_feat)).cuda().float()

        # ----------------- forward / backward pass
        optimizer.zero_grad()
        img_emb = model(img_f)
        loss = criterion(img_emb, sent_f_p, sent_f_n)
        loss.backward()
        optimizer.step()

        if i % log_itr == 0:
            print('epoch: %d  Batch_id %d \t loss %.9f' % (epi, i, loss.data))
    h = t
    print('epoch ' + str(epi) + ' done')
    if epi % log_epitr == 0:
        fname = '/DATA/sharma.21/workdone/captask' + str(epi) + '_EXTF.pt'
        torch.save(model.state_dict(), fname)

Since I am passing word embeddings as the positive and negative input features, is there any mistake in doing that?
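For comparison, here is a minimal, self-contained sketch of how a margin-based ranking loss behaves. It assumes the criterion is something like torch.nn.TripletMarginLoss (the snippet above never shows how criterion is constructed, so this is only an assumption). Such a loss computes max(d(anchor, positive) - d(anchor, negative) + margin, 0), so it is exactly zero, with zero gradient, whenever every negative already lies farther from its anchor than the positive by at least the margin:

import torch
import torch.nn as nn

# Assumed criterion -- the original post does not show how `criterion` is built.
criterion = nn.TripletMarginLoss(margin=1.0, p=2)

anchor   = torch.zeros(4, 8)      # 4 anchor embeddings of dimension 8
positive = anchor + 0.01          # positives lie very close to their anchors
negative = anchor + 10.0          # negatives lie very far from their anchors

loss = criterion(anchor, positive, negative)
print(loss.item())                # 0.0: the margin is already satisfied for every triplet

If the real criterion behaves like this, a loss that drops to exactly zero after one update can simply mean the margin constraint is already satisfied for the chosen positive and negative captions, rather than an error in how the word embeddings are fed in.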

0 Answers:

No answers yet