A summary of two ways to implement focal loss in PyTorch
I'll skip the chit-chat and get straight to the code!
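A quick refresher first: focal loss, introduced in Lin et al., "Focal Loss for Dense Object Detection", scales the standard cross-entropy by a modulating factor that down-weights well-classified examples:

FL(p_t) = -(1 - p_t)^gamma * log(p_t)

where p_t is the predicted probability of the ground-truth class and gamma >= 0 controls how strongly easy examples are suppressed (both implementations below use gamma = 2). An optional per-class weight can also be folded in to handle class imbalance, which is what compute_class_weights in the code derives from the label histogram.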
import torch
import torch.nn.functional as F
import numpy as np
from torch.autograd import Variable

'''
Two ways to implement focal loss in PyTorch (discussed here for a segmentation task).
The loss takes class imbalance into account; assume there are 6 classes in total,
including the background class.
'''

def compute_class_weights(histogram):
    classWeights = np.ones(6, dtype=np.float32)
    normHist = histogram / np.sum(histogram)
    for i in range(6):
        classWeights[i] = 1 / (np.log(1.10 + normHist[i]))
    return classWeights

def focal_loss_my(input, target):
    '''
    :param input: shape [batch_size, num_classes, H, W],
                  the raw convolutional output, before any activation function
    :param target: shape [batch_size, H, W]
    :return: scalar loss
    '''
    n, c, h, w = input.size()
    target = target.long()
    input = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
    target = target.contiguous().view(-1)

    number_0 = torch.sum(target == 0).item()
    number_1 = torch.sum(target == 1).item()
    number_2 = torch.sum(target == 2).item()
    number_3 = torch.sum(target == 3).item()
    number_4 = torch.sum(target == 4).item()
    number_5 = torch.sum(target == 5).item()

    frequency = torch.tensor((number_0, number_1, number_2, number_3,
                              number_4, number_5), dtype=torch.float32)
    frequency = frequency.numpy()
    classWeights = compute_class_weights(frequency)
    '''
    Compute the per-class weights from the ground-truth labels of the current batch.
    '''
    # weights = torch.from_numpy(classWeights).float().cuda()
    weights = torch.from_numpy(classWeights).float()

    focal_frequency = F.nll_loss(F.softmax(input, dim=1), target, reduction='none')
    '''
    As explained in the previous post,
    F.nll_loss(torch.log(F.softmax(inputs, dim=1)), target) is functionally
    identical to F.cross_entropy(inputs, target).
    F.nll_loss effectively one-hot encodes the target into a tensor with the same
    shape as the input and multiplies it element-wise with its first argument,
    which picks out log(p_gt), the log-probability of the correct class for each
    sample. Dropping the log here leaves focal_frequency with shape [num_samples]:
    the probability of the ground-truth class for each sample, negated.
    '''
    focal_frequency += 1.0  # shape [num_samples], now holds 1 - P(gt_class)

    focal_frequency = torch.pow(focal_frequency, 2)  # the modulating factor (1 - p_t)^gamma with gamma = 2
    focal_frequency = focal_frequency.repeat(c, 1)
    '''
    After the repeat, focal_frequency has shape [num_classes, num_samples].
    '''
    focal_frequency = focal_frequency.transpose(1, 0)
    loss = F.nll_loss(focal_frequency * torch.log(F.softmax(input, dim=1)),
                      target, weight=None,
                      reduction='mean')  # 'elementwise_mean' was renamed to 'mean' in PyTorch 1.0
    return loss

def focal_loss_zhihu(input, target):
    '''
    Follows the scheme from this Zhihu answer: https://zhuanlan.zhihu.com/p/28527749
    :param input: shape [batch_size, num_classes, H, W]
    :param target: shape [batch_size, H, W]
    :return: scalar loss
    '''
    n, c, h, w = input.size()
    target = target.long()
    inputs = input.transpose(1, 2).transpose(2, 3).contiguous().view(-1, c)
    target = target.contiguous().view(-1)

    N = inputs.size(0)
    C = inputs.size(1)

    number_0 = torch.sum(target == 0).item()
    number_1 = torch.sum(target == 1).item()
    number_2 = torch.sum(target == 2).item()
    number_3 = torch.sum(target == 3).item()
    number_4 = torch.sum(target == 4).item()
    number_5 = torch.sum(target == 5).item()

    frequency = torch.tensor((number_0, number_1, number_2, number_3,
                              number_4, number_5), dtype=torch.float32)
    frequency = frequency.numpy()
    classWeights = compute_class_weights(frequency)

    weights = torch.from_numpy(classWeights).float()
    weights = weights[target.view(-1)]  # this line is crucial: one weight per sample, indexed by its class

    gamma = 2

    P = F.softmax(inputs, dim=1)  # shape [num_samples, num_classes]

    class_mask = inputs.data.new(N, C).fill_(0)
    class_mask = Variable(class_mask)  # Variable is a no-op since PyTorch 0.4; kept from the original
    ids = target.view(-1, 1)
    class_mask.scatter_(1, ids.data, 1.)  # shape [num_samples, num_classes], one-hot encoding

    probs = (P * class_mask).sum(1).view(-1, 1)  # shape [num_samples, 1]
    log_p = probs.log()

    print('in calculating batch_loss', weights.shape, probs.shape, log_p.shape)
    # batch_loss = -weights * (torch.pow((1 - probs), gamma)) * log_p  # class-weighted variant
    batch_loss = -(torch.pow((1 - probs), gamma)) * log_p  # unweighted, so it matches focal_loss_my

    print(batch_loss.shape)

    loss = batch_loss.mean()
    return loss

if __name__ == '__main__':
    pred = torch.rand((2, 6, 5, 5))
    y = torch.from_numpy(np.random.randint(0, 6, (2, 5, 5)))
    loss1 = focal_loss_my(pred, y)
    loss2 = focal_loss_zhihu(pred, y)
    print('loss1', loss1)
    print('loss2', loss2)

'''
in calculating batch_loss torch.Size([50]) torch.Size([50, 1]) torch.Size([50, 1])
torch.Size([50, 1])
loss1 tensor(1.3166)
loss2 tensor(1.3166)
'''
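As a sanity check, here is a minimal reference sketch of my own (not part of the original post; the function name focal_loss_ref is mine): it computes the same unweighted focal term with gamma = 2 via F.log_softmax and gather, and should produce the same value as both functions above for the same inputs.

import torch
import torch.nn.functional as F

def focal_loss_ref(input, target, gamma=2):
    # input: [N, C, H, W] raw logits; target: [N, H, W] class indices
    n, c, h, w = input.size()
    logits = input.permute(0, 2, 3, 1).reshape(-1, c)         # [N*H*W, C]
    target = target.reshape(-1).long()                        # [N*H*W]
    log_p = F.log_softmax(logits, dim=1)                      # log-probabilities
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)  # log p of the gt class
    pt = log_pt.exp()                                         # p of the gt class
    return (-(1 - pt) ** gamma * log_pt).mean()               # mean of -(1 - p_t)^gamma * log(p_t)

Going through F.log_softmax and exponentiating is also numerically safer than calling .log() on softmax outputs, which is what probs.log() in focal_loss_zhihu does.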
That's the full content of this summary of two ways to implement focal loss in PyTorch. I hope it gives everyone a useful reference, and I hope you'll keep supporting 毛票票.