
BCELoss Explained

程鸿煊
2023-12-01

Formula

$$loss(o,t) = -\frac{1}{n}\sum_i \big( t[i] \cdot \log(o[i]) + (1-t[i]) \cdot \log(1-o[i]) \big)$$
Here t[i] is the ground-truth label for element i and o[i] is the predicted probability. BCELoss is used for binary classification, so each label is either 0 or 1.
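
To make the formula concrete, here is a minimal sketch of a single term of the sum in plain Python (the variable names are illustrative; the values t = 1 and o = 0.7517 are taken from the worked example below):

import math

# One term of the BCE sum: true label t = 1, predicted probability o = 0.7517
t, o = 1, 0.7517
term = -(t * math.log(o) + (1 - t) * math.log(1 - o))
print(term)  # about 0.2854, the per-element loss before averaging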

Code Example

import torch
# Assume the raw predictions (logits) are as follows
input = torch.tensor([[ 1.9072,  1.1079,  1.4906],
                      [-0.6584, -0.0512,  0.7608],
                      [-0.0614,  0.6583,  0.1095]], requires_grad=True)
print(input)
print('-'*100)

from torch import nn
# nn.BCELoss expects probabilities in [0, 1], so first map the logits through a sigmoid
m = nn.Sigmoid()
print(m(input))
print('-'*100)

# Ground-truth labels
target = torch.FloatTensor([[0, 1, 1], [1, 1, 1], [0, 0, 0]])
print(target)
print('-'*100)

import math
# Each coefficient is the ground-truth label: the first row of target is 0, 1, 1,
# so r11, r12, r13 are weighted by 0, 1, 1 respectively.
r11 = 0 * math.log(0.8707) + (1-0) * math.log((1 - 0.8707))
r12 = 1 * math.log(0.7517) + (1-1) * math.log((1 - 0.7517))
r13 = 1 * math.log(0.8162) + (1-1) * math.log((1 - 0.8162))

r21 = 1 * math.log(0.3411) + (1-1) * math.log((1 - 0.3411))
r22 = 1 * math.log(0.4872) + (1-1) * math.log((1 - 0.4872))
r23 = 1 * math.log(0.6815) + (1-1) * math.log((1 - 0.6815))

r31 = 0 * math.log(0.4847) + (1-0) * math.log((1 - 0.4847))
r32 = 0 * math.log(0.6589) + (1-0) * math.log((1 - 0.6589))
r33 = 0 * math.log(0.5273) + (1-0) * math.log((1 - 0.5273))

r1 = (r11 + r12 + r13) / 3
r2 = (r21 + r22 + r23) / 3
r3 = (r31 + r32 + r33) / 3
# The log terms are negative, so negate the average to get a positive loss
bceloss = -(r1 + r2 + r3) / 3
print(bceloss)
print('-'*100)
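
# As a cross-check, the same manual computation can be vectorized with torch
# (a sketch reusing m, input, and target from above; the names probs and
# bceloss_vec are illustrative, and no extra output is printed):
probs = m(input)
bceloss_vec = -(target * torch.log(probs) + (1 - target) * torch.log(1 - probs)).mean()
# bceloss_vec should also evaluate to roughly 0.8000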

# nn.BCELoss expects probabilities, so apply the sigmoid transform yourself
loss = nn.BCELoss()
print(loss(m(input), target))
print('-'*100)

# nn.BCEWithLogitsLoss applies the sigmoid internally, so pass the raw logits
loss = nn.BCEWithLogitsLoss()
print(loss(input, target))

Output

tensor([[ 1.9072,  1.1079,  1.4906],
        [-0.6584, -0.0512,  0.7608],
        [-0.0614,  0.6583,  0.1095]], requires_grad=True)
----------------------------------------------------------------------------------------------------
tensor([[0.8707, 0.7517, 0.8162],
        [0.3411, 0.4872, 0.6815],
        [0.4847, 0.6589, 0.5273]], grad_fn=<SigmoidBackward0>)
----------------------------------------------------------------------------------------------------
tensor([[0., 1., 1.],
        [1., 1., 1.],
        [0., 0., 0.]])
----------------------------------------------------------------------------------------------------
0.8000147727101611
----------------------------------------------------------------------------------------------------
tensor(0.8000, grad_fn=<BinaryCrossEntropyBackward0>)
----------------------------------------------------------------------------------------------------
tensor(0.8000, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)

All three results are 0.8000, which confirms that our manual calculation matches PyTorch's.
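
To inspect the per-element losses rather than just their mean, both loss modules accept a reduction argument. The following sketch (reusing input and target from above) should reproduce, up to sign, the nine rij values computed by hand:

# reduction='none' returns the elementwise losses instead of averaging them
loss_none = nn.BCEWithLogitsLoss(reduction='none')
print(loss_none(input, target))
# Expected: a 3x3 tensor whose entries are -r11 ... -r33, e.g. roughly
# 2.0456 at position (0, 0); calling .mean() on it recovers the same 0.8000.

In practice, nn.BCEWithLogitsLoss is generally preferred over Sigmoid followed by nn.BCELoss, because it fuses the two steps into a single, numerically more stable computation.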
