Question:

I defined a custom loss function, but calling backward raises an error. Can someone tell me how to fix it?

钱和平
2023-03-14
import numpy as np
import torch as t
from torch.autograd import Function


class loss(Function):
    @staticmethod
    def forward(ctx,x,INPUT):

        batch_size = x.shape[0]
        X = x.detach().numpy()
        input = INPUT.detach().numpy()
        Loss = 0
        for i in range(batch_size):
            t_R_r = input[i,0:4]
            R_r = t_R_r[np.newaxis,:]
            t_R_i = input[i,4:8]
            R_i = t_R_i[np.newaxis,:]
            t_H_r = input[i,8:12]
            H_r = t_H_r[np.newaxis,:]
            t_H_i = input[i,12:16]
            H_i = t_H_i[np.newaxis,:]

            t_T_r = input[i, 16:32]
            T_r = t_T_r.reshape(4,4)
            t_T_i = input[i, 32:48]
            T_i = t_T_i.reshape(4,4)

            R = np.concatenate((R_r, R_i), axis=1)
            H = np.concatenate((H_r, H_i), axis=1)


            temp_t1 = np.concatenate((T_r,T_i),axis=1)
            temp_t2 = np.concatenate((-T_i,T_r),axis=1)
            T = np.concatenate((temp_t1,temp_t2),axis=0)
            phi_r = np.zeros((4,4))
            row, col = np.diag_indices(4)
            phi_r[row,col] = X[i,0:4]
            phi_i = np.zeros((4, 4))
            row, col = np.diag_indices(4)
            phi_i[row, col] = 1 - np.power(X[i, 0:4],2)

            temp_phi1 = np.concatenate((phi_r,phi_i),axis=1)
            temp_phi2 = np.concatenate((-phi_i, phi_r), axis=1)
            phi = np.concatenate((temp_phi1,temp_phi2),axis=0)

            temp1 = np.matmul(R,phi)

            temp2 = np.matmul(temp1,T)  # error
            H_hat = H + temp2

            t_Q_r = np.zeros((4,4))
            t_Q_r[np.triu_indices(4,1)] = X[i,4:10]
            Q_r = t_Q_r + t_Q_r.T
            row,col = np.diag_indices(4)
            Q_r[row,col] = X[i,10:14]
            Q_i = np.zeros((4,4))
            Q_i[np.triu_indices(4,1)] = X[i,14:20]
            Q_i = Q_i - Q_i.T

            temp_Q1 = np.concatenate((Q_r,Q_i),axis=1)
            temp_Q2 = np.concatenate((-Q_i,Q_r),axis=1)
            Q = np.concatenate((temp_Q1,temp_Q2),axis=0)

            t_H_hat_r = H_hat[0,0:4]
            H_hat_r = t_H_hat_r[np.newaxis,:]
            t_H_hat_i= H_hat[0,4:8]
            H_hat_i = t_H_hat_i[np.newaxis,:]

            temp_H1 = np.concatenate((-H_hat_i.T,H_hat_r.T),axis=0)
            H_hat_H = np.concatenate((H_hat.T,temp_H1),axis=1)
            temp_result1 = np.matmul(H_hat,Q)
            temp_result2 = np.matmul(temp_result1,H_hat_H)

            Loss += np.log10(1+temp_result2[0][0])
        Loss = t.from_numpy(np.array(Loss / batch_size))
        return Loss
    @staticmethod
    def backward(ctx,grad_output):
        print('gradient')
        return grad_output
def criterion(output,input):
    return loss.apply(output,input)

This is my loss function, but it currently raises the following error:

Traceback (most recent call last):
  File "/Users/mrfang/channel_capacity/training.py", line 24, in <module>
    loss.backward()
  File "/Users/mrfang/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 150, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/Users/mrfang/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: function lossBackward returned an incorrect number of gradients (expected 2, got 1)

How can I fix this? Many thanks.

1 answer

严天逸
2023-03-14

Your forward(ctx, x, INPUT) takes two inputs, x and INPUT, so backward must also return two gradients, grad_x and grad_INPUT.

Also, in your snippet you are not really computing a custom gradient (forward converts everything to numpy with .detach().numpy(), which breaks the graph anyway), so you could write the loss directly with torch operations and let PyTorch's autograd compute the backward pass, without defining a special Function at all.
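For example, here is a minimal sketch of that idea. The name criterion and the placeholder per-sample math are illustrative only, not the asker's full formula; the point is that when every step stays on torch tensors (no .detach().numpy()), autograd derives backward automatically:

import torch

def criterion(x, INPUT):
    batch_size = x.shape[0]
    loss = x.new_zeros(())
    for i in range(batch_size):
        # Placeholder for the real per-sample block-matrix construction from the
        # question, written with torch.diag / torch.cat / torch.matmul instead of numpy
        phi_r = torch.diag(x[i, 0:4])
        R = INPUT[i, 0:4].unsqueeze(0)                 # simplified stand-in for R
        temp = torch.matmul(R, phi_r)                  # stays on the autograd graph
        loss = loss + torch.log10(1 + (temp ** 2).sum())
    return loss / batch_size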

If this is working code and you want to keep a custom loss, here is a quick boilerplate of what backward should contain:

@staticmethod
def forward(ctx, x, INPUT):
    # this is required so they're available during the backwards call
    ctx.save_for_backward(x, INPUT)

    # custom forward

@staticmethod
def backward(ctx, grad_output):
    x, INPUT = ctx.saved_tensors
    grad_x = grad_INPUT = None

    # compute grad here

    return grad_x, grad_INPUT

You do not need to compute gradients for inputs that don't require them, so you can simply return None for those.
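Putting both points together, here is a minimal sketch of a custom Function whose forward takes two inputs and whose backward therefore returns two values; the toy squared-error math and the name ToyLoss are hypothetical, only the return structure matters:

import torch
from torch.autograd import Function

class ToyLoss(Function):
    @staticmethod
    def forward(ctx, x, INPUT):
        ctx.save_for_backward(x, INPUT)
        return ((x - INPUT) ** 2).mean()

    @staticmethod
    def backward(ctx, grad_output):
        x, INPUT = ctx.saved_tensors
        grad_x = grad_INPUT = None
        if ctx.needs_input_grad[0]:
            # d/dx of mean((x - INPUT)^2), scaled by the incoming gradient
            grad_x = grad_output * 2 * (x - INPUT) / x.numel()
        # INPUT is treated as plain data here, so its gradient slot stays None
        return grad_x, grad_INPUT

x = torch.randn(3, 4, requires_grad=True)
target = torch.randn(3, 4)
ToyLoss.apply(x, target).backward()   # two values returned: no RuntimeError
print(x.grad.shape)                   # torch.Size([3, 4])

Returning one value per forward input is exactly what the error message is checking: forward took 2 arguments, so backward must return 2 gradients, with None counting as a valid gradient for an input that does not need one.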

More information can be found in the PyTorch documentation on extending autograd.
