
leaf: leaf (tensor)

郑安晏
2023-12-01

PyTorch's Tensor class has an attribute called is_leaf, which we can think of as marking leaf nodes: when is_leaf is False the tensor is not a leaf node, and when is_leaf is True it is a leaf node (also called a leaf tensor).

So the question is: what does leaf do, and why do we need it?
We all know the requires_grad attribute of a tensor: when requires_grad is True, autograd records the operations performed on the tensor in preparation for automatic differentiation. However, not every tensor with requires_grad=True actually gets a grad after backward(); it must also be a leaf. In other words, being a leaf is the precondition, on top of requires_grad, for deciding whether a tensor's grad is retained.
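
A minimal sketch of this point (the names x and y are illustrative, not from the original text): after backward(), only the leaf tensor has its grad populated.

import torch

x = torch.ones(3, requires_grad=True)  # created by the user -> leaf
y = x * 2                              # result of an operation -> non-leaf
y.sum().backward()

print(x.is_leaf, x.grad)  # True tensor([2., 2., 2.])
print(y.is_leaf, y.grad)  # False None (grad of a non-leaf is not retained)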

is_leaf

  1. By convention, all tensors with requires_grad=False are leaf tensors (leaf Tensor).
  2. Tensors with requires_grad=True are leaf tensors if they were created by the user. This means they are not the result of any operation, so their grad_fn is None.
  3. Only leaf tensors have their grad populated during the backward pass. If you want the grad of a non-leaf tensor, call its retain_grad() method (see the sketch after this list).
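
As a small follow-up to point 3 (same illustrative names as the sketch above), retain_grad() asks autograd to also keep the grad of a non-leaf tensor:

import torch

x = torch.ones(3, requires_grad=True)
y = x * 2           # non-leaf: result of an operation
y.retain_grad()     # ask autograd to keep this non-leaf tensor's grad
y.sum().backward()

print(y.grad)  # tensor([1., 1., 1.]) -- kept because of retain_grad()
print(x.grad)  # tensor([2., 2., 2.]) -- leaf, kept by default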

Example:

>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it
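
A practical takeaway from the b and f cases above, as a hedged sketch (guarded so it only runs when a GPU is present): a tensor moved with .cuda() after creation is no longer a leaf and receives no grad, while a tensor created directly on the device stays a leaf.

import torch

if torch.cuda.is_available():
    # Moved after creation: the CUDA copy is a non-leaf, and the unnamed
    # CPU leaf it was copied from is unreachable, so its grad is lost.
    b = torch.rand(10, requires_grad=True).cuda()
    b.sum().backward()
    print(b.is_leaf, b.grad)  # False None

    # Created directly on the device: still a leaf, so grad is populated.
    f = torch.rand(10, requires_grad=True, device="cuda")
    f.sum().backward()
    print(f.is_leaf, f.grad)  # True tensor([1., 1., ...], device='cuda:0')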