Contents
| Function / Topic | Notes | Reference |
| --- | --- | --- |
| tensor.unsqueeze(dim) | Inserts a dimension of size one at position dim | |
| nn.Linear | y = wx + b | Linear — PyTorch 1.9.1 documentation |
| reduction | Replaces the deprecated reduce and size_average arguments | |
| tensor.ndim, tensor.dim() | Number of dimensions of a tensor | torch.Tensor.ndim — PyTorch 1.9.1 documentation |
| torch.eye(n_train) | Identity matrix | |
| torch.rand(), torch.randn() | | |
| torch.matmul() | Matrix multiplication; see: torch.matmul()用法介绍_杂文集-CSDN博客_torch.matmul | |
| torch.sort() | | |
| repeat_interleave(self: Tensor, repeats: _int, dim: Optional[_int]=None) | Repeats a tensor along the given dimension (see the sketch after this table). self: the input tensor; repeats: number of copies; dim: which dimension of self to repeat along (0/1/2, ...) | |
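A minimal sketch of repeat_interleave on a 2-D tensor (the values and repeat count here are chosen purely for illustration):

```python
import torch

x = torch.tensor([[1, 2],
                  [3, 4]])

# Repeat each row twice along dim 0.
print(x.repeat_interleave(2, dim=0))
# tensor([[1, 2],
#         [1, 2],
#         [3, 4],
#         [3, 4]])

# Repeat each element twice along dim 1.
print(x.repeat_interleave(2, dim=1))
# tensor([[1, 1, 2, 2],
#         [3, 3, 4, 4]])

# Without dim, the tensor is flattened first.
print(x.repeat_interleave(2))
# tensor([1, 1, 2, 2, 3, 3, 4, 4])
```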
tensor.unsqueeze(dim) returns a new tensor with a dimension of size one inserted at position dim:

```python
import torch

weights = torch.ones((2, 4)) * 0.1
print(weights.shape)
print(weights)
# -----------output-------------
# torch.Size([2, 4])
# tensor([[0.1000, 0.1000, 0.1000, 0.1000],
#         [0.1000, 0.1000, 0.1000, 0.1000]])

# unsqueeze returns a new tensor
weights_1 = weights.unsqueeze(1)
print(weights_1.shape)
print(weights_1)
# -------output----------
# torch.Size([2, 1, 4])
# tensor([[[0.1000, 0.1000, 0.1000, 0.1000]],
#         [[0.1000, 0.1000, 0.1000, 0.1000]]])

weights_2 = weights.unsqueeze(2)
print(weights_2.shape)
print(weights_2)
# ----------output--------------
# torch.Size([2, 4, 1])
# tensor([[[0.1000],
#          [0.1000],
#          [0.1000],
#          [0.1000]],
#         [[0.1000],
#          [0.1000],
#          [0.1000],
#          [0.1000]]])

# negative dim counts from the end
weights_2 = weights.unsqueeze(-2)  # same as unsqueeze(1); result is torch.Size([2, 1, 4])
weights_2 = weights.unsqueeze(-1)  # same as unsqueeze(2); result is torch.Size([2, 4, 1])
```
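A quick sanity check of this equivalence (a minimal sketch, reusing the weights tensor from above):

```python
import torch

weights = torch.ones((2, 4)) * 0.1

# For an n-dimensional tensor, unsqueeze accepts dim in [-(n+1), n];
# a negative dim maps to dim + n + 1, so here -2 == 1 and -1 == 2.
assert torch.equal(weights.unsqueeze(-2), weights.unsqueeze(1))
assert torch.equal(weights.unsqueeze(-1), weights.unsqueeze(2))
```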
Many loss functions take the reduce and size_average parameters. A loss function is normally computed per batch, so each batch produces its own per-element loss values; these two parameters control whether the function returns the element-wise loss matrix or some reduction of those losses (e.g. their mean).
```python
import numpy as np
import torch

# Deprecated spelling: torch.nn.MSELoss(reduce=True, size_average=True)
loss_func = torch.nn.MSELoss(reduction='mean')
input_tensor = torch.from_numpy(np.array([[1, 2], [3, 4]])).float()
target_tensor = torch.from_numpy(np.array([[2, 3], [3, 5]])).float()
# Variable is deprecated since PyTorch 0.4; plain tensors work directly.
loss = loss_func(input_tensor, target_tensor)
print(loss)
# output: tensor(0.7500)
```
If instead we use

```python
loss_func = torch.nn.MSELoss(reduction='none')
```

the output is the element-wise loss matrix:

```
tensor([[1., 1.],
        [0., 1.]])
```
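For completeness, a minimal sketch of the third option, reduction='sum', on the same tensors, along with how the legacy arguments map onto the new one:

```python
import torch

input_tensor = torch.tensor([[1., 2.], [3., 4.]])
target_tensor = torch.tensor([[2., 3.], [3., 5.]])

# reduction='sum' adds up the element-wise losses instead of averaging them.
# Legacy equivalents: reduce=False -> reduction='none';
# reduce=True with size_average=True -> 'mean', with size_average=False -> 'sum'.
loss = torch.nn.MSELoss(reduction='sum')(input_tensor, target_tensor)
print(loss)  # tensor(3.)
```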