**Note:**
``torch.nn`` only supports mini-batches. The entire ``torch.nn``
package only supports inputs that are a mini-batch of samples, and not
a single sample.
For example, ``nn.Conv2d`` will take in a 4D Tensor of
``nSamples x nChannels x Height x Width``.
If you have a single sample, just use ``input.unsqueeze(0)`` to add
a fake batch dimension.
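As a minimal sketch of the point above (the layer sizes here are just for illustration): a 3D single-sample tensor fails the 4D expectation of ``nn.Conv2d`` until we add a fake batch dimension with ``unsqueeze(0)``.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 6, 3)        # expects 4D input: nSamples x nChannels x Height x Width
single = torch.randn(1, 32, 32)  # a single sample: nChannels x Height x Width (only 3D)

batched = single.unsqueeze(0)    # add a fake batch dimension -> shape (1, 1, 32, 32)
out = conv(batched)
print(batched.shape)             # torch.Size([1, 1, 32, 32])
print(out.shape)                 # torch.Size([1, 6, 30, 30])
```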
Before proceeding further, let's recap all the classes you’ve seen so far.
**Recap:**
- ``torch.Tensor`` - A *multi-dimensional array* with support for autograd
operations like ``backward()``. Also *holds the gradient* w.r.t. the
tensor.
- ``nn.Module`` - Neural network module. *Convenient way of encapsulating
  model parameters*, with helpers for moving them between CPU and GPU,
  loading and exporting models, etc.
- ``nn.Parameter`` - A kind of Tensor, that is *automatically
registered as a parameter when assigned as an attribute to a*
``Module``.
- ``autograd.Function`` - Implements *forward and backward definitions of
  an autograd operation*. Every ``Tensor`` operation creates at least a
  single ``Function`` node, that connects to functions that created a
  ``Tensor`` and *encodes its history*.
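To make the ``nn.Parameter`` point above concrete, here is a small sketch (the module name and shapes are hypothetical): assigning an ``nn.Parameter`` as an attribute registers it with the module, while a plain ``Tensor`` attribute is not registered.

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        # assigned as an attribute -> automatically registered as a parameter
        self.weight = nn.Parameter(torch.randn(3, 3))
        # a plain Tensor attribute is NOT registered as a parameter
        self.not_a_param = torch.randn(3, 3)

m = Tiny()
print([name for name, _ in m.named_parameters()])  # ['weight']
```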
**At this point, we covered:**
- Defining a neural network
- Processing a forward pass and calling ``backward`` for backpropagation
**Still left:**
- Computing the loss
- Updating the weights of the network
Loss Function
-------------
A loss function takes the (output, target) pair of inputs and computes a
value that estimates how far the network's actual output is from the
desired target.
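For example, mean squared error (``nn.MSELoss``) compares an output against a target of the same shape; the shapes below are just for illustration.

```python
import torch
import torch.nn as nn

output = torch.randn(1, 10, requires_grad=True)  # e.g. a network's output
target = torch.randn(1, 10)                      # the desired values

criterion = nn.MSELoss()          # mean squared error between output and target
loss = criterion(output, target)
print(loss.item())                # a single scalar value
```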
PyTorch's ``torch.nn`` package provides several different `loss functions