Note: the book's explanation of the code is not very detailed, so this post adds detailed comments on many of the finer points. Also, the book's source code is meant to be run in Jupyter Notebook and is scattered across cells; here the code is collected in one place and fleshed out, and all of it has been tested with VS Code under Python 3.9.18. Some of the book's sections have also been consolidated.
Chapter 7: Modern Convolutional Neural Networks
7.2 Networks Using Blocks (VGG)

import matplotlib.pyplot as plt
import torch
from torch import nn
from d2l import torch as d2l

def vgg_block(num_convs, in_channels, out_channels):
    # One VGG block: num_convs 3x3 convolutions (padding 1), each followed by a
    # ReLU, then a 2x2 max-pool with stride 2 that halves height and width
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels,
                                kernel_size=3, padding=1))
        layers.append(nn.ReLU())
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)  # *layers unpacks the list as variadic arguments
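# --- Optional sanity check (my own addition, not from the book) ---
# With padding=1 a 3x3 convolution keeps the spatial size and the 2x2 max-pool
# halves it, so a single block maps a (1, 1, 224, 224) input to (1, 64, 112, 112):
# blk = vgg_block(1, 1, 64)
# print(blk(torch.rand(1, 1, 224, 224)).shape)  # torch.Size([1, 64, 112, 112])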
# Number of convolutional layers and output channels of each VGG block
conv_arch = ((1, 64), (1, 128), (2, 256), (2, 512), (2, 512))

def vgg(conv_arch):
    conv_blks = []
    in_channels = 1  # Fashion-MNIST images have a single channel
    # The convolutional part
    for (num_convs, out_channels) in conv_arch:
        conv_blks.append(vgg_block(num_convs, in_channels, out_channels))
        in_channels = out_channels
    return nn.Sequential(
        *conv_blks, nn.Flatten(),
        # The fully connected part; after the convolutional blocks the spatial
        # dimensions of a 224x224 input have been reduced to 7x7
        nn.Linear(out_channels * 7 * 7, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4096, 10))

net = vgg(conv_arch)
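# conv_arch gives 1 + 1 + 2 + 2 + 2 = 8 convolutional layers; together with the
# 3 fully connected layers above, these are the 11 weight layers of VGG-11.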
# Feed a dummy single-channel 224x224 input through the network to inspect shapes
X = torch.randn(size=(1, 1, 224, 224))
for blk in net:
    X = blk(X)
    print(blk.__class__.__name__, 'output shape:\t', X.shape)
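# Each of the 5 VGG blocks halves the height and width, so 224 shrinks to
# 224 / 2**5 = 7; the last block outputs 512 channels, which is why the first
# Linear layer after nn.Flatten() expects 512 * 7 * 7 = 25088 input features.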
# VGG-11 is computationally heavier than AlexNet, so build a narrower network
# with a quarter of the channels in every block
ratio = 4
small_conv_arch = [(pair[0], pair[1] // ratio) for pair in conv_arch]
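# With ratio = 4 the architecture becomes
# [(1, 16), (1, 32), (2, 64), (2, 128), (2, 128)]: the depth is unchanged,
# only the channel counts (and hence parameters and computation) shrink.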
net = vgg(small_conv_arch)

# Training
lr, num_epochs, batch_size = 0.05, 10, 128
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=224)
d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
plt.show()

Training results: (loss and accuracy curves produced by d2l.train_ch6; figure omitted)
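If the d2l package is not available, the two d2l utilities used above can be replaced with plain PyTorch/torchvision code. The sketch below is a minimal substitute of my own (the names load_fashion_mnist and train are hypothetical, not from the book); it assumes torchvision is installed, skips d2l's live plotting, and simply prints the metrics after each epoch.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def load_fashion_mnist(batch_size, resize=224):
    # Resize the 28x28 Fashion-MNIST images to 224x224, the input size VGG expects
    trans = transforms.Compose([transforms.Resize(resize), transforms.ToTensor()])
    train_ds = datasets.FashionMNIST(root='./data', train=True, transform=trans, download=True)
    test_ds = datasets.FashionMNIST(root='./data', train=False, transform=trans, download=True)
    return (DataLoader(train_ds, batch_size, shuffle=True),
            DataLoader(test_ds, batch_size, shuffle=False))

def train(net, train_iter, test_iter, num_epochs, lr, device):
    net.to(device)
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=lr)
    for epoch in range(num_epochs):
        net.train()
        total_loss, total_correct, total = 0.0, 0, 0
        for X, y in train_iter:
            X, y = X.to(device), y.to(device)
            optimizer.zero_grad()
            y_hat = net(X)
            loss = loss_fn(y_hat, y)
            loss.backward()
            optimizer.step()
            total_loss += loss.item() * y.numel()
            total_correct += (y_hat.argmax(dim=1) == y).sum().item()
            total += y.numel()
        # Evaluate on the test set after each epoch
        net.eval()
        test_correct, test_total = 0, 0
        with torch.no_grad():
            for X, y in test_iter:
                X, y = X.to(device), y.to(device)
                test_correct += (net(X).argmax(dim=1) == y).sum().item()
                test_total += y.numel()
        print(f'epoch {epoch + 1}: loss {total_loss / total:.3f}, '
              f'train acc {total_correct / total:.3f}, '
              f'test acc {test_correct / test_total:.3f}')

# Usage, mirroring the d2l calls above:
# train_iter, test_iter = load_fashion_mnist(batch_size, resize=224)
# train(net, train_iter, test_iter, num_epochs, lr,
#       torch.device('cuda' if torch.cuda.is_available() else 'cpu'))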
References: Simonyan, K., & Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR. (The original VGG paper.)