Building a One-Dimensional BRDNet

Since additional comparison experiments were needed, I reproduced the BRDNet denoising network and modified it into a one-dimensional signal denoising network.

  • Network architecture

The BRDNet architecture consists of two different subnetworks: an upper network and a lower network. The upper network is built only from residual learning (RL) and batch renormalization (BRN). The lower network combines BRN, RL, and dilated convolutions.

It is well known that a larger receptive field comes with a higher computational cost, so only one of the two subnetworks (the lower one) uses dilated convolutions. To balance performance against efficiency, layers 2-8 and 10-15 of the lower network use dilated convolutions to capture more contextual information, while layers 1, 9, and 16 use BRN to normalize the data, keeping the outputs of the two subnetworks in the same distribution. BRN is also very effective for small-batch training, which matters on low-end hardware platforms (e.g., GTX 960 and GTX 970). Finally, residual learning is fused into both branches to improve performance.
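To make the receptive-field trade-off concrete, here is a minimal sketch (the helper receptive_field is my own naming, not from BRDNet) that computes the 1D receptive field of a stack of stride-1, kernel-size-3 convolutions; the layer layouts mirror the two branches built in the code below:

def receptive_field(dilations, kernel_size=3):
    # Each stride-1 layer adds (kernel_size - 1) * dilation samples of context.
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Lower branch: layers 1, 9, 16, 17 are plain (dilation 1),
# layers 2-8 and 10-15 are dilated (dilation 2).
print(receptive_field([1] + [2] * 7 + [1] + [2] * 6 + [1, 1]))  # 61
# Upper branch: 17 plain convolutions at the same depth.
print(receptive_field([1] * 17))  # 35

At the same depth, the dilated branch sees nearly twice the context per output sample, which is exactly why it is confined to one of the two subnetworks.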

The full code is as follows:

import torch
import torch.nn as nn


class BRDNet(nn.Module):
    def __init__(self, channels, num_of_layers=15):
        super(BRDNet, self).__init__()
        kernel_size = 3
        padding = 1
        features = 64
        groups = 1

        # Upper branch: Conv1d + BatchNorm1d + ReLU throughout. BatchNorm1d
        # stands in for BRN here, since PyTorch has no built-in batch
        # renormalization layer.
        layers = []
        layers.append(nn.Conv1d(in_channels=channels, out_channels=features, kernel_size=kernel_size,
                                padding=padding, bias=False))
        layers.append(nn.BatchNorm1d(features))
        layers.append(nn.ReLU(inplace=True))
        for _ in range(num_of_layers):
            layers.append(nn.Conv1d(in_channels=features, out_channels=features, kernel_size=kernel_size,
                                    padding=padding, bias=False))
            layers.append(nn.BatchNorm1d(features))
            layers.append(nn.ReLU(inplace=True))
        layers.append(nn.Conv1d(in_channels=features, out_channels=channels, kernel_size=kernel_size,
                                padding=padding, bias=False))

        # Lower branch: normalization only at layers 1, 9, and 16; dilated
        # convolutions (dilation=2 with padding=2 keeps the signal length
        # unchanged) at layers 2-8 and 10-15.
        L = []
        L.append(nn.Conv1d(in_channels=channels, out_channels=features, kernel_size=kernel_size,
                           padding=padding, bias=False))
        L.append(nn.BatchNorm1d(features))
        L.append(nn.ReLU(inplace=True))
        for _ in range(7):  # layers 2-8: dilated convolutions
            L.append(nn.Conv1d(in_channels=features, out_channels=features, kernel_size=kernel_size,
                               padding=2, groups=groups, bias=False, dilation=2))
            L.append(nn.ReLU(inplace=True))
        L.append(nn.Conv1d(in_channels=features, out_channels=features, kernel_size=kernel_size,
                           padding=padding, bias=False))
        L.append(nn.BatchNorm1d(features))
        L.append(nn.ReLU(inplace=True))
        for _ in range(6):  # layers 10-15: dilated convolutions
            L.append(nn.Conv1d(in_channels=features, out_channels=features, kernel_size=kernel_size,
                               padding=2, groups=groups, bias=False, dilation=2))
            L.append(nn.ReLU(inplace=True))
        L.append(nn.Conv1d(in_channels=features, out_channels=features, kernel_size=kernel_size,
                           padding=padding, bias=False))
        L.append(nn.BatchNorm1d(features))
        L.append(nn.ReLU(inplace=True))
        L.append(nn.Conv1d(in_channels=features, out_channels=channels, kernel_size=kernel_size,
                           padding=padding, bias=False))

        self.BRDNet_first = nn.Sequential(*layers)
        self.BRDNet_second = nn.Sequential(*L)
        # Fusion layer: maps the concatenated branch outputs back to the
        # input channel count.
        self.conv1 = nn.Sequential(
            nn.Conv1d(in_channels=channels * 2, out_channels=channels, kernel_size=kernel_size,
                      padding=padding, groups=groups, bias=False))

    def forward(self, x):
        # Each branch predicts the noise; subtracting it from the input x
        # is the residual learning (RL) step.
        out1 = x - self.BRDNet_first(x)
        out2 = x - self.BRDNet_second(x)
        out = torch.cat([out1, out2], 1)  # concatenate the two denoised estimates
        out = self.conv1(out)             # fuse; conv1 again predicts residual noise
        return x - out
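
As a quick sanity check, a dummy batch can be pushed through the network (the batch size, signal length, and noise level below are arbitrary choices for illustration):

if __name__ == '__main__':
    model = BRDNet(channels=1)
    clean = torch.randn(4, 1, 1024)                # 4 single-channel signals
    noisy = clean + 0.1 * torch.randn_like(clean)  # additive Gaussian noise
    denoised = model(noisy)
    print(denoised.shape)  # torch.Size([4, 1, 1024])

Because every convolution in both branches pads to preserve length, the output has exactly the same shape as the input signal.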