How to get rid of checkerboard artifacts


Hello fellow coders,

I am using a fully convolutional autoencoder to colorize black-and-white images; however, the output has a checkerboard pattern and I want to get rid of it. The checkerboard artifacts I have seen so far have always been far smaller than mine, and the usual way to get rid of them is to replace all unpooling operations with bilinear upsampling (or so I have been told).

But I cannot simply replace the unpooling operations, because I work with images of varying sizes: the unpooling operation is needed so that the output tensor is guaranteed to have the same size as the original input.

TLDR:

How can I get rid of these checkerboard artifacts without replacing the unpooling operations?

import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self):
        super(AE, self).__init__()
        self.leaky_reLU = nn.LeakyReLU(0.2)
        # pooling returns the argmax indices so the decoder can unpool exactly
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=1, return_indices=True)
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2, padding=1)
        self.softmax = nn.Softmax2d()

        self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=3, stride=1, padding=1)
        self.conv6 = nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.conv7 = nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.conv8 = nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.conv9 = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.conv10 = nn.ConvTranspose2d(in_channels=64, out_channels=2, kernel_size=3, stride=1, padding=1)

    def forward(self, x):

        # encoder
        x = self.conv1(x)
        x = self.leaky_reLU(x)
        size1 = x.size()
        x, indices1 = self.pool(x)

        x = self.conv2(x)
        x = self.leaky_reLU(x)
        size2 = x.size()
        x, indices2 = self.pool(x)

        x = self.conv3(x)
        x = self.leaky_reLU(x)
        size3 = x.size()
        x, indices3 = self.pool(x)

        x = self.conv4(x)
        x = self.leaky_reLU(x)
        size4 = x.size()
        x, indices4 = self.pool(x)

        ######################
        x = self.conv5(x)
        x = self.leaky_reLU(x)

        x = self.conv6(x)
        x = self.leaky_reLU(x)
        ######################

        # decoder
        x = self.unpool(x, indices4, output_size=size4)
        x = self.conv7(x)
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices3, output_size=size3)
        x = self.conv8(x)
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices2, output_size=size2)
        x = self.conv9(x)
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices1, output_size=size1)
        x = self.conv10(x)
        x = self.softmax(x)

        return x


EDIT - Solution:

Skip connections are the way to go!

import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self):
        super(AE, self).__init__()
        self.leaky_reLU = nn.LeakyReLU(0.2)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=1, return_indices=True)
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2, padding=1)
        self.softmax = nn.Softmax2d()

        self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=3, stride=1, padding=1)
        self.conv6 = nn.Conv2d(in_channels=1024, out_channels=512, kernel_size=3, stride=1, padding=1)
        self.conv7 = nn.Conv2d(in_channels=1024, out_channels=256, kernel_size=3, stride=1, padding=1)
        self.conv8 = nn.Conv2d(in_channels=512, out_channels=128, kernel_size=3, stride=1, padding=1)
        self.conv9 = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.conv10 = nn.Conv2d(in_channels=128, out_channels=2, kernel_size=3, stride=1, padding=1)

    def forward(self, x):

        # encoder
        x = self.conv1(x)
        out1 = self.leaky_reLU(x)
        x = out1
        size1 = x.size()
        x, indices1 = self.pool(x)

        x = self.conv2(x)
        out2 = self.leaky_reLU(x)
        x = out2
        size2 = x.size()
        x, indices2 = self.pool(x)

        x = self.conv3(x)
        out3 = self.leaky_reLU(x)
        x = out3
        size3 = x.size()
        x, indices3 = self.pool(x)

        x = self.conv4(x)
        out4 = self.leaky_reLU(x)
        x = out4
        size4 = x.size()
        x, indices4 = self.pool(x)

        ######################
        x = self.conv5(x)
        x = self.leaky_reLU(x)

        x = self.conv6(x)
        x = self.leaky_reLU(x)
        ######################

        # decoder
        x = self.unpool(x, indices4, output_size=size4)
        x = self.conv7(torch.cat((x, out4), 1))  # skip connection: concatenate encoder features along channels
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices3, output_size=size3)
        x = self.conv8(torch.cat((x, out3), 1))
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices2, output_size=size2)
        x = self.conv9(torch.cat((x, out2), 1))
        x = self.leaky_reLU(x)

        x = self.unpool(x, indices1, output_size=size1)
        x = self.conv10(torch.cat((x, out1), 1))
        x = self.softmax(x)

        return x

Skip connections are commonly used in encoder-decoder architectures; they help produce accurate results by passing appearance information from the shallow layers of the encoder to the corresponding deeper layers of the decoder. UNet is the most widely used architecture of this type. LinkNet is also very popular, and it differs from UNet in how it fuses the encoder features with the decoder features: in UNet, the incoming encoder features are concatenated with the corresponding decoder layer, whereas LinkNet adds them element-wise, which requires fewer operations per forward pass and makes it significantly faster than UNet.

Each convolution block in your decoder might then look like the following (a minimal sketch of one UNet-style block, matching the solution code above; the commented-out LinkNet variant fuses by addition instead):
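x = self.unpool(x, indices4, output_size=size4)   # undo the matching max-pool exactly
x = self.conv7(torch.cat((x, out4), 1))           # UNet-style: concatenate the skip connection along channels
x = self.leaky_reLU(x)

# LinkNet-style variant: add instead of concatenate
# (conv7 would then take 512 input channels rather than 1024):
# x = self.unpool(x, indices4, output_size=size4)
# x = self.conv7(x + out4)
# x = self.leaky_reLU(x)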

Additionally, I'm attaching a figure below depicting the architectures of UNet and LinkNet. I hope using skip connections will help.

[Figure: UNet vs. LinkNet encoder-decoder architectures]

The Distill article "Deconvolution and Checkerboard Artifacts" discusses one of the causes of these checkerboard artifacts. You can fix the issue by switching from "deconvolution" to nearest-neighbor upsampling followed by a regular convolution. I think @SsnL may have implemented this at some point. We've also noticed that sometimes the checkerboard artifacts go away if you simply train long enough.
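For illustration, a minimal sketch of that nearest-neighbor resize convolution as a drop-in module (the name ResizeConv and the channel arguments are illustrative, not from the thread):

import torch.nn as nn

# Upsample with a fixed, parameter-free resize, then apply a learnable
# convolution; this avoids the uneven kernel overlap of a strided
# nn.ConvTranspose2d, which is what produces the checkerboard pattern.
class ResizeConv(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode='nearest')
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.conv(self.upsample(x))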


Instead of using an upconv layer such as nn.ConvTranspose2d, you can use interpolation in the decoder to get back to your initial resolution, e.g. torch.nn.functional.interpolate. This prevents checkerboard artifacts.

If you want learnable weights in the decoder, you should also use a conv layer such as nn.Conv2d after each interpolation.
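For example, a minimal sketch of one decoder step in this style, assuming the encoder sizes (size1 ... size4) are recorded as in the question, so that variable-sized inputs still reconstruct to their exact original shape:

import torch.nn.functional as F

# Interpolate to the exact spatial size saved in the encoder
# (works for arbitrary input sizes), then convolve with learnable weights.
x = F.interpolate(x, size=size4[2:], mode='nearest')
x = self.conv7(x)
x = self.leaky_reLU(x)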

From GitHub issue #190 ("any suggestion on how to get rid of the checkered effect"): using nn.Upsample(scale_factor=2, mode='bilinear'), followed by a regular convolution as elsewhere in this thread, removes the checkerboard artifacts during early training. You may find it useful.
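A minimal sketch of what such a pairing might look like in an nn.Sequential (the channel counts are placeholders, not from the issue):

import torch.nn as nn

# Bilinear upsampling (fixed) followed by a learnable 3x3 convolution;
# a common replacement for a stride-2 nn.ConvTranspose2d(128, 64, ...).
up_block = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
    nn.Conv2d(128, 64, kernel_size=3, stride=1, padding=1),
)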


You get this pattern because of the deconvolution (nn.ConvTranspose2d). The Distill article linked above explains it in detail.

You may try nn.Upsample as an alternative; it does not produce the checkerboard pattern.

Works like this:

import torch

# nearest-neighbor upsampling duplicates each pixel into a 2x2 block
input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
m = torch.nn.Upsample(scale_factor=2, mode='nearest')
m(input)
# tensor([[[[1., 1., 2., 2.],
#           [1., 1., 2., 2.],
#           [3., 3., 4., 4.],
#           [3., 3., 4., 4.]]]])

However, you will not be able to learn anything with Upsample itself: it is a fixed transform with no parameters, so there is a trade-off, which is why it is usually paired with a regular convolution as in the sketches above. There are many papers online on how to deal with the checkerboard pattern in different problems.

The idea is to design and train your network so that the checkerboard pattern is gone.

Related reading:

- "Deconvolution and Checkerboard Artifacts" (Distill) - the article referenced above; GANs in particular have been shown to produce checkerboard artifacts depending on the upsampling method used.
- "Checkerboard artifact free sub-pixel convolution" - shows that the deconvolution layer can be described and implemented in various ways, matching the resize convolution while also removing checkerboard artifacts after initialization.
- "Checkerboard artifacts free convolutional neural networks" - demonstrates that checkerboard artifacts can be perfectly avoided by using resize operations, including those produced when the operations of deconvolution layers are carried out on the backward pass.