Getting Crisper Images with Manual Resizing

Annmay Sharma
3 min read · Jun 7, 2021

When training vision models, it is common to resize images to a lower resolution ((224 x 224), (299 x 299), etc.) to allow mini-batch learning and to stay within compute limitations. This improves efficiency and typically leads to better results.

Another idea, which we will explore today, is that rescaling can be used to reduce blur and make an image look crisper. When we downscale an image, we not only reduce the blur but also lose small details: since the pixel size stays the same, anything that becomes smaller than one pixel vanishes from view, absorbed into the grey value of that pixel. So we are not just rescaling, we are also averaging over the smallest details.

Ideally, we would like this to be a lossless process wherein we do not lose any of the data. However, when we scale the image manually with NumPy, as we do below, there is significant information loss.
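For comparison, a library resize interpolates rather than simply dropping or copying pixels. Below is a minimal sketch of that conventional route using cv2.resize, assuming the same cube.png image used in the walkthrough later in this post; it is not the method this article explores, just a reference point.

# Sketch of the conventional approach with cv2.resize (assumes "cube.png" exists on disk).
import cv2

img = cv2.imread("cube.png", cv2.IMREAD_UNCHANGED)
h, w = img.shape[:2]

# INTER_AREA averages pixels when shrinking, INTER_LINEAR interpolates when enlarging,
# so less information is discarded than with plain dropping/replication.
small = cv2.resize(img, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)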

Manual Scaling

Let us first understand how we manually scale the image:

We treat the images as NumPy arrays, and hence we can manipulate the rows and columns directly. To upscale the image, we simply replicate rows or columns, whereas to downscale it, we drop (merge) them.
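As a compact sketch of that idea (assuming img is an H x W x C NumPy array with even height and width), the whole round trip can be written with slicing and repeat; the numbered steps below do the same thing explicitly, one axis at a time.

# Downscale by keeping every other row and column (the "merge by dropping" step).
small = img[::2, ::2, :]

# Upscale by duplicating each row and column back to the original size.
restored = small.repeat(2, axis=0).repeat(2, axis=1)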

Let’s try to improve the image quality by resizing!

1. Import all the necessary libraries

import numpy as np
import cv2
from matplotlib import pyplot as plt

2. Read the image using cv2 (with the cv2.IMREAD_UNCHANGED flag, which is -1)

img = cv2.imread("cube.png", -1)

3. We will now downscale the width of the image and store the result with half the width. This is done by keeping every other column and omitting the rest.

height, width, channels = img.shape
resized_img_width = np.zeros((height, width//2, channels), dtype=np.int32)
for r in range(height):
    for c in range(width//2):
        resized_img_width[r][c] += img[r][2*c]

We store the image with half the width of the original in resized_img_width.

4. Similarly, let's downscale the height to get an image with exactly half the dimensions of the original.

resized_img = np.zeros((height//2, width//2, channels), dtype=np.int32)
for r in range(height//2):
    for c in range(width//2):
        resized_img[r][c] += resized_img_width[r*2][c]

5. Now let us upscale the image, starting with the height. This is done by replicating each row so that we get an image that is twice the height of the downscaled image (i.e., the original height).

half_upscaled_img = np.zeros((height, width//2, channels), dtype=np.int32)
half_upscaled_img[0:height:2, :, :] = resized_img[:, :, :]
half_upscaled_img[1:height:2, :, :] = resized_img[:, :, :]

6. Similarly, we can upscale the width by replicating each column, returning the image to its original width.

upscaled_img = np.zeros((height, width, channels), dtype=np.int32)
upscaled_img[:, 0:width:2, :] = half_upscaled_img[:, :, :]
upscaled_img[:, 1:width:2, :] = half_upscaled_img[:, :, :]

7. Compare the two images using matplotlib.pyplot

f = plt.figure(figsize=(15, 15))
f.add_subplot(1, 2, 1).set_title('Original Image')
plt.imshow(img[:, :, ::-1])
f.add_subplot(1, 2, 2).set_title('Upscaled image post downscaling')
plt.imshow(upscaled_img[:, :, ::-1])
plt.show()

Voilà! We get a crisper image, since we have removed the smaller details.

Beware: this type of resizing can lead to a lot of information loss, as we omit half of the pixels in each dimension while downscaling.
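If you want to keep more of the information, one option (not what the code above does) is to average each 2 x 2 block of pixels instead of dropping them when downscaling. A minimal sketch, reusing the height, width and channels variables from above and assuming both dimensions are even:

# Average each 2x2 block instead of discarding pixels (assumes even height and width).
blocks = img.reshape(height // 2, 2, width // 2, 2, channels)
averaged_small = blocks.mean(axis=(1, 3)).astype(np.uint8)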
