Image Quality Assessment
IQA methods, based on my own research and survey.
Method overview
Image quality assessment methods are mainly split into two categories:
- Reference: requires a pristine (ground truth) image and a distorted image to calculate the quality score.
- Reference-less or blind: focused on processes where there is no access to the pristine image.
The main goal of both approaches is to predict a quality score that correlates well with human perception. Currently, most new algorithms focus on feature learning, which takes a hybrid approach: automatically learning quality-aware features and associating such features with a perceived quality score.
Common algorithms
MSE and PSNR
Both MSE and PSNR rely on reference images, i.e., both methods calculate the quality of a distorted image from the difference between the distorted image itself and the corresponding original clean reference image.
The most direct way to evaluate image quality is to compare the difference between the clean image and the distorted image, i.e., calculate the visibility of errors. For a clean image $I$ of size $m \times n$ and a distorted image $K$, the MSE (Mean Squared Error) is calculated as:

$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i, j) - K(i, j) \right]^2$$
Similarly, the PSNR (Peak Signal-to-Noise Ratio) for grayscale images is calculated as:

$$\mathrm{PSNR} = 10 \cdot \log_{10} \left( \frac{\mathrm{MAX}_I^2}{\mathrm{MSE}} \right)$$

Where $\mathrm{MAX}_I$ is the maximum possible pixel value of the image. (For instance: 255 for 8-bit images, $2^B - 1$ for $B$-bit images.) For colored images, we either ...
- ... calculate the PSNR for the 3 color channels and average them, or ...
- ... convert the image to YCbCr and calculate the PSNR of the Y channel only. (Which is the implementation of `skimage.metrics.peak_signal_noise_ratio()`; see the sketch below.)
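To make the formulas concrete, here is a minimal sketch, assuming NumPy and scikit-image are available; the grayscale test image (`skimage.data.camera()`) and the Gaussian noise level are purely illustrative, and the hand-computed values are cross-checked against `skimage.metrics`:

```python
import numpy as np
from skimage import data
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio

# Clean 8-bit grayscale reference and a synthetically distorted copy.
clean = data.camera().astype(np.float64)            # values in [0, 255]
rng = np.random.default_rng(0)
distorted = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)

# MSE: mean of the squared pixel-wise differences.
mse = np.mean((clean - distorted) ** 2)

# PSNR: 10 * log10(MAX_I^2 / MSE), with MAX_I = 255 for 8-bit images.
psnr = 10 * np.log10(255.0 ** 2 / mse)

# Cross-check against scikit-image's implementations.
assert np.isclose(mse, mean_squared_error(clean, distorted))
assert np.isclose(psnr, peak_signal_noise_ratio(clean, distorted, data_range=255))
print(f"MSE = {mse:.2f}, PSNR = {psnr:.2f} dB")
```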
Structural similarity (SSIM)
Simple comparison based on the pixel-wise difference of two images doesn't comply with how the Human Visual System perceives quality. To make the evaluation result better aligned with human perception, structural similarity (SSIM) was proposed as an objective image quality assessment method.
In SSIM, for samples $x$ and $y$, we consider the following three factors:
- Luminance: $l(x, y) = \dfrac{2 \mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}$
- Contrast: $c(x, y) = \dfrac{2 \sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}$
- Structure: $s(x, y) = \dfrac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3}$
Where, $\mu_x$ and $\mu_y$ are the means of $x$ and $y$ respectively, $\sigma_x^2$ and $\sigma_y^2$ are the variances of $x$ and $y$ respectively, and $\sigma_{xy}$ is the covariance of $x$ and $y$. To avoid zero division, $c_1 = (k_1 L)^2$, $c_2 = (k_2 L)^2$, and $c_3 = c_2 / 2$ are constants, where $L$ is the dynamic range of the pixel values and $k_1 = 0.01$, $k_2 = 0.03$ by default.
With this, we can calculate SSIM with:

$$\mathrm{SSIM}(x, y) = l(x, y)^\alpha \cdot c(x, y)^\beta \cdot s(x, y)^\gamma$$
When $\alpha$, $\beta$, $\gamma$ are all 1, we get:

$$\mathrm{SSIM}(x, y) = \frac{(2 \mu_x \mu_y + c_1)(2 \sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
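As a sanity check of this closed form, below is a small NumPy sketch of the single-window (global-statistics) SSIM; the function name `global_ssim` and the random test images are illustrative only. Note that practical implementations, including scikit-image's, average the index over local sliding windows, so this global value will generally differ from library output:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM computed from the global statistics of x and y."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Identical images score 1; heavier distortion pushes the score toward 0.
img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(img + np.random.default_rng(1).normal(0, 25, img.shape), 0, 255)
print(global_ssim(img, img), global_ssim(img, noisy))
```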
We can calculate SSIM with `skimage.metrics.structural_similarity()`.
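A usage sketch, again assuming scikit-image; the noisy test image is illustrative, `data_range` is passed explicitly because the inputs are floats, and the library's `K1`/`K2` parameters default to 0.01 and 0.03, matching $k_1$ and $k_2$ above. For color images, passing `channel_axis=-1` (scikit-image ≥ 0.19) would be the usual route:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import structural_similarity

# Float reference in [0, 1] and a noisy variant (illustrative setup).
clean = img_as_float(data.camera())
rng = np.random.default_rng(0)
distorted = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)

# Mean SSIM over local windows; data_range is given explicitly for float inputs.
score = structural_similarity(clean, distorted, data_range=1.0)

# full=True also returns the per-pixel SSIM map, handy for localizing distortions.
score, ssim_map = structural_similarity(clean, distorted, data_range=1.0, full=True)
print(f"SSIM = {score:.3f}, map shape = {ssim_map.shape}")
```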
References and articles
Referred in
- cw-algorithm
  - This paradigm makes CW attack and its variants capable of being integrated with many other image quality metrics like the PSNR or the SSIM - image-quality-assessment.
- adversarial-texture-optimization
  - We cannot use standard image quality metrics such as MSE, PSNR or SSIM - image-quality-assessment, as they assume perfect alignment between the target and the ground truth.
- perceptual-similarity
  - The paper argues that widely used image quality metrics like SSIM and PSNR mentioned in image-quality-assessment are simple and shallow functions that may fail to account for many nuances of human perception. The paper introduces a new dataset of human perceptual similarity judgments to systematically evaluate deep features across different architectures and tasks and compare them with classic metrics.
- robust-adversarial-perturbation
  - PSNR (Peak Signal to Noise Ratio) is employed to evaluate the distortion of the adversarial perturbation, because it is an approximation of human perception of image quality. (The higher, the better - image-quality-assessment)
- readme