Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes
Description
Digital imaging and image processing technologies have revolutionized the way in which
we capture, store, receive, view, utilize, and share images. In image-based applications, images pass through different processing stages (e.g., acquisition, compression, and transmission) and are subjected to different types of distortions that degrade their visual quality. Image
Quality Assessment (IQA) attempts to use computational models to automatically evaluate
and estimate image quality in accordance with subjective evaluations. Moreover, with the rapid development of computer vision techniques, it is important in practice to extract
and understand the information contained in blurred images or regions.
The work in this dissertation focuses on reduced-reference visual quality assessment of
images and textures, as well as perceptual-based spatially-varying blur detection.
A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed that requires only a very small number of reduced-reference (RR) features. Extensive experiments on several benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images.
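The abstract does not detail the feature-extraction pipeline, but the dissertation title points to a transform of gradient magnitudes. The Python sketch below is purely illustrative and assumes, as a hypothesis rather than the dissertation's actual design, that the RR features are a few summary statistics of block-DCT coefficients of the gradient-magnitude map; the function names, block size, and choice of statistics are placeholders.

import numpy as np
from scipy.ndimage import sobel
from scipy.fft import dct

def gradient_magnitude(img):
    # Per-pixel gradient magnitude from horizontal and vertical Sobel responses.
    img = img.astype(np.float64)
    return np.hypot(sobel(img, axis=1), sobel(img, axis=0))

def rr_features(img, block=8):
    # Hypothetical RR features: summary statistics of the block-DCT AC
    # coefficients of the gradient-magnitude map (not the dissertation's
    # actual feature set).
    gm = gradient_magnitude(img)
    h, w = gm.shape
    ac = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = gm[i:i + block, j:j + block]
            c = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
            c[0, 0] = 0.0                          # drop the DC coefficient
            ac.append(np.abs(c).ravel())
    ac = np.concatenate(ac)
    p = ac[ac > 0]
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p))              # spread of AC energy
    return np.array([ac.mean(), ac.std(), entropy])

def rr_quality_score(ref_features, distorted_img):
    # RR setting: only ref_features (three numbers here) travel with the image;
    # the receiver recomputes the same features from the distorted image and
    # uses the distance between the two vectors as a quality estimate.
    return float(np.linalg.norm(ref_features - rr_features(distorted_img)))

In such an RR setting, only the small feature vector needs to be transmitted alongside the image, which is what keeps the side-information overhead low.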
In the context of texture, the effect of texture granularity on the quality of synthesized
textures is studied. Moreover, two RR objective visual quality assessment methods that
quantify the perceived quality of synthesized textures are proposed. Performance evaluations
on two synthesized texture databases demonstrate that the proposed RR metrics outperform
full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in
predicting the perceived visual quality of the synthesized textures.
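Purely as an illustration, and not the metrics proposed in the dissertation, the sketch below shows how a reduced-reference comparison and a granularity measure could interact: the sender transmits a granularity proxy computed on the reference texture, and the receiver compares it against the same proxy computed on the synthesized texture. All names and thresholds are assumptions.

import numpy as np
from scipy.fft import fft2, ifft2

def granularity_proxy(texture):
    # Crude granularity proxy: the lag (in pixels) at which the normalized
    # autocorrelation drops below 0.5; coarser textures give larger lags.
    t = texture.astype(np.float64)
    t -= t.mean()
    acf = np.real(ifft2(np.abs(fft2(t)) ** 2))     # Wiener-Khinchin theorem
    acf /= acf[0, 0] + 1e-12                       # zero lag normalized to 1
    acf = np.fft.fftshift(acf)
    cy, cx = acf.shape[0] // 2, acf.shape[1] // 2
    for r in range(1, min(cy, cx)):
        if acf[cy, cx + r] < 0.5:
            return r
    return min(cy, cx)

def rr_texture_score(ref_proxy, synthesized):
    # RR-style comparison: only ref_proxy (a single number) is transmitted.
    return abs(ref_proxy - granularity_proxy(synthesized))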
Last but not least, an effective approach to address the spatially-varying blur detection
problem from a single image without requiring any knowledge about the blur type, level,
or camera settings is proposed. Evaluations of the proposed approach on a diverse set of blurry images with different blur types, levels, and content demonstrate that the
proposed algorithm performs favorably against the state-of-the-art methods qualitatively
and quantitatively.
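The abstract leaves the mechanism unstated; the sketch below only illustrates the general idea suggested by the title, namely that local transform coefficients of gradient magnitudes can separate sharp from blurred regions. It estimates a per-pixel blur map from the fraction of high-frequency DCT energy in a local gradient-magnitude patch; the window size, frequency split, and normalization are assumptions, not the proposed algorithm.

import numpy as np
from scipy.ndimage import sobel
from scipy.fft import dct

def blur_map(img, patch=16):
    # Per-pixel map in [0, 1]; low values indicate locally blurred content.
    img = img.astype(np.float64)
    gm = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    half = patch // 2
    padded = np.pad(gm, half, mode='reflect')
    score = np.zeros_like(gm)
    for i in range(gm.shape[0]):
        for j in range(gm.shape[1]):
            block = padded[i:i + patch, j:j + patch]
            c = np.abs(dct(dct(block, axis=0, norm='ortho'),
                           axis=1, norm='ortho'))
            c[0, 0] = 0.0                          # ignore the DC coefficient
            # Fraction of AC energy in the highest-frequency quadrant:
            # blur suppresses high frequencies, so this fraction drops.
            score[i, j] = c[half:, half:].sum() / (c.sum() + 1e-12)
    return (score - score.min()) / (score.max() - score.min() + 1e-12)

Thresholding such a map, or feeding it to a segmentation step, would give a sharp/blurred partition; any multiscale handling or refinement used in the dissertation is beyond this sketch.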