2017
DOI: 10.1109/jstsp.2016.2637164
Data-Driven Modules for Objective Visual Quality Assessment Focusing on Benchmarking and SLAs

Cited by 4 publications (2 citation statements); References 42 publications.
“…It is challenging to distinguish features representing impairments from features that are part of the source content, which is why NR metrics are in general not capable of predicting subjective video quality as accurately as state-of-the-art FR metrics. It has been shown that different distortion-specific metrics (e.g., blur, blockiness, and jerkiness) do not work well alone for NR assessment of global video quality, but they can be combined into a more accurate generic metric by machine learning [40]–[42]. Unfortunately, video quality metrics based on machine learning tend to be prone to irreproducibility, due to overfitting and the fact that the commonly used machine learning algorithms do not include a mechanism to ensure that different features are combined in a consistent manner [42].…”
Section: Objective Assessment of Packet Loss Artifacts
Confidence: 99%
“…It has been shown that different distortion-specific metrics (e.g., blur, blockiness, and jerkiness) do not work well alone for NR assessment of global video quality, but they can be combined into a more accurate generic metric by machine learning [40]–[42]. Unfortunately, video quality metrics based on machine learning tend to be prone to irreproducibility, due to overfitting and the fact that the commonly used machine learning algorithms do not include a mechanism to ensure that different features are combined in a consistent manner [42]. To make sure that a learning-based quality metric considers packet loss artifacts properly, the metric has to be trained using video sequences with similar artifacts.…”
Section: Objective Assessment of Packet Loss Artifacts
Confidence: 99%
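The citation statements above describe combining distortion-specific features (blur, blockiness, jerkiness) into a single generic quality metric via machine learning. A minimal sketch of that idea follows, using entirely synthetic feature data and ordinary least squares as the simplest stand-in for the learning algorithms surveyed in [40]–[42]; the feature ranges, weights, and MOS scale here are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-sequence distortion features: blur, blockiness, jerkiness.
# Each row is one video sequence; values are illustrative, not real data.
n = 200
features = rng.uniform(0.0, 1.0, size=(n, 3))

# Hypothetical "ground truth" subjective scores (MOS on a 1-5 scale):
# quality drops as each distortion grows; noise models rater variability.
true_weights = np.array([-1.2, -0.8, -1.5])
mos = 4.5 + features @ true_weights + rng.normal(0.0, 0.1, size=n)

# Combine the distortion-specific features into one generic metric by
# fitting a linear model to the subjective scores.
X = np.column_stack([np.ones(n), features])   # prepend an intercept column
coef, *_ = np.linalg.lstsq(X, mos, rcond=None)

predicted = X @ coef
correlation = np.corrcoef(predicted, mos)[0, 1]
print(f"Pearson correlation on training data: {correlation:.3f}")
```

Evaluating only on the training data, as above, always looks optimistic; the overfitting and irreproducibility problems the statements warn about only become visible on a held-out split of sequences with matching artifact types.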