ABSTRACT:
Objective assessment of multimedia quality using machine learning (ML) has been gaining popularity, both in traditional broadcast services (e.g., terrestrial and satellite broadcast) and in more advanced services such as over-the-top media and IPTV. Being data-driven, these methods rely on training to find the optimal model parameters. To statistically compare and validate such ML-based quality predictors, the current approach randomly splits the given data into training and test sets and obtains a performance measure (for instance, mean squared error or a correlation coefficient). The process is repeated a large number of times, and parametric tests (e.g., the t-test) are then employed to statistically compare mean (or median) prediction accuracies. However, the current approach suffers from a few limitations that have not been addressed in the literature: the qualitative aspects of the training and testing data, the use of an improper sample size for statistical testing, possibly dependent sample observations, and a lack of focus on quantifying the learning ability of the ML-based objective quality predictor. The main goal of this paper is therefore to shed light on these limitations from both practical and theoretical perspectives, wherever applicable, and in the process to propose an alternate approach that overcomes some of them. As a major advantage, the proposed guidelines not only enable a theoretically better-grounded statistical comparison but also provide useful insight into how well ML-based objective quality predictors exploit data structure for learning. We demonstrate the added value of the proposed guidelines on standard datasets by comparing the performance of a few existing ML-based quality estimators. A software implementation of the presented guidelines is also made publicly available, enabling researchers and developers to test and compare different models in a repeatable manner.
EXISTING SYSTEM:
The use of ML for objective quality estimation is particularly suitable for broadcast applications, where the quality of the transmitted or received content must be assessed objectively from limited signal information. Not surprisingly, ML has been exploited for this purpose in the past. One of the first comprehensive methods for estimating the quality of MPEG video streams was based on circular back-propagation neural networks. A no-reference method was later presented that maps frame-level features into a spatial quality score, followed by temporal pooling. Another method extracts features from the discrete cosine transform (DCT) coefficients of each decoded frame in a video sequence and predicts quality using a neural network. A further ML-based video quality estimator trained a symbolic-regression framework on a set of features extracted from the received video bit stream. A related estimator works on a similar principle, analysing several features such as the type of codec used (MPEG or H.264/AVC), DCT coefficients, and an estimate of the quantization level used in the I-frames; support vector regression is then applied to predict video quality. Another objective quality estimator was based on a polynomial regression model whose independent variables (features) were derived from video spatiotemporal complexity, bitrate, and packet-loss measurements. Other existing works have employed deep learning (deep belief networks) together with bit-stream-specific features to predict quality objectively in a video transmission network, and deep learning has also been applied to quality measurement in live video streaming.
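As a concrete illustration of the feature-based regression approaches surveyed above, the following is a minimal sketch of a support-vector-regression quality predictor. The feature set and data here are synthetic placeholders (hypothetical DCT energy, quantization level, bitrate, and packet-loss values), not the published feature sets of any particular method.

```python
# Hedged sketch of an SVR-based quality predictor; features and
# subjective scores below are randomly generated placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical per-sequence features: mean DCT energy, estimated
# quantization level, bitrate (kbps), packet-loss rate.
X = rng.random((200, 4))
# Synthetic subjective scores (MOS on a 1-5 scale) for illustration.
y = 1 + 4 * rng.random(200)

# Standardize features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
model.fit(X, y)
predicted_mos = model.predict(X[:5])
print(predicted_mos.shape)  # (5,)
```

In practice the features would be extracted from the bit stream or decoded frames, and the model would be trained against subjective scores collected for the same sequences.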
Moreover, promising results from related disciplines such as computer vision and the availability of required hardware (e.g., GPU-accelerated computing) have opened up possibilities of developing efficient ML based implementations of quality predictors.
PROPOSED SYSTEM:
As far as the statistical comparison and validation of ML-based quality predictors is concerned, the current approach is based on repeated, random splits of the data (i.e., predictions from the ML-based methods and the corresponding subjective scores for the given multimedia content) into training and test sets. In each iteration, a performance measure (such as mean squared error or a correlation coefficient) is obtained. The means (or, in some cases, medians) of these repeated performance measures are then statistically compared across the ML-based estimators via pairwise t-tests. However, because training is involved, the current approach needs to be examined more closely in terms of the factors that can affect the validation process: the qualitative aspects of the training and testing data, the appropriate sample size when splitting the given data into training and test sets, the issue of possibly dependent sample observations, and the analysis of the learning ability of the method. (These issues are not relevant to the statistical comparison of non-ML-based predictors, since no training is involved and the question of a train-test split typically does not arise.) A survey of the literature on ML-based quality estimation for video, including standardized recommendations, reveals that these important issues have not been thoroughly examined from either a theoretical or a practical viewpoint, although a few works have considered the practical implications of the first issue regarding the qualitative aspects of training and testing data (related work also exists on the statistical comparison of classifiers and the analysis of their learning ability). Therefore, the main aim of the paper is to shed light on these factors and, in the process, to present a set of new guidelines that overcome the drawbacks of the current approach.
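The "current approach" described above can be sketched as follows with standard-library Python only. The predictors and subjective scores are synthetic placeholders; a real evaluation would retrain each ML model on every training split before scoring the test split. Note that because every split reuses the same underlying data, the per-split observations are not independent, which is exactly one of the limitations discussed above.

```python
# Stdlib-only sketch: repeated random splits, per-split MSE, and a
# paired t statistic comparing two (synthetic) quality predictors.
import math
import random
import statistics

random.seed(0)
mos = [random.uniform(1, 5) for _ in range(120)]      # subjective scores
pred_a = [s + random.gauss(0, 0.3) for s in mos]      # predictor A (less noisy)
pred_b = [s + random.gauss(0, 0.5) for s in mos]      # predictor B (noisier)

def mse(pred, truth, idx):
    return sum((pred[i] - truth[i]) ** 2 for i in idx) / len(idx)

def repeated_split_mse(pred, truth, repeats=100, test_frac=0.2):
    # One MSE value per random split, computed on the test portion only.
    scores = []
    n_test = int(len(truth) * test_frac)
    for _ in range(repeats):
        idx = list(range(len(truth)))
        random.shuffle(idx)
        scores.append(mse(pred, truth, idx[:n_test]))
    return scores

mse_a = repeated_split_mse(pred_a, mos)
mse_b = repeated_split_mse(pred_b, mos)

# Paired t statistic on the per-split MSE differences.
diffs = [a - b for a, b in zip(mse_a, mse_b)]
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
print(round(t_stat, 2))
```

With the less-noisy predictor A, the per-split MSE differences are negative on average, so the t statistic is strongly negative; in the conventional workflow this would be compared against a critical value to declare one metric "better".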
The proposed guidelines offer the advantage of focusing on practical use-case scenarios and of quantifying the learning ability of the ML-based quality estimator. Their use therefore helps in drawing more informed conclusions and recommendations about metric performance. In contrast, the existing approach tends to treat ML-based methods as black boxes and focuses primarily on global, binary decisions about metric performance. Software implementing the presented guidelines is also made publicly available, in order to achieve the goal of reproducible research.
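One plausible way to make the focus on learning ability concrete is a learning-curve check: train on progressively larger subsets and observe whether test error decreases. The sketch below (an illustration under assumed synthetic data, not the paper's exact procedure) fits a closed-form least-squares line to an increasing number of training samples and reports the held-out MSE at each size.

```python
# Hedged learning-curve sketch: does the predictor improve as it sees
# more training data? Data are synthetic (linear trend plus noise).
import random

random.seed(1)
data = [(x, 1 + 0.04 * x + random.gauss(0, 0.3))
        for x in (random.uniform(0, 100) for _ in range(300))]
random.shuffle(data)
train, test = data[:200], data[200:]

def fit_line(points):
    # Ordinary least squares for y = a*x + b (closed form).
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def held_out_mse(a, b):
    return sum((a * x + b - y) ** 2 for x, y in test) / len(test)

# Test error at growing training-set sizes; a learner that exploits
# the data structure should, on average, improve with more samples.
curve = [held_out_mse(*fit_line(train[:n])) for n in (10, 50, 200)]
print([round(e, 3) for e in curve])
```

A flat curve would suggest the model is not actually exploiting the data structure, which is the kind of insight the black-box, single-number comparison of the existing approach cannot provide.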
SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS:
• Programming Language : Python
• Front End Technologies : Tkinter / Web (HTML, CSS, JS)
• IDE : Jupyter/Spyder/VS Code
• Operating System : Windows 8/10
HARDWARE REQUIREMENTS:
Processor : Intel Core i3
RAM Capacity : 2 GB
Hard Disk : 250 GB
Monitor : 15″ Color
Mouse : 2 or 3 Button Mouse
Keyboard : Standard Keyboard