Abstract-Inference and estimation under Missing Information (MI) are important topics in statistical learning theory and Machine Learning (ML). In the ML literature, attempts have been made to enhance prediction through precise feature-selection methods. For sparse linear models, LASSO is well known for extracting the desired support of the signal and for its robustness to noise. When a sparse model also suffers from MI, sparse recovery and inference of the missing entries must be handled simultaneously. In this paper, we introduce an approach that combines sparse regression with covariance matrix estimation to improve matrix completion accuracy, thereby sharpening feature selection and reducing the prediction Mean Squared Error (MSE). We compare the estimation accuracy obtained when the covariance matrix is employed in feature selection against the case where it is not. Simulations show improved performance over the case where covariance matrix estimation is not used.
I. INTRODUCTION

Recently, inference and learning from incomplete datasets have gained particular attention, since such problems arise in important practical applications. In [1], practical settings are introduced in which dealing with missing information, and developing a statistical model that can learn from incomplete data, is necessary. It is intuitively clear that learning procedures differ in missing-data scenarios. We give two examples that clarify what dealing with missing data means. Consider an image with missing or lossy segments, so that many pixels are lost or corrupted. In this setting, knowing that the image is low-rank (for instance, few colors were used in painting it) helps restoration. As another illustration, suppose we have many athletes for whom specific records and measurements are gathered. Many of the recorded features may be marked as not reported or not assigned (NA). Our purpose may be to derive the statistical model that determines the weight of each feature in predicting a specific criterion such as the athlete's stamina. Thus, we have a learning problem in a missing-information scenario. Many works in the literature address the matrix completion problem, including but not limited to [2] and [3].

We now add another aspect to the problem. In many settings we are actually solving Compressed Sensing (CS) problems [4], where there is some sparsity pattern in the problem model. We briefly review how CS became popular. In many problems, an underlying sparsity pattern makes it possible to find a unique solution to an underdetermined system, provided sparsity constraints are imposed. Many well-known CS methods have been developed in the literature; they can be classified into three main classes. One class consists of soft-thresholding methods. ...