Big data refers to large data collections that grow in volume at very high velocity and whose variability and complexity are multidimensional. Capturing, managing, processing, and analyzing such data is a complex task. Big data analysis has now entered a new stage, known as rapid data, in which enormous volumes of data, measured in gigabytes, are aggregated efficiently into a targeted big data structure. Current big data architectures accumulate characteristically complex data streams that are described by the 6Vs: volume, velocity, variety, veracity, value, and variability. After big data processing, the resulting database is more useful than noisy, redundant, inconsistent, raw data. A further rationale for reducing big data is that it typically contains a large number of variables, which makes it difficult to discover the required patterns. This paper presents a comprehensive review of the diverse methods applied to big data reduction, and discusses dimension reduction, redundancy elimination, automated learning, data extraction, size or volume reduction, and big data compression.
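As a minimal illustration of one of the reduction techniques discussed, the sketch below applies principal component analysis (a common dimension-reduction method) to a synthetic wide dataset; the library choice (scikit-learn), the data, and the 95% variance threshold are assumptions for illustration and are not taken from the paper.

import numpy as np
from sklearn.decomposition import PCA

# Illustrative sketch only: synthetic "wide" data with many correlated variables.
rng = np.random.default_rng(0)
latent = rng.normal(size=(10_000, 10))           # 10 underlying factors
mixing = rng.normal(size=(10, 200))              # spread across 200 observed columns
X = latent @ mixing + 0.01 * rng.normal(size=(10_000, 200))

# Keep just enough principal components to retain 95% of the variance (assumed threshold).
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(f"Original shape: {X.shape}")              # (10000, 200)
print(f"Reduced shape:  {X_reduced.shape}")      # close to (10000, 10)
print(f"Variance kept:  {pca.explained_variance_ratio_.sum():.3f}")

In this sketch, the 200 observed variables are compressed to roughly the 10 latent factors that generated them, which is the kind of variable reduction that makes subsequent pattern discovery tractable.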