Background. With the acceleration of industrial development driven by "Industry 4.0", structuring and processing the voluminous, heterogeneous data being acquired become considerably more complicated and represent an important scientific and practical problem. Cyber-physical systems, the IoT, sensor networks, robotics, and numerous real-time applications can generate large arrays of unmanaged, weakly structured, and unconfigured data of various types, known as "Big Data"; this problem remains very hard to solve today. Objective. The purpose of the research presented in this paper is to analyze the sources of Big Data, determine their main characteristics, and suggest ways of overcoming the growing dimensionality of Big Data. Methods. In contrast to traditional approaches to the Big Data problem, which rely only on particular empirical approaches and models, the paper proposes to introduce ontologies for describing data groups and to compress data volumes into knowledge, which significantly reduces their volume and improves understanding of their meaning. Results. The effectiveness of the proposed solutions is confirmed by best known practices and by our own case studies aimed at overcoming this well-known complex problem. Conclusions. There is no single universal solution to the "Big Data" problem. The analysis shows that a solution can be found by introducing ontologies and determining the mutual influences and correlations between the data, thus gaining knowledge from huge amounts of data.
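The combination of ontology-based description and compression of raw data into knowledge mentioned in the Methods can be illustrated with a minimal, self-contained sketch. All names here (the `ONTOLOGY` mapping, the sensor identifiers, the `compress` function) are hypothetical illustrations, not part of the paper's actual implementation: a toy ontology maps sensor identifiers to concepts, and each group of raw readings is reduced to a compact statistical summary instead of being stored in full.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical minimal "ontology": each raw data source is mapped to a
# domain concept, so heterogeneous streams can be grouped by meaning.
ONTOLOGY = {
    "thermo-01": "Temperature",
    "thermo-02": "Temperature",
    "hygro-01": "Humidity",
}

def compress(readings):
    """Group raw (sensor_id, value) pairs by ontology concept and
    reduce each group to summary knowledge (count, mean, min, max),
    shrinking the stored volume while keeping its sense."""
    groups = defaultdict(list)
    for sensor_id, value in readings:
        concept = ONTOLOGY.get(sensor_id, "Unknown")
        groups[concept].append(value)
    return {
        concept: {
            "count": len(vals),
            "mean": mean(vals),
            "min": min(vals),
            "max": max(vals),
        }
        for concept, vals in groups.items()
    }

# Example: four raw readings collapse into two concept-level summaries.
raw = [("thermo-01", 21.5), ("thermo-02", 22.1),
       ("hygro-01", 48.0), ("thermo-01", 21.9)]
knowledge = compress(raw)
```

In a realistic setting the dictionary would be replaced by a formal ontology (e.g. expressed in OWL/RDF) and the summaries by richer aggregates, but the principle is the same: data grouped by shared semantics can be condensed into far smaller knowledge structures.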