Building efficient data processing systems requires combining various methods of collecting, storing, and analyzing information. Machine learning algorithms are used to locate the necessary information in large data sets. Because most modern large-scale information systems run on a huge number of computing devices, distributed data processing technologies are considerably more efficient. In particular, distributed machine learning is widely used: devices are trained on local datasets and send only the training results to the global model. This approach improves the reliability and confidentiality of data, since user information never leaves the device on which it is stored. The article also presents an approach to analyzing large volumes of information based on the Singular Value Decomposition (SVD) algorithm. This algorithm both reduces the volume of data by discarding redundancy and supports the prediction of events from patterns identified in the data. The main features of distributed data analysis, the applicability of complex analysis algorithms, and the use of machine learning in such systems are identified. However, Singular Value Decomposition is difficult to implement in a distributed architecture. To make this method efficient in distributed systems, a modified FedSVD algorithm is proposed: user data is still gathered from different devices for decomposition, but additional protection against possible interference or interception is added. The results of the work can be applied to the design of data analysis systems that increase the reliability of the user information they process, including corporate information systems and the financial and IT sectors. The proposed approaches can also serve as a basis for information technologies that automatically provide recommendations to users and predict emergencies at enterprises.
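To illustrate the general idea behind masking-based federated SVD, the sketch below assumes that clients agree on random orthogonal masks before uploading their data; it is not the article's exact modified FedSVD protocol, and names such as make_orthogonal are hypothetical. Because orthogonal masks preserve singular values, an aggregator can factorize data it cannot read, and truncating to the leading singular triplets gives the dimensionality reduction described above.

```python
import numpy as np
from scipy.linalg import block_diag

def make_orthogonal(n, rng):
    """Random orthogonal mask from the QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

rng = np.random.default_rng(0)

# Local datasets: each client holds a block of columns of the global data matrix.
clients = [rng.standard_normal((20, 5)) for _ in range(3)]
m = clients[0].shape[0]
X = np.hstack(clients)                    # full matrix, never transmitted in the clear

# Masks: one shared left mask P and one right mask Q_i per client (all orthogonal).
P = make_orthogonal(m, rng)
Qs = [make_orthogonal(Xi.shape[1], rng) for Xi in clients]

# Each client uploads only its masked block P @ X_i @ Q_i.
masked = np.hstack([P @ Xi @ Qi for Xi, Qi in zip(clients, Qs)])

# The aggregator factorizes the masked matrix; orthogonal masking leaves the
# singular values, and hence the patterns they capture, unchanged.
U_m, S, Vt_m = np.linalg.svd(masked, full_matrices=False)
assert np.allclose(S, np.linalg.svd(X, compute_uv=False))

# Applying the transposed (inverse) masks recovers the factors of the raw data.
U = P.T @ U_m
Vt = Vt_m @ block_diag(*Qs).T
assert np.allclose(U @ np.diag(S) @ Vt, X)

# Keeping only the top-k singular triplets compresses the data and discards redundancy.
k = 2
X_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]
```

In this sketch the masking step stands in for the additional protection against interception mentioned above: the party performing the decomposition only ever sees the masked blocks, while the parties holding the masks can undo them to obtain the usable factors.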