Data analysis is essential to the success of modern businesses: it helps optimize business processes and understand user behavior and demand. Powerful data analytics tools exist, such as those of the Hadoop ecosystem, but they require multiple high-performance servers to run and highly qualified experts to install, configure, and support them. In most cases, small companies and start-ups cannot afford such expenses. However, they can use these tools as web services, on demand, and pay much lower fees per request. To do so, a company must share its data with an existing, already deployed Hadoop cluster. The most common way to upload files to the Hadoop Distributed File System (HDFS) is through the WebHDFS API (Application Programming Interface), which provides remote access to HDFS. The API's throughput is therefore critical to the efficient integration of a company's data into the Hadoop cluster. This paper presents a series of experimental analyses aiming to determine the throughput of the WebHDFS API, to establish whether it is a bottleneck when integrating a company's data into an existing Hadoop infrastructure, and to identify the factors that influence the speed of data transmission between the client's software and the Hadoop file system.
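For illustration, the sketch below shows what such a remote upload looks like from the client side: a minimal Python example of the two-step WebHDFS CREATE operation, in which the NameNode first redirects the client to a DataNode that then receives the file data. The host name, port, HDFS path, and user name are placeholders and would depend on the target cluster (the default WebHDFS port is 9870 on Hadoop 3.x and 50070 on Hadoop 2.x).

```python
import requests

# Hypothetical cluster coordinates -- adjust to the target Hadoop deployment.
NAMENODE = "http://namenode.example.com:9870"
HDFS_PATH = "/user/analytics/input/data.csv"
USER = "hdfs"


def webhdfs_upload(local_file: str) -> None:
    """Upload a local file to HDFS via the two-step WebHDFS CREATE operation."""
    # Step 1: ask the NameNode where to write; it answers with a 307 redirect
    # whose Location header points to the DataNode that will receive the data.
    create_url = (f"{NAMENODE}/webhdfs/v1{HDFS_PATH}"
                  f"?op=CREATE&overwrite=true&user.name={USER}")
    r1 = requests.put(create_url, allow_redirects=False)
    r1.raise_for_status()
    datanode_url = r1.headers["Location"]

    # Step 2: stream the file body to the DataNode address returned above.
    with open(local_file, "rb") as f:
        r2 = requests.put(datanode_url, data=f)
    r2.raise_for_status()  # 201 Created on success


if __name__ == "__main__":
    webhdfs_upload("data.csv")
```

Because every upload involves an HTTP round trip to the NameNode followed by an HTTP transfer of the payload to a DataNode, the throughput of exactly this kind of request path is what the experiments in this paper set out to measure.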