In recent years, interest in real-world systems based on deep neural networks (DNNs) has grown significantly. These systems typically run multiple DNNs simultaneously. In this paper we propose a novel approach to multi-DNN execution on a single GPU using multiple CUDA contexts and TensorRT, a state-of-the-art DNN inference framework. We show that this approach enables more efficient scheduling of multiple DNNs, especially when a lightweight DNN and a heavy DNN are inferred together: compared to the baseline, it provides an almost 7x increase in the throughput of the lightweight DNN at the cost of a negligible throughput drop for the heavy DNN. Moreover, we compare two ways of improving the throughput of a single DNN by processing multiple images together: standard batching and implicit batching, in which multiple images are processed simultaneously using several TensorRT execution contexts. We show that while standard batching outperforms implicit batching at larger batch sizes, implicit batching can provide up to 43% higher throughput for a smaller DNN at smaller batch sizes.
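
To make the implicit-batching idea concrete, the snippet below is a minimal sketch (not the paper's actual code) that builds several TensorRT execution contexts from a single engine and enqueues one single-image inference per context on its own CUDA stream, letting the GPU overlap them. The engine path "model.plan", the context count, and the buffer-allocation helper are illustrative assumptions; the API calls follow TensorRT's legacy binding-based Python interface (deserialize_cuda_engine, create_execution_context, execute_async_v2) and assume a static-shape engine.

```python
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- initializes a default CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a prebuilt engine; "model.plan" is a placeholder path.
with open("model.plan", "rb") as f:
    engine = trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())

NUM_CONTEXTS = 4  # illustrative degree of overlap, not a value from the paper

# Several execution contexts share one engine; each gets its own stream
# and its own device buffers so the single-image inferences are independent.
contexts = [engine.create_execution_context() for _ in range(NUM_CONTEXTS)]
streams = [cuda.Stream() for _ in range(NUM_CONTEXTS)]

def allocate_bindings():
    """Allocate one device buffer per engine binding (assumes static shapes)."""
    bufs = []
    for i in range(engine.num_bindings):
        dtype = trt.nptype(engine.get_binding_dtype(i))
        nbytes = trt.volume(engine.get_binding_shape(i)) * np.dtype(dtype).itemsize
        bufs.append(cuda.mem_alloc(nbytes))
    return bufs

bindings = [allocate_bindings() for _ in range(NUM_CONTEXTS)]

# Enqueue one inference per context; the GPU may interleave their kernels,
# which is the "implicit batching" effect compared against standard batching.
for ctx, stream, bufs in zip(contexts, streams, bindings):
    ctx.execute_async_v2([int(b) for b in bufs], stream.handle)
for stream in streams:
    stream.synchronize()
```

In this sketch all contexts live in one CUDA context; the paper's multi-DNN scheduling additionally places DNNs in separate CUDA contexts, which the same stream-per-context pattern extends to.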