LIDAR sensors produce depth measurements that are sparse compared with camera images. Current state-of-the-art methods for densifying LIDAR-derived depth maps rely on training models for a specific input measurement density, an assumption that is easily violated in practice. The goal of this work was to develop a solution that produces reasonably accurate depth predictions over a very wide range of input depth densities. To that end, we define the WeaveBlock, a module that efficiently propagates depth information by combining long, narrow horizontal and vertical convolution kernels with MobileNet-inspired pointwise convolutions serving as computational kernels. In this paper, we present the WeaveNet architecture for guided (LIDAR and camera) and unguided (LIDAR-only) depth completion, together with a non-standard network training procedure. We report results on the KITTI test and validation sets, and we analyze network performance at various levels of input sparsity by randomly removing between 0% and 99% of the LIDAR points from the network inputs; in each case, the network produces reasonable-quality output. Additionally, we show that the trained network weights can easily be reused with a different LIDAR sensor.
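The directional-propagation idea behind a WeaveBlock can be illustrated with a minimal numpy sketch. The function below spreads sparse depth values along rows and columns with long, narrow averaging windows (1 x k horizontal, k x 1 vertical) normalized by a validity mask, then fuses the two directions per pixel. The kernel size, the mask-based normalization, and the equal-weight fusion are illustrative assumptions, not the paper's actual learned parameters; in the real network these would be trained convolutions.

```python
import numpy as np


def directional_avg(depth, valid, k, axis):
    """Average valid depth values over a length-k window along one axis.

    Uses zero padding and normalizes by the number of valid (nonzero)
    measurements inside each window, so sparse points propagate without
    being diluted by empty pixels.
    """
    pad = k // 2
    pad_width = [(0, 0), (0, 0)]
    pad_width[axis] = (pad, pad)
    v = np.pad(depth * valid, pad_width)
    m = np.pad(valid, pad_width)
    num = np.zeros_like(depth)
    den = np.zeros_like(depth)
    for off in range(k):
        sl = [slice(None), slice(None)]
        sl[axis] = slice(off, off + depth.shape[axis])
        num += v[tuple(sl)]
        den += m[tuple(sl)]
    # Pixels whose window contains no measurement stay at zero.
    return np.where(den > 0, num / np.maximum(den, 1.0), 0.0)


def weave_block(depth, k=9):
    """Hypothetical single propagation step in the spirit of a WeaveBlock.

    Runs a horizontal (1 x k) and a vertical (k x 1) masked average over a
    sparse depth map, then combines them with a fixed pointwise (per-pixel)
    fusion. Measured pixels are passed through unchanged.
    """
    valid = (depth > 0).astype(float)
    h = directional_avg(depth, valid, k, axis=1)  # propagate along rows
    v = directional_avg(depth, valid, k, axis=0)  # propagate along columns
    # Pointwise fusion of the two directions; a learned 1x1 convolution
    # would take this role in the actual architecture.
    return np.where(valid > 0, depth, 0.5 * h + 0.5 * v)
```

Stacking several such blocks lets depth information "weave" outward alternately along rows and columns, which is why the narrow directional kernels can cover large image regions cheaply.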