Since its release, Kinect has been the de facto standard for low-cost RGB-D sensors. An infrared laser beam shone through a holographic diffraction grating projects a fixed dot pattern, which is captured by an infrared camera. The pseudorandom pattern ensures that a simple block matching algorithm suffices to provide reliable depth estimates, allowing a cost-effective implementation. In this paper, we analyze the software limitations of Kinect's method, which allows us to propose algorithms that provide better precision. First, we analyze the dot pattern: we measure its pincushion distortion and its effect on the dot density, which decreases towards the edges of the image. Then, we analyze the behavior of block matching algorithms: we show that Kinect's block matching implementation is, in general, limited by the dot density of the pattern and that a significant spatial bias is introduced as a result. Finally, we propose an efficient approach to estimate the disparity of each individual dot, allowing us to produce a point cloud with better spatial resolution than block matching algorithms.
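
To make the baseline concrete, the following is a minimal sketch of 1-D block matching between a captured infrared image and the reference dot pattern. It is an illustration only, not Kinect's actual implementation: the sum-of-absolute-differences (SAD) cost, the block size, and the search range are assumptions chosen for readability.

```python
import numpy as np

def block_matching_disparity(captured, reference, block=9, max_disp=64):
    """Integer disparity per pixel via sum-of-absolute-differences (SAD).

    `captured` and `reference` are rectified single-channel images of
    equal shape; disparity is searched along rows, i.e. the epipolar
    direction. Block size and search range are illustrative values.
    """
    h, w = captured.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = captured[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            best_cost, best_d = np.inf, 0
            # Only test shifts that keep the reference window inside the image.
            for d in range(min(max_disp, x - r) + 1):
                ref = reference[y - r:y + r + 1,
                                x - d - r:x - d + r + 1].astype(np.int32)
                cost = np.abs(patch - ref).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Because each output value is the disparity that best aligns a whole window rather than a single dot, the achievable spatial resolution of such an estimator is tied to the density of dots inside the window, which motivates the per-dot disparity estimation proposed in this paper.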