A novel approach for inferring depth measurements via multispectral active depth from defocus and deep learning has been designed, implemented, and successfully tested. The scene is actively illuminated with a multispectral quasi-random point pattern, and a conventional RGB camera is used to acquire images of the projected pattern. The projection points in the captured image of the projected pattern are analyzed using an ensemble of deep neural networks to estimate the depth at each projection point. A final depth map is then reconstructed algorithmically from the point depth estimates. Experiments on test scenes with different structural characteristics show that the proposed approach can produce improved depth maps compared to prior deep learning approaches that use monospectral projection patterns.
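The pipeline described above (per-point ensemble depth estimation followed by algorithmic depth-map reconstruction) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `networks` here are stand-in callables rather than trained deep neural networks, and nearest-point interpolation is only a placeholder for the reconstruction algorithm; all function names are hypothetical.

```python
import numpy as np

def ensemble_depth_estimate(patches, networks):
    """Average the depth predictions of an ensemble over point patches.

    patches: (n_points, h, w, 3) RGB patches around detected projection points.
    networks: list of callables mapping patches -> (n_points,) depth estimates
    (stand-ins for the trained deep neural networks in the actual approach).
    """
    preds = np.stack([net(patches) for net in networks])  # (n_nets, n_points)
    return preds.mean(axis=0)                             # (n_points,)

def reconstruct_depth_map(points, depths, shape):
    """Fill a dense depth map from sparse point depths by nearest-point
    lookup (a simple placeholder for the paper's reconstruction step)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return depths[d2.argmin(axis=1)].reshape(shape)

# Toy usage with random patches and two stand-in "networks".
rng = np.random.default_rng(0)
points = rng.integers(0, 32, size=(10, 2)).astype(float)
patches = rng.random((10, 8, 8, 3))
networks = [lambda p, b=b: p.mean(axis=(1, 2, 3)) + b for b in (0.0, 0.1)]

depths = ensemble_depth_estimate(patches, networks)   # shape (10,)
depth_map = reconstruct_depth_map(points, depths, (32, 32))
print(depth_map.shape)  # (32, 32)
```

The ensemble average is the simplest way to fuse the per-network estimates; other fusion rules (e.g., a learned combiner) would drop into `ensemble_depth_estimate` without changing the rest of the sketch.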