Wheezing is one of the most prominent symptoms of a pulmonary attack, and wheeze detection has therefore attracted considerable attention in recent years. However, there is still no reliable method that can automatically detect wheezing events during each respiration phase in the presence of concurrent sounds such as coughing, throat clearing, and nasal breathing. In this paper, we develop a model called WheezeD which, to the best of our knowledge, represents the first step towards a computational model for respiration phase-based wheeze detection. WheezeD has two components: first, we develop an algorithm to detect respiration phases from audio data; we then transform the audio into a 2-D spectro-temporal image and develop a convolutional neural network (CNN) based wheeze detection model. We evaluate the model's performance and compare it to conventional approaches. Experiments on a public dataset show that our model identifies wheezing events with an accuracy of 96.99%, specificity of 97.96%, and sensitivity of 96.08%, an improvement of over 10% compared to the best accuracy reported in the literature using traditional machine learning models on the same dataset. Moreover, we discuss how WheezeD may be used towards assessing and computing patient severity.
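
The following is a minimal sketch, in Python, of the spectrogram-plus-CNN pipeline outlined above: a respiration-phase audio segment is converted into a 2-D spectro-temporal image and passed to a small binary CNN classifier. The sample rate, FFT settings, network architecture, and file name are assumptions for illustration only; they are not the WheezeD implementation or its hyperparameters.

```python
# Illustrative sketch of a spectrogram + CNN wheeze classifier.
# All hyperparameters below are assumed, not taken from the paper.
import numpy as np
import librosa
import torch
import torch.nn as nn


def audio_to_spectrogram(path, sr=4000, n_fft=256, hop_length=64):
    """Load a respiration-phase audio segment and convert it to a
    2-D log-mel spectro-temporal image (assumed parameters)."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length)
    return librosa.power_to_db(mel, ref=np.max)


class WheezeCNN(nn.Module):
    """Small binary CNN (wheeze vs. non-wheeze) over spectrogram images."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):                 # x: (batch, 1, mel_bins, time_frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)         # logits for [non-wheeze, wheeze]


if __name__ == "__main__":
    spec = audio_to_spectrogram("breath_cycle.wav")           # hypothetical file
    x = torch.tensor(spec, dtype=torch.float32)[None, None]   # add batch/channel dims
    logits = WheezeCNN()(x)
    print("predicted class:", logits.argmax(dim=1).item())
```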