Lung opacities are critical findings for physicians to monitor, and misdiagnosing them or confusing them with other findings can have irreversible consequences for patients. Physicians therefore recommend long-term monitoring of lung opacity regions. Tracking the dimensions of these regions in images and distinguishing them from other lung conditions can substantially ease physicians' workload. Deep learning methods are well suited to the detection, classification, and segmentation of lung opacity. In this study, a three-channel fusion CNN model is applied to detect lung opacity effectively on a balanced dataset compiled from public datasets. The MobileNetV2 architecture is used in the first channel, the InceptionV3 model in the second, and the VGG19 architecture in the third. ResNet-style residual connections are used to transfer features from the previous layer to the current layer. Besides being easy to implement, the proposed approach can provide significant cost and time savings for physicians. On the newly compiled dataset, the accuracy values for the two-, three-, four-, and five-class lung opacity classification tasks are 92.52%, 92.44%, 87.12%, and 91.71%, respectively.
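
To make the described architecture concrete, the sketch below shows one way such a three-channel fusion classifier could be assembled in Keras. It is a minimal illustration, not the authors' exact implementation: the input size (224×224), ImageNet-pretrained frozen backbones, global average pooling, concatenation-based fusion, the placement of the ResNet-style residual block, and the four-class output head are all assumptions made for the example.

```python
# Hypothetical sketch of a three-branch fusion classifier in Keras.
# Backbones follow the abstract (MobileNetV2, InceptionV3, VGG19);
# all fusion details are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, InceptionV3, VGG19

NUM_CLASSES = 4              # e.g. the four-class setting reported in the abstract
INPUT_SHAPE = (224, 224, 3)  # assumed input resolution

def build_fusion_model(num_classes=NUM_CLASSES, input_shape=INPUT_SHAPE):
    inputs = layers.Input(shape=input_shape)

    # Three parallel "channels", each an ImageNet-pretrained backbone.
    backbones = [
        MobileNetV2(include_top=False, weights="imagenet", input_shape=input_shape),
        InceptionV3(include_top=False, weights="imagenet", input_shape=input_shape),
        VGG19(include_top=False, weights="imagenet", input_shape=input_shape),
    ]

    branch_features = []
    for backbone in backbones:
        backbone.trainable = False                 # transfer learning: freeze backbone
        x = backbone(inputs)                       # the same image feeds every branch
        x = layers.GlobalAveragePooling2D()(x)     # collapse spatial dimensions
        x = layers.Dense(256, activation="relu")(x)
        branch_features.append(x)

    # Fuse the three branches by concatenation (assumed fusion strategy).
    fused = layers.Concatenate()(branch_features)

    # ResNet-style residual block over the fused features, loosely mirroring the
    # "feature transfer from the previous layer" described in the abstract.
    shortcut = layers.Dense(256)(fused)
    x = layers.Dense(256, activation="relu")(fused)
    x = layers.Dense(256)(x)
    x = layers.Add()([x, shortcut])
    x = layers.Activation("relu")(x)

    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs, outputs, name="three_channel_fusion_cnn")

model = build_fusion_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In this sketch, concatenation merges the branch-level feature vectors and the residual shortcut is added over the fused representation; other fusion points (e.g. feature-map level fusion before pooling) would be equally consistent with the abstract's high-level description.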