Point clouds are nowadays one of the main data sources for describing our environment. Recently, deep architectures have been proposed as a key step in understanding and retrieving semantic information from such data. Despite the great contribution of deep learning in this field, the explainability of these models for 3D data remains largely unexplored. Explainability, identified as a potential weakness of Deep Neural Networks (DNNs), can help researchers counter skepticism, considering that these models are far from being self-explanatory. Although the literature provides many examples of the exploitation of Explainable Artificial Intelligence (XAI) approaches with 2D data, only a few studies have investigated them for 3D DNNs. To overcome these limitations, we propose BubblEX, a novel multimodal fusion framework to learn 3D point features. The BubblEX framework comprises two stages: a "Visualization Module", which visualises the features learned by the network in its hidden layers, and an "Interpretability Module", which describes how neighbouring points are involved in feature extraction. For our experiments, we used Dynamic Graph CNN (DGCNN) trained on the ModelNet40 dataset. The developed framework extends a method for obtaining saliency maps from image data to 3D point cloud data, allowing the analysis, comparison, and contrasting of multiple features. Moreover, it enables the generation of visual explanations from any DNN-based network for 3D point cloud classification without requiring architectural changes or re-training. Our findings will be valuable for both scientists and non-experts in understanding and improving future AI-based models.