2021
DOI: 10.3390/asi4010003
Gas Detection and Identification Using Multimodal Artificial Intelligence Based Sensor Fusion

Abstract: With rapid industrialization and technological advancement, innovative engineering technologies that are cost-effective, faster, and easier to implement are essential. One area of concern is the rising number of accidents caused by gas leaks at coal mines, chemical industries, home appliances, etc. In this paper we propose a novel approach to detect and identify gaseous emissions using multimodal AI fusion techniques. Most gases and their fumes are colorless, odorless, and tastel…

Cited by 53 publications (24 citation statements)
References: 29 publications
“…CNNs are widely used in complex visual recognition tasks such as action and activity recognition [41], anomaly detection and recognition [42, 43], classification [44, 45], object detection [46], and a variety of other recognition, video summarization, and segmentation tasks [41-49]. The CNN architecture consists of convolutional layers (CL), pooling layers, and fully connected layers.…”
Section: The Proposed Methodology
confidence: 99%
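The layer stack named in that statement (convolutional layers, pooling layers, fully connected layers) can be illustrated with a minimal, hypothetical PyTorch sketch. The input resolution, channel counts, and number of gas classes below are assumptions for illustration, not values taken from the cited paper.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Minimal CNN: convolutional layers (CL) -> pooling layers -> fully connected layers."""
    def __init__(self, num_classes=4):  # assumed number of gas classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),                  # fully connected layers
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 8 RGB images at an assumed 64x64 resolution
logits = SimpleCNN()(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4])
```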
“…In [5], Gros Dut presents an in-depth analysis of the logic underpinning data fusion and discusses data fusion and multisensor integration approaches. Narkhede et al. [6] propose a method to detect gaseous emissions using multimodal data collected from gas sensors and thermal cameras. The fused model achieved 96% accuracy on the test set, compared with 82% for an LSTM applied to the sensor data alone and 93% for a CNN applied to the camera images alone.…”
Section: Introduction
confidence: 99%
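As a rough illustration of the fusion idea attributed to Narkhede et al. [6], an LSTM branch for the gas-sensor time series and a CNN branch for the thermal-camera images merged before classification, the following hypothetical PyTorch sketch shows one possible late-fusion layout. The sensor count, sequence length, hidden sizes, and class count are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    """Hypothetical late fusion: LSTM over sensor sequences + CNN over thermal images."""
    def __init__(self, num_sensors=7, num_classes=4):  # assumed sensor count / gas classes
        super().__init__()
        # LSTM branch for the gas-sensor time series
        self.lstm = nn.LSTM(input_size=num_sensors, hidden_size=32, batch_first=True)
        # CNN branch for single-channel thermal-camera frames
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Fusion head: concatenate both embeddings, then classify
        self.head = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, sensor_seq, thermal_img):
        _, (h_n, _) = self.lstm(sensor_seq)              # last hidden state: (1, batch, 32)
        fused = torch.cat([h_n[-1], self.cnn(thermal_img)], dim=1)
        return self.head(fused)

# Example: 8 samples with 20-step, 7-sensor sequences and 64x64 thermal frames (assumed shapes)
net = MultimodalFusionNet()
out = net(torch.randn(8, 20, 7), torch.randn(8, 1, 64, 64))
print(out.shape)  # torch.Size([8, 4])
```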
“…Surpassing human-level performance propelled research in applications where different modalities, spanning language, vision, sensory data, and text, play an essential role in accurate prediction and identification [45]. Several state-of-the-art multimodal fusion approaches employing deep learning models have been proposed in the literature, such as those presented by F. Ramzan et al. [24], A. Zlatintsi et al. [25], M. Dhouib and S. Masmoudi [26], Y. D. Zhang et al. [27], C. Devaguptapu et al. [28], and P. Narkhede et al. [46]. The purpose of these approaches is to enhance multimodal fusion so that objects can be efficiently detected in static images or video sequences, preferably using deep learning libraries.…”
Section: Theoretical Background
confidence: 99%