2023
DOI: 10.1117/1.jei.32.2.023039
Employing a hybrid model based on texture-biased convolutional neural networks and edge-biased vision transformers for anomaly detection of signal bonds

Abstract: The railway system of Japan plays a vital role in the national transportation network. A key issue in public transport safety is anomaly detection in railways. Lately, developing robust algorithms and methods for anomaly detection has become the premier task in this field. Recently introduced approaches based on convolutional neural networks, generative adversarial networks, and vision transformers (ViTs) have remarkably improved the research in anomaly detection. Our work proposes a high-performance module for…

Cited by 1 publication (1 citation statement)
References 26 publications
“…Transformers have become the foundation for many advanced language models, such as BERT, ChatGPT [23], and T5, and have significantly advanced the capabilities of language understanding and generation systems. Vision transformers (ViTs) [24] are an adaptation of the classical transformer architecture that applies self-attention mechanisms to process image data [25], making them an exceptionally powerful model for computer vision tasks and showcasing the extension of transformers' effectiveness beyond NLP. Figure 1 shows the relationship between AI, ML, DL, and Transformers.…”
mentioning
confidence: 99%
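The mechanism the citing work describes — a ViT splitting an image into patches, embedding them as tokens, and applying self-attention — can be sketched minimally in NumPy. This is an illustrative sketch only, not the paper's model: the image size, patch size, embedding width, and random weights below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 32x32 single-channel image split into 8x8 patches.
img = rng.standard_normal((32, 32))
patch = 8
d_model = 16

# 1. Patch embedding: flatten each patch, project to d_model dimensions.
grid = 32 // patch                                     # 4 patches per side
patches = img.reshape(grid, patch, grid, patch).transpose(0, 2, 1, 3)
patches = patches.reshape(-1, patch * patch)           # (16 patches, 64 pixels)
W_embed = rng.standard_normal((patch * patch, d_model)) * 0.02
tokens = patches @ W_embed                             # (16, d_model) token sequence

# 2. Single-head self-attention over the patch tokens.
W_q = rng.standard_normal((d_model, d_model)) * 0.02
W_k = rng.standard_normal((d_model, d_model)) * 0.02
W_v = rng.standard_normal((d_model, d_model)) * 0.02
Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
scores = Q @ K.T / np.sqrt(d_model)                    # scaled dot-product scores
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)         # softmax: rows sum to 1
out = weights @ V                                      # attended patch representations

print(tokens.shape, out.shape)                         # (16, 16) (16, 16)
```

Each patch token attends to every other patch, which is what lets a ViT mix information globally across the image in a single layer, in contrast to a CNN's local receptive fields.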