2022 IEEE 38th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde53745.2022.00269

Spatial-Temporal Hypergraph Self-Supervised Learning for Crime Prediction

Cited by 34 publications (25 citation statements)
References 46 publications
“…Contisciani et al (2022) proposed a statistical inference method named Hypergraph-MT to identify communities with higher-order interactions and infer missing hyperedges. A spatiotemporal self-supervised hypergraph learning method was proposed for city crime prediction (Li et al, 2022), which enables crime data enhancement and city-wide crime characterization. Since hypergraph learning has produced excellent outcomes for modeling higher-order relationships in data, scholars have constructed variant models based on the hypergraph structure for traffic prediction (Wang and Zhu, 2022).…”
Section: Hypergraph Learning
confidence: 99%
“…From a temporal perspective, we reach conclusions similar to those in Section 3.2: the occurrence of traffic accidents is cyclical and depends on longer time scales. To address these issues, we employ a hypergraph learning architecture inspired by (Li et al, 2022), which consists of a long temporal encoder and a regional hypergraph spatial model for capturing cross-regional dependencies at a global scale.…”
Section: Global Dynamic Hypergraph Network
confidence: 99%
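The regional hypergraph spatial model described in the excerpt above relies on hypergraph message passing, where hyperedges connect groups of regions rather than pairs. The sketch below illustrates the generic node-to-hyperedge-to-node propagation scheme (in the style of HGNN); it is a minimal illustration, not the cited paper's exact layer, and the incidence matrix, feature sizes, and projection `W` are hypothetical.

```python
import numpy as np

def hypergraph_conv(X, H, W):
    """One round of node -> hyperedge -> node message passing.

    X: (N, d) region features; H: (N, E) binary incidence matrix
    (H[i, e] = 1 if region i belongs to hyperedge e); W: (d, d_out)
    projection. A generic HGNN-style sketch, not the paper's layer.
    """
    De = H.sum(axis=0, keepdims=True)            # (1, E) hyperedge degrees
    E_feat = (H.T @ X) / np.maximum(De.T, 1.0)   # mean of member regions
    Dv = H.sum(axis=1, keepdims=True)            # (N, 1) region degrees
    X_new = (H @ E_feat) / np.maximum(Dv, 1.0)   # mean of incident hyperedges
    return np.tanh(X_new @ W)                    # nonlinear projection

# Toy example: 4 regions, 2 hyperedges grouping regions {0,1,2} and {2,3}.
rng = np.random.default_rng(0)
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
X = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))
out = hypergraph_conv(X, H, W)
print(out.shape)  # (4, 8)
```

Because each hyperedge aggregates all of its member regions in one step, information flows between any two regions sharing a hyperedge regardless of geographic distance, which is what enables the cross-regional global dependencies the excerpt refers to.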
“…Such diverse spatio-temporal data drives the need for effective spatio-temporal prediction frameworks for various urban sensing applications, such as traffic analysis [28], human mobility behavior modeling [15], and citywide crime prediction [8]. For instance, motivated by the opportunities of building machine learning and big data driven intelligent cities, the discovered human trajectory patterns can help to formulate better urban planning mechanisms [3], or understanding the dynamics of crime occurrences is useful for reducing crime rate [12,26].…”
Section: Introduction
confidence: 99%
“…The self-attention mechanism has also been employed and shown to be effective in modeling spatio-temporal dependency [9,57,58]. On the other hand, in the context of crime prediction, recurrent attentive networks are utilized to model complicated spatio-temporal crime patterns [15], while Hypergraph Neural Networks [47] and Self-Supervised Learning [26] have been employed to learn global spatio-temporal dependencies and address specific challenges in learning crime patterns.…”
Section: Introduction
confidence: 99%
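The self-attention mechanism mentioned in the excerpt above weights every time step against every other, letting the model pick up periodic or long-range temporal dependencies. Below is a minimal single-head scaled dot-product sketch; the sequence length, feature size, and omission of learned query/key/value projections are simplifying assumptions for illustration.

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention over a sequence.

    X: (T, d) features for one region across T time steps. Learned
    query/key/value projections are omitted for brevity, so each row
    of the output is a convex combination of the input time steps.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # (T, T) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)              # row-wise softmax weights
    return A @ X                                   # attention-weighted mix

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))   # 6 time steps, 4-dim crime features
out = self_attention(X)
print(out.shape)  # (6, 4)
```

Unlike a recurrent encoder, which passes information step by step, this formulation connects any two time steps directly, which is why attention is often preferred for the long-scale cyclical patterns these excerpts describe.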
“…In addition to the limited number of samples, missing-data problems often occur in real-world spatiotemporal applications for various reasons, such as sensor failure in traffic scenarios and data privacy in epidemic forecasting. ii) Data Sparsity is another issue in some spatio-temporal forecasting tasks, such as crime prediction [26] and epidemic forecasting [42]. In these cases, the data of each fine-grained region or sensor can be sparse along the temporal dimension compared to the whole urban space.…”
Section: Introduction
confidence: 99%