Proceedings of the 14th ACM International Conference on Web Search and Data Mining 2021
DOI: 10.1145/3437963.3441704
FinSense: An Assistant System for Financial Journalists and Investors

Cited by 8 publications (3 citation statements)
References 8 publications
“…In our previous work (Liou et al., 2021), we apply the proposed task to accelerate the journalists' working process and show that the extracted entities can be useful for downstream tasks such as news aggregation and stock movement prediction. In the future, we plan to apply the proposed approach to datasets with both graphical knowledge and textual content.…”
Section: Discussion
Confidence: 99%
“…A homogeneous graph is a graph with one type of node and one type of edge. For instance, Liou et al. (2021) construct a financial news co-occurrence graph in which two companies are connected if they are tagged in the same news article. Li et al. (2020b) build a transaction network in which nodes are accounts, connected when transactions exist between them.…”
Section: Homogeneous Graph
Confidence: 99%
“…By taking context into account, language models achieve state-of-the-art performance on many NLP tasks and are widely used in the literature. For instance, Liou et al. (2021) use the bidirectional encoder representations from transformers (BERT) model (Devlin et al., 2019) to encode an entire news article and generate a news embedding. Without modifying the model architecture, the pretrained BERT model can be fine-tuned to produce state-of-the-art performance.…”
Section: Language Model
Confidence: 99%
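The BERT-based article encoding mentioned above can be sketched with the Hugging Face `transformers` library. This is an assumed pipeline for illustration, not the FinSense implementation: the model name `bert-base-uncased` and the choice of the [CLS] token as the pooled article embedding are assumptions.

```python
import torch

def cls_embedding(last_hidden_state):
    # Pool by taking the final hidden state of the [CLS] token
    # (position 0) as the whole-article embedding.
    return last_hidden_state[:, 0, :]

def embed_article(text, model_name="bert-base-uncased"):
    """Encode a news article into a single embedding vector.

    Assumes the Hugging Face transformers library; the model name is a
    placeholder for whichever pretrained checkpoint is used.
    """
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    # BERT accepts at most 512 subword tokens, so long articles are truncated.
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return cls_embedding(outputs.last_hidden_state)  # shape (1, hidden_size)
```

Fine-tuning for a downstream task would keep this encoder unchanged and train a small task head on top of the pooled embedding, which is the "without modifying the model architecture" property the statement refers to.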